Job description
Holycode/Localsearch is searching for an AI Ops Engineer (GCP and Vertex AI)
Localsearch is the leading marketing and advertising partner for Swiss SMEs. They help their customers succeed in the digital world.
They own two of the most popular platforms in Switzerland, local.ch and search.ch, which are used by millions of users every month.
Localsearch supports Swiss businesses in gaining customers and ensuring success in the digital world – thanks to professional website creation, online advertising materials, online appointment booking, and customer management systems, as well as a comprehensive online presence on all of the key platforms in Switzerland.
Within our Data & Technology department, we are looking for a new colleague.
About the Role & The Team
At Localsearch, we are building a state-of-the-art AI team to drive business innovation. Our team is structured for success: AI Solution Engineers partner with the business to design the “what” and the “why,” and as our AI Ops Engineer, you will be the expert who masters the “how.”
You are the bridge between the architectural blueprint and a live, breathing AI system. This is a hands-on engineering role focused on building, deploying, and, most importantly, owning the operational lifecycle of our AI solutions on Google Cloud Platform (GCP). You won’t just build a model and hand it off; you will build automated systems and ensure they run reliably, efficiently, and at scale.
If you are a builder at heart who gets satisfaction from creating robust, high-quality systems and seeing them perform in the real world, this role is for you.
What You’ll Do (Key Responsibilities)
Build & Implement AI Solutions:
Translate Blueprints into Reality: Take the solutions and model architectures designed by the AI Solution Engineer and lead their hands-on implementation.
Business-Driven Data & Feature Engineering: Partner directly with non-technical business stakeholders to deeply understand the core processes that generate our data. Collaborate on the definition and creation of robust data models (facts, dimensions) that serve as the foundation for our AI systems.
Code & Develop Models: Write clean, production-grade Python code to build, train, and fine-tune models, translating business logic and domain knowledge into powerful, predictive features.
Collaborative Model Tuning & Refinement: Work in a tight feedback loop with business users to refine model outputs. You will ensure the model’s results are not only statistically accurate but also intuitive and actionable for the people using them.
Own the Operational Lifecycle (MLOps):
Master of MLOps: This is your primary domain of ownership. You will design, build, and maintain our entire MLOps infrastructure using GCP and Vertex AI.
Automate Everything: Establish and manage the CI/CD pipelines for our AI models, automating the end-to-end process from code commit to production deployment.
Orchestration Expert: Use tools like Airflow to orchestrate complex model retraining jobs, data refresh cycles, and other essential workflows.
Infrastructure Management: Own the deployment of models as scalable, low-latency API endpoints. You are responsible for monitoring their health, performance, and cost, making continuous optimizations.
Set the Standard: As a foundational member of the team, you will establish and enforce best practices for code quality, model versioning, monitoring, and alerting.
What You’ll Bring (Job Requirements)
Essential Skills & Experience:
Business Acumen & Stakeholder Partnership: You have a genuine curiosity for how the business works and an ability to understand the “why” behind the data. You have proven experience acting as a bridge between technical teams and business departments, skilled at active listening, asking the right questions, and building trust with non-technical colleagues.
Proven Engineering Background: Demonstrable experience in a hands-on role like ML Engineer or AI Engineer where you were responsible for building and shipping AI production systems.
Expert-Level Python: High proficiency in Python, writing clean, maintainable, and testable code for both data processing (e.g., Pandas) and ML development (e.g., Scikit-learn, TensorFlow/PyTorch).
Strong SQL for Large-Scale Data: The ability to write complex, efficient SQL queries to work with large datasets. Experience with Google BigQuery is a major asset.
Cloud Production Experience: Hands-on experience deploying and managing applications or services on a major cloud platform (GCP is strongly preferred; significant AWS/Azure experience is also relevant).
A “Builder’s” Mindset: You are passionate about writing code and building systems. You have a pragmatic approach to engineering and a high bar for quality.
Skills That Will Make You Stand Out (Highly Desirable):
Deep Vertex AI Knowledge: Proven experience using the GCP Vertex AI suite (especially Pipelines, Endpoints, Training, and Model Registry) to build and operate ML systems.
CI/CD & Orchestration Expertise: Direct experience building CI/CD pipelines for ML (e.g., using GitHub Actions, Jenkins) and orchestrating workflows with Apache Airflow.
Infrastructure as Code (IaC): Practical experience with tools like Terraform for managing cloud infrastructure.
Containerisation: A strong working knowledge of Docker and experience with Kubernetes (GKE).
Why You Will Love Working Here:
High-growth company with exciting and trend-setting challenges
Friendly working atmosphere in an open-minded multinational team
All necessary equipment – you decide what you prefer
Flexible hours and remote working possible
A budget for professional development (courses, conferences, books…)
A budget for a multi-benefits platform (meal tickets, private medical insurance, private pension, etc.)
A budget for mastering the English and German languages
Skilled and senior co-workers
Opportunities to learn and grow with us
If that sounds interesting, please submit your CV in English.