Senior AI Infrastructure Engineer

🇺🇸 United States - Remote
🔧 DevOps 🟣 Senior

Job description

Who We Are

TetraScience is the Scientific Data and AI Cloud company. We are catalyzing the Scientific AI revolution by designing and industrializing AI-native scientific data sets, which we bring to life in a growing suite of next gen lab data management solutions, scientific use cases, and AI-enabled outcomes.

TetraScience is the category leader in this vital new market, generating more revenue than all other companies in the aggregate. In the last year alone, the world’s dominant players in compute, cloud, data, and AI infrastructure have converged on TetraScience as the de facto standard, entering into co-innovation and go-to-market partnerships: Latest News and Announcements | TetraScience Newsroom

In connection with your candidacy, you will be asked to carefully review the Tetra Way letter, authored directly by Patrick Grady, our co-founder and CEO. This letter is designed to assist you in better understanding whether TetraScience is the right fit for you from a values and ethos perspective.

It is impossible to overstate the importance of this document and you are encouraged to take it literally and reflect on whether you are aligned with our unique approach to company and team building. If you join us, you will be expected to embody its contents each day.

What You Will Do

We’re looking for a Senior AI Infrastructure Engineer to help design, build, and scale our AI and data infrastructure. In this role, you’ll focus on architecting and maintaining cloud-based MLOps pipelines to enable scalable, reliable, and production-grade AI/ML workflows, working closely with AI engineers, data engineers, and platform teams. Your expertise in building and operating modern cloud-native infrastructure will help enable world-class AI capabilities across the organization.

If you are passionate about building robust AI infrastructure, enabling rapid experimentation, and supporting production-scale AI workloads, we’d love to talk to you.

  • Design, implement, and maintain cloud-native infrastructure to support AI and data workloads, with a focus on AI and data platforms such as Databricks and AWS Bedrock.

  • Build and manage scalable data pipelines to ingest, transform, and serve data for ML and analytics.

  • Develop infrastructure-as-code using tools like CloudFormation and AWS CDK to ensure repeatable and secure deployments.

  • Collaborate with AI engineers, data engineers, and platform teams to improve the performance, reliability, and cost-efficiency of AI models in production.

  • Drive best practices for observability, including monitoring, alerting, and logging for AI platforms.

  • Contribute to the design and evolution of our AI platform to support new ML frameworks, workflows, and data types.

  • Stay current with new tools and technologies to recommend improvements to architecture and operations.

  • Integrate AI models and large language models (LLMs) into production systems, enabling use cases built on architectures such as retrieval-augmented generation (RAG).

What You Bring

  • 7+ years of professional experience in software engineering and infrastructure engineering.

  • Extensive experience building and maintaining AI/ML infrastructure in production, including model deployment and lifecycle management.

  • Strong knowledge of AWS and infrastructure-as-code frameworks, ideally with CDK.

  • Expert-level coding skills in TypeScript and Python, building robust APIs and backend services.

  • Production-level experience with Databricks MLflow, including model registration, versioning, asset bundles, and model serving workflows.

  • Expert-level understanding of containerization (Docker) and hands-on experience with CI/CD pipelines; experience with orchestration tools (e.g., ECS) is a plus.

  • Proven ability to design reliable, secure, and scalable infrastructure for both real-time and batch ML workloads.

  • Ability to articulate ideas clearly, present findings persuasively, and build rapport with clients and team members.

  • Strong collaboration skills and the ability to partner effectively with cross-functional teams.

Nice to Have

  • Familiarity with emerging LLM frameworks such as DSPy for advanced prompt orchestration and programmatic LLM pipelines.
  • Understanding of LLM cost monitoring, latency optimization, and usage analytics in production environments.
  • Knowledge of vector databases / embeddings stores (e.g., OpenSearch) to support semantic search and RAG.

Benefits

  • 100% employer-paid benefits for all eligible employees and immediate family members
  • Unlimited paid time off (PTO)
  • 401(k)
  • Flexible working arrangements - Remote work
  • Company paid Life Insurance, LTD/STD
  • A culture of continuous improvement where you can grow your career and get coaching

We are not currently providing visa sponsorship for this position.
