Staff Data Engineer

πŸ‡ΊπŸ‡Έ United States - Remote
πŸ“Š Data🟣 Senior

Job description

SMASH: Who We Are

We believe in long-lasting relationships with our talent. We invest time in getting to know each person and understanding what they are looking for in their next professional step.

We aim to find the perfect match. As agents, we pair our talent with our US clients based not only on technical skills but also on cultural fit. Our core competency is finding the right talent fast.

This position is remote within the United States. You must have U.S. citizenship or a valid U.S. work permit to apply for this role.

Role summary

You will design and deliver scalable, GCP-native data solutions that power machine learning and analytics initiatives. This role focuses on building high-quality, domain-driven data products and decentralized data infrastructure that enable rapid iteration, measurable outcomes, and long-term value creation.

Responsibilities

  • Design and implement a scalable, GCP-native data strategy aligned with machine learning and analytics initiatives.

  • Build, operate, and evolve reusable data products that deliver compounding business value.

  • Architect and govern squad-owned data storage strategies spanning BigQuery, AlloyDB, operational data stores (ODS), and transactional systems.

  • Develop high-performance data transformations and analytical workflows using Python and SQL.

  • Lead ingestion and streaming strategies using Pub/Sub, Datastream (CDC), and Cloud Dataflow (Apache Beam); see the pipeline sketch after this list.

  • Orchestrate data workflows using Cloud Composer (Airflow) and manage transformations with Dataform.

  • Modernize legacy data assets, moving procedural logic out of operational databases and into analytical platforms.

  • Apply Dataplex capabilities to enforce data governance and quality and to provide lineage and discoverability.

  • Collaborate closely with engineering, product, and data science teams in an iterative, squad-based environment.

  • Drive technical decision-making, resolve ambiguity, and influence data architecture direction.

  • Ensure data solutions are secure, scalable, observable, and aligned with best practices.
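To make the ingestion and streaming responsibility above more concrete, here is a minimal sketch of a streaming Dataflow (Apache Beam) pipeline in Python that reads events from Pub/Sub and appends them to BigQuery. The project, topic, table, and schema names are illustrative assumptions, not SMASH's or the client's actual pipeline.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run():
        # Streaming mode is required for an unbounded Pub/Sub source.
        options = PipelineOptions(streaming=True)
        with beam.Pipeline(options=options) as pipeline:
            (
                pipeline
                # Hypothetical topic; Pub/Sub delivers messages as raw bytes.
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                    topic="projects/example-project/topics/orders")
                | "ParseJson" >> beam.Map(
                    lambda msg: json.loads(msg.decode("utf-8")))
                # Hypothetical table and schema; append rows as they arrive.
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "example-project:analytics.orders",
                    schema="order_id:STRING,amount:NUMERIC,created_at:TIMESTAMP",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
            )

    if __name__ == "__main__":
        run()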

Requirements – Must-haves

  • 8+ years of professional experience in data engineering or a related discipline.

  • Expert-level proficiency in Python and SQL for scalable data transformation and analysis.

  • Deep expertise with Google Cloud Platform data services, especially BigQuery.

  • Hands-on experience with AlloyDB (PostgreSQL) and Cloud SQL (PostgreSQL).

  • Strong understanding of domain-driven data design and data product thinking.

  • Proven experience architecting ingestion pipelines using Pub/Sub and Datastream (CDC).

  • Expertise with Dataform, Cloud Composer (Airflow), and Cloud Dataflow (Apache Beam); a minimal orchestration sketch follows this list.

  • Experience modernizing legacy data systems and optimizing complex SQL/procedural logic.

  • Ability to work independently and lead initiatives with minimal guidance.

  • Strong critical thinking, problem-solving, and decision-making skills.
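As a hedged illustration of the Cloud Composer (Airflow) experience called for above, here is a minimal DAG sketch that schedules a BigQuery transformation job. The DAG id, schedule, dataset, and query are illustrative assumptions, not the client's actual workflow.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import (
        BigQueryInsertJobOperator,
    )

    with DAG(
        dag_id="daily_orders_refresh",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Rebuild a hypothetical derived table with a standard-SQL query job.
        refresh_orders_summary = BigQueryInsertJobOperator(
            task_id="refresh_orders_summary",
            configuration={
                "query": {
                    "query": (
                        "CREATE OR REPLACE TABLE analytics.orders_summary AS "
                        "SELECT customer_id, SUM(amount) AS total_amount "
                        "FROM analytics.orders GROUP BY customer_id"
                    ),
                    "useLegacySql": False,
                }
            },
        )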

Nice-to-haves

  • Experience applying Dataplex for data governance and quality management.

  • Exposure to procedural SQL dialects (T-SQL, PL/pgSQL).

  • Experience supporting machine learning or advanced analytics workloads.

  • Background working in decentralized, squad-based or product-oriented data teams.

  • Experience influencing technical direction across multiple teams or domains.
