Job Description
Who we are
80% of workers across the globe are deskless. These are the people who keep our lights on and gas flowing, build roads and bridges, run our manufacturing plants, ensure we receive healthcare, and provide us with reliable phone and internet connectivity. As entrepreneurs, have we considered solving their problems and making them more productive?
Zinier is a company on a mission to empower frontline workers - and the people supporting them - to achieve greater things for themselves and the world around them. With the majority of workers worldwide being deskless, Zinier recognizes the need for Technology Equity to improve the lives and productivity of these workers who keep the world up and running.
We are a remote-first, global team headquartered in Silicon Valley, with a hybrid workforce across the United States, Canada, Europe, Latin America, Singapore, and Bangalore, India. Our investors include Accel, ICONIQ Capital, Founders Fund, Newfund Capital, NGP Capital, Tiger Global Management, and Qualcomm Ventures LLC.
What we are looking for
Do you get excited about transforming messy, fragmented data into a clean, scalable foundation that entire teams can build on? Are you the kind of engineer who sees a dozen bespoke schemas and thinks “I can normalise this” rather than “that’s just how it is”? Zinier is searching for a hands-on, technically excellent Senior Data Engineer who wants to build the data platform that powers our next chapter of growth.
In this high-impact role, you’ll be the architect and builder behind a medallion data architecture (Bronze → Silver → Gold) on AWS that turns our client deployments—each with its own quirks—into a unified, governed data estate. You’ll design change data capture (CDC) pipelines, build ETL transformations, model star schemas for self-serve BI, and lay the groundwork for AI/ML capabilities.
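To make that concrete, below is a minimal sketch of the kind of Bronze → Silver step involved: a Glue-style PySpark job that collapses raw CDC events into current rows. The bucket paths, table names, and columns here are illustrative assumptions, not Zinier's actual schema.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver_work_orders").getOrCreate()

# Bronze: raw CDC events as landed by DMS, including updates, deletes, and replays.
bronze = spark.read.parquet("s3://example-lake/bronze/work_orders/")

# Keep only the most recent event per primary key, then drop deleted rows.
latest_first = Window.partitionBy("work_order_id").orderBy(F.col("updated_at").desc())
silver = (
    bronze
    .withColumn("_rn", F.row_number().over(latest_first))
    .filter((F.col("_rn") == 1) & (F.col("cdc_op") != "D"))
    .drop("_rn")
)

# Silver: deduplicated rows conformed to the canonical schema.
silver.write.mode("overwrite").partitionBy("tenant_id").parquet(
    "s3://example-lake/silver/work_orders/"
)
```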
This isn’t a maintain-the-dashboard role. You’ll be building from the foundations up—co-designing the canonical ERD with Solutions and Product, migrating legacy schemas, and standing up infrastructure that cuts quarterly business review (QBR) preparation from four weeks to under two days. You’ll need to think like an architect one moment and debug a Glue job the next.
Bring your builder mentality, your first-principles thinking, and your genuine passion for data engineering done right. This is your opportunity to define the data platform for a field service management product used by some of the world’s largest infrastructure operators.
Where you are located
Based in India, ideally Bangalore, and able to collaborate with engineering, data, and product teams across time zones as needed.
What the role offers
- Design and own the canonical data model — co-create the normalised ERD (3NF) with Solutions and Product, defining naming conventions, data types, and relationship standards that all deployments conform to
- Build the medallion data platform (Bronze → Silver → Gold) on AWS using S3, DMS, Glue, Redshift Serverless, and QuickSight — from CDC ingestion through to self-serve dashboards
- Develop and maintain CDC pipelines and ETL jobs that ingest data from Amazon RDS, apply business logic, calculate KPIs, and produce clean, analytics-ready star schemas
- Migrate client schemas to the standard model, starting with a pilot account and scaling across all accounts with row-level tenant isolation. Each client deployment carries its own schema customisations and configuration variations; a core challenge of this role is normalising divergent client data models into the canonical ERD without losing client-specific fidelity
- Enable self-serve BI for Customer Success — build the Gold-layer flat marts and analytics-ready schemas behind QuickSight dashboards, so the CS team can generate QBR materials in under two days without specialist involvement (see the sketch after this list)
- Establish data governance processes including change control for schema modifications, a data dictionary, cataloguing standards via Glue Data Catalog, and audit-ready data lineage
- Lay the data foundation for AI/ML — ensure clean historical data pipelines that support future AI use cases and cross-client benchmarking models
- Collaborate cross-functionally with Product Engineering on schema migration, Solutions on ERD validation and regression testing, and the AI team on model training data requirements
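For a flavour of the Gold-layer work referenced above, here is a minimal sketch of a mart-building job. Every table and column name is a hypothetical stand-in; the one rule it illustrates is that tenant_id stays on every row so row-level isolation can be enforced downstream.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gold_fact_work_orders").getOrCreate()

work_orders = spark.read.parquet("s3://example-lake/silver/work_orders/")
technicians = spark.read.parquet("s3://example-lake/silver/technicians/")

# Fact table: one row per completed work order, carrying the KPIs QBRs report on.
fact = (
    work_orders
    .filter(F.col("status") == "COMPLETED")
    .join(technicians, ["tenant_id", "technician_id"], "left")
    .select(
        "tenant_id",  # kept on every row so row-level isolation holds downstream
        "work_order_id",
        "technician_id",
        "region",
        F.to_date("completed_at").alias("completed_date"),
        (F.col("completed_at").cast("long") - F.col("created_at").cast("long"))
        .alias("resolution_seconds"),
        (F.col("completed_at") <= F.col("sla_due_at")).alias("met_sla"),
    )
)

fact.write.mode("overwrite").partitionBy("tenant_id").parquet(
    "s3://example-lake/gold/fact_work_orders/"
)
```

A flat fact table like this is what lets QuickSight dashboards answer QBR questions directly, without bespoke SQL per client.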
What you’ll bring to the role
- 5–10 years of data engineering experience in a product or SaaS company; proven track record building production data platforms, warehouses, or lake architectures from the ground up
- Domain context: prior experience in field service management (FSM), field operations, utilities, telecom, or industrial SaaS is a plus — you’ll be modelling data for entities like Work Orders, Technician schedules, Assets, and SLA events
- Strong data modelling fundamentals: experienced in designing normalised (3NF) operational schemas and dimensional/star schemas for analytics; comfortable navigating schema divergence across multi-tenant deployments
- Pipeline and orchestration fluency: CDC, ELT/ETL patterns, job orchestration, idempotent pipelines, and schema evolution handling (see the idempotency sketch after this list)
- Strong programming skills in Python, Go, or Java with hands-on experience building and maintaining data pipelines, transformation logic, and automation in production environments
- Deep AWS data stack expertise: hands-on with S3, Glue (ETL + Data Catalog), DMS, Redshift (or Redshift Serverless), Athena, and QuickSight — or comparable experience with equivalent tools and a willingness to ramp quickly
- First-principles problem solver: strong technical judgement with the ability to make pragmatic architectural decisions, balancing long-term scalability with near-term delivery
- Governance and quality mindset: experience implementing data cataloguing, lineage tracking, access controls, and change management processes
- Hustler mentality with engineering craft: resourceful, persistent, and pragmatic; you ship working systems, not just designs. Comfortable navigating ambiguity in a fast-paced environment
- Excellent communication and collaboration: ability to work across Data, Product, Solutions, and AI teams; distil technical complexity into clear decisions for non-technical stakeholders
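On the idempotency point above: one common pattern, sketched here with assumed paths and columns, is a daily load keyed by run date using dynamic partition overwrite, so a retried run replaces exactly its own output instead of appending duplicates.

```python
import sys

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

run_date = sys.argv[1]  # e.g. "2024-06-01", passed in by the orchestrator

spark = (
    SparkSession.builder
    .appName(f"daily_sla_kpis_{run_date}")
    # Overwrite only the partitions this run writes, not the whole table.
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

# Silver-layer SLA events for the run date only (illustrative path and columns).
events = (
    spark.read.parquet("s3://example-lake/silver/sla_events/")
    .filter(F.col("event_date") == run_date)
)

# One KPI row per tenant per day; rerunning the same date simply replaces
# that date's partitions with identical output rather than double-counting.
daily_kpis = events.groupBy("tenant_id", "event_date").agg(
    F.count("*").alias("sla_events"),
    F.avg(F.col("breached").cast("int")).alias("breach_rate"),
)

daily_kpis.write.mode("overwrite").partitionBy("tenant_id", "event_date").parquet(
    "s3://example-lake/gold/daily_sla_kpis/"
)
```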
- Be Hungry. Be Humble. Be Honest. And Hustle.
Own the data platform. Build the foundation. Be the reason our customers and teams thrive.