Data Engineer

💰 $95k-$110k

Job description

Data Engineer I

Who we are

Jellyvision ALEX® is on a mission to improve lives by helping people choose and use their benefits. We are raising the bar for benefits and the employee experience (for our employees and those of the customers we serve) by scaling personalization, compassion, and an earnest intent to be helpful in all that we do.

Jellyvision people are a group of creative problem solvers who use good judgment, give each other honest feedback, engage in real debate, and snack frequently. We are curious, hungry, and humble, because we know this is how we'll continue to make an impact. We're kind, biased towards action, and sweat the details to create great experiences for those we serve.

We are an inclusive, human-first workplace. Respect and trust for each other are foundational, and our equitable total rewards offerings support the lives and holistic well-being of our unique people. At Jellyvision, expect career experiences that challenge you, empower you to have a direct impact on our mission, and enable you to learn, try, and do while having fun along the way.

What's the role?

As a Data Engineer I, you’ll help build and maintain the data infrastructure that enables our teams to personalize benefits for millions of users. This is an early-career role focused on developing strong data engineering fundamentals - building pipelines, designing storage structures, and modeling data that power products, client analytics, and strategic decisions. You’ll work closely with the rest of our data team to deliver reliable, well-structured data across the organization.

What you'll do to be successful

1.) Build and maintain data pipelines and storage

  • Build, test, and deploy Airflow DAGs in MWAA following team standards (see the sketch below)
  • Write clean, production-ready Parquet files using PyArrow or Spark with proper compression and partitioning
  • Implement ETL/ELT pipelines from source → landing → transformed layers
  • Help maintain S3 storage structures following existing partitioning, lifecycle, and access policies
  • Assist with dimensional modeling: turn normalized data into star-schema facts and dimensions under senior guidance
  • Implement basic data quality checks using Python
  • Troubleshoot failing DAGs, rerun backfills, and respond to alerts
  • Participate in on-call support rotation
  • Update and maintain pipeline documentation

KPIs: pipeline reliability, on-time delivery, code quality
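
To make these responsibilities concrete, here is a minimal, hypothetical sketch of the kind of pipeline described above: a daily Airflow DAG that lands rows as partitioned Parquet with PyArrow and runs a basic row-count quality check. All names, paths, and schemas here are invented for illustration; a real pipeline would write to S3 and follow the team's MWAA standards.

```python
from datetime import datetime

import pyarrow as pa
import pyarrow.parquet as pq
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def benefits_events_landing():
    """Hypothetical daily pipeline: extract -> land as partitioned Parquet -> quality check."""

    @task
    def extract() -> list[dict]:
        # Placeholder for pulling rows from a source system (API, database, etc.).
        return [{"user_id": 1, "plan": "PPO", "event_date": "2024-01-01"}]

    @task
    def load_parquet(rows: list[dict]) -> str:
        # Write the rows as a Parquet dataset partitioned by event_date.
        # PyArrow uses Snappy compression by default.
        table = pa.Table.from_pylist(rows)
        root_path = "/tmp/landing/benefits_events"  # hypothetical; in practice an S3 prefix
        pq.write_to_dataset(table, root_path=root_path, partition_cols=["event_date"])
        return root_path

    @task
    def quality_check(root_path: str) -> None:
        # Basic data quality check: fail the task (and trigger alerts) if nothing was written.
        num_rows = pq.ParquetDataset(root_path).read().num_rows
        if num_rows == 0:
            raise ValueError(f"No rows landed at {root_path}")

    quality_check(load_parquet(extract()))


benefits_events_landing()
```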

2.) Learn, collaborate, and help grow our data platform

  • Participate in code reviews to learn best practices and improve code quality
  • Partner with Senior Data Engineers and Analytics Engineers on architecture alignment
  • Work with analytics and product teams to understand and deliver data requirements
  • Document troubleshooting procedures and platform patterns
  • Contribute to sprint planning and technical discussions

KPIs: documentation quality, responsiveness to feedback, stakeholder satisfaction

Experience & skills you'll need

Required:

  • 2+ years of practical experience (professional work, internships, and bootcamps count)
  • Solid Python and working SQL proficiency
  • Familiarity with cloud platforms (AWS/GCP/Azure) and modern data tooling (e.g., Airflow, dbt, Spark, Snowflake, Databricks, Redshift)
  • Understanding of data modeling concepts (e.g., star schema, normalization) and ETL/ELT design practices
  • Experience reading/writing Parquet
  • Ability to write and run basic Airflow DAGs
  • Docker fundamentals
  • Git fundamentals, agile development and CI/CD practices
  • Demonstrated curiosity
  • Genuine tinkerer energy: you've built personal data projects for fun, tried random tools (Polars, DuckDB, Ollama, local LLMs, MotherDuck, etc.), and probably have a messy but awesome docker-compose.yml on your machine (a quick example of this kind of tinkering is sketched after this list)
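
As one hypothetical example of that kind of tinkering, the sketch below uses DuckDB (one of the tools mentioned above) to query a local Parquet dataset straight from Python, no cluster or warehouse required. The path and column names are invented; any Parquet file would do.

```python
# Tiny, hypothetical example of local tinkering: ad-hoc SQL over Parquet with DuckDB.
import duckdb

con = duckdb.connect()  # in-memory database
rows = con.execute(
    """
    SELECT plan, COUNT(*) AS events
    FROM read_parquet('/tmp/landing/benefits_events/**/*.parquet')
    GROUP BY plan
    ORDER BY events DESC
    """
).fetchall()
print(rows)
```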

Nice to Have:

  • Experience with AI coding assistants
  • Exposure to AI tools (MCP servers, Ollama, local LLMs)
  • Knowledge of Delta Lake or Snowflake clustering
  • Basic Terraform experience
  • Simple data-quality tooling

The Details

  • Location: Remote; Chicago-based candidates preferred for occasional in-office days.
  • Starting Salary: $95,000 - $110,000

What Jellyvision will give you

Check out our benefits here!

Jellyvision is committed to continuous evolution and fostering a more diverse and inclusive workplace where everyone is welcomed, valued, and respected. It doesn't matter your race, ethnicity, religion, age, disability, sexual orientation, gender, gender identity/expression, country of origin, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), criminal histories consistent with legal requirements or any other basis protected by law… we just want amazing people who are willing to grow along with us.

Although we have a Chicago-based HQ that employees are welcome to work out of whether they're local or just visiting, this position is also eligible for work by a remote employee out of CA, CO, FL, GA, IL, IN, KY, MI, MN, NC, NY, OH, OR, PA, SC, TN, TX, UT, VA, WA or WI.
