Job Description

About Company

FiftyFive is a global software development and technology consulting company delivering full-cycle digital solutions across industries and geographies. We help businesses across the USA, UK, Australia, MENA, and the Nordics accelerate innovation and drive digital transformation.

Our offerings span:

  • Custom Software & MVP Development
  • Web & Mobile App Development
  • Cloud Engineering (AWS, Azure, GCP)
  • Embedded Software & IoT Solutions
  • UI/UX and 3D Design
  • Software Testing & QA

We specialize in cutting-edge technologies including:

  • AI/ML, Data Engineering, and Blockchain
  • Cybersecurity & DevOps
  • Robotic Process Automation
  • Sustainability-Focused Digital Initiatives

We also support:

  • Legacy Modernization
  • IT Consulting & System Integrations
  • ERP/CRM Implementation (SAP, Microsoft Dynamics, Salesforce)
  • Power BI, OpenAI/ChatGPT, and other third-party integrations

Through flexible remote team extension services, we help businesses build scalable, cost-effective engineering teams that move fast. Our domain expertise spans finance, healthcare, manufacturing, energy, logistics, retail, e-commerce, telecommunications, and more, empowering clients to solve complex challenges, optimize operations, and drive sustainable growth.

Website: https://www.fiftyfivetech.io/

Location: Gurgaon / Indore / Jaipur (Hybrid)

Experience: 5–9 Years

Good to Have

  • Experience with Microsoft Purview or Informatica.

  • Exposure to Power BI or other reporting/visualization tools.

  • Knowledge of Medallion Architecture and Lakehouse concepts.

  • Experience supporting AI/ML data engineering workflows.

  • Familiarity with cloud security and governance best practices.
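Several of these items reference Medallion Architecture, which layers data as bronze (raw), silver (cleansed), and gold (aggregated, reporting-ready). As a loose illustration only, not part of the posting, here is a toy version in plain Python standing in for Delta tables; the column names (`region`, `amount`) are hypothetical:

```python
# Toy Medallion flow: bronze keeps raw rows as-is, silver cleanses them,
# gold aggregates for reporting. In a real Lakehouse these layers would be
# Delta tables in Databricks; here they are plain Python lists and dicts.

def to_silver(bronze_rows):
    """Silver layer: keep only rows with a parseable numeric amount."""
    silver = []
    for row in bronze_rows:
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            continue  # drop malformed rows at the silver boundary
        silver.append({"region": row.get("region", "unknown"), "amount": amount})
    return silver

def to_gold(silver_rows):
    """Gold layer: aggregate amounts per region for BI consumption."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals
```

The point of the layering is that each hop has one job: silver enforces schema and quality, gold shapes data for consumers, and bronze stays untouched so everything downstream can be rebuilt.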

Key Responsibilities

  • Design, develop, and maintain scalable end-to-end data pipelines using Python, SQL, and Azure services.

  • Build and optimize ETL/ELT workflows for large-scale enterprise data processing.

  • Work extensively with Azure Data Factory (ADF), Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.

  • Develop scalable data ingestion frameworks for APIs, relational databases, flat files, and cloud data sources.

  • Perform data transformation, cleansing, validation, and performance optimization using PySpark and Pandas.

  • Implement modern data lake and lakehouse architectures including Medallion Architecture.

  • Write optimized SQL queries, stored procedures, and data validation logic.

  • Monitor, troubleshoot, and improve pipeline performance and data quality.

  • Collaborate with Data Analysts, BI Developers, Architects, and Business Stakeholders.

  • Implement CI/CD pipelines, deployment automation, and version control best practices.

  • Maintain technical documentation, operational runbooks, and data workflow standards.

  • Support cloud migration and enterprise data modernization initiatives.
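Several of the responsibilities above center on cleansing and validating records before they are loaded downstream. As a rough sketch of that kind of stage (not from the posting, using only the Python standard library rather than PySpark, and with hypothetical field names such as `customer_id` and `email`):

```python
# Hypothetical cleanse-and-validate stage of the sort the responsibilities
# describe: trim raw values, check required fields, and route each record
# to either the valid set or a reject set with its error list.

def cleanse(record):
    """Trim whitespace on string fields and normalise empty strings to None."""
    cleaned = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip() or None
        cleaned[key] = value
    return cleaned

def validate(record, required=("customer_id", "email")):
    """Return a list of validation errors; an empty list means the record is valid."""
    errors = [f"missing {field}" for field in required if record.get(field) is None]
    if record.get("email") and "@" not in record["email"]:
        errors.append("malformed email")
    return errors

def run_stage(raw_records):
    """Cleanse every record, then split into (valid, rejected-with-errors)."""
    valid, rejected = [], []
    for raw in raw_records:
        record = cleanse(raw)
        errors = validate(record)
        if errors:
            rejected.append((record, errors))
        else:
            valid.append(record)
    return valid, rejected
```

Keeping rejects alongside their error lists, rather than silently dropping them, is what makes the "monitor and improve data quality" responsibility tractable: the reject stream is itself a data-quality metric.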
