Job Description
About Company
FiftyFive is a global software development and technology consulting company, delivering full-cycle digital solutions across industries and geographies. We help businesses across the USA, UK, Australia, MENA, and the Nordics accelerate innovation and drive digital transformation.

Our offerings span:
- Custom Software & MVP Development
- Web & Mobile App Development
- Cloud Engineering (AWS, Azure, GCP)
- Embedded Software & IoT Solutions
- UI/UX and 3D Design
- Software Testing & QA

We specialize in cutting-edge technologies including:
- AI/ML, Data Engineering, and Blockchain
- Cybersecurity & DevOps
- Robotic Process Automation
- Sustainability-Focused Digital Initiatives

We also support:
- Legacy Modernization
- IT Consulting & System Integrations
- ERP/CRM Implementation (SAP, Microsoft Dynamics, Salesforce)
- Power BI, OpenAI/ChatGPT, and other third-party integrations

Through flexible remote team extension services, we help businesses build scalable, cost-effective engineering teams that move fast. Our domain expertise spans finance, healthcare, manufacturing, energy, logistics, retail, e-commerce, telecommunications, and more, empowering clients to solve complex challenges, optimize operations, and drive sustainable growth.
Website: https://www.fiftyfivetech.io/
Location: Gurgaon / Indore / Jaipur (Hybrid)
Experience: 5–9 Years
Good to Have
- Experience with Microsoft Purview or Informatica.
- Exposure to Power BI or other reporting/visualization tools.
- Knowledge of Medallion Architecture and Lakehouse concepts.
- Experience supporting AI/ML data engineering workflows.
- Familiarity with cloud security and governance best practices.
Key Responsibilities
- Design, develop, and maintain scalable end-to-end data pipelines using Python, SQL, and Azure services.
- Build and optimize ETL/ELT workflows for large-scale enterprise data processing.
- Work extensively with Azure Data Factory (ADF), Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
- Develop scalable data ingestion frameworks for APIs, relational databases, flat files, and cloud data sources.
- Perform data transformation, cleansing, validation, and performance optimization using PySpark and Pandas.
- Implement modern data lake and lakehouse architectures, including Medallion Architecture.
- Write optimized SQL queries, stored procedures, and data validation logic.
- Monitor, troubleshoot, and improve pipeline performance and data quality.
- Collaborate with data analysts, BI developers, architects, and business stakeholders.
- Implement CI/CD pipelines, deployment automation, and version control best practices.
- Maintain technical documentation, operational runbooks, and data workflow standards.
- Support cloud migration and enterprise data modernization initiatives.