Job Description
iKnowHow Group is a leading Software & Robotics Solutions group of companies operating internationally for over 24 years, with 300+ professionals delivering innovative technology solutions across Energy, Telecommunications, Banking & Financial Services, and Public Sector industries. The group is structured into specialized subsidiaries, each focused on distinct technology domains and market verticals.
We are now looking for a mid-level Data Engineer to work on new and challenging outsourced projects.
You will design and develop scalable data pipelines, modernize legacy data flows into a cloud-native architecture, and partner with data scientists, analysts, and business stakeholders to ensure trusted, well-governed data is available across the enterprise. The primary technology footprint is Microsoft Azure, with selected workloads on Google Cloud Platform and a smaller Amazon Web Services presence.
Responsibilities:
Design, build, and maintain scalable batch and streaming data pipelines across Azure Data Factory, Azure Synapse, and Databricks, ingesting data from policy administration, claims, CRM, and external data providers.
Develop curated data models in a medallion (bronze/silver/gold) architecture using Delta Lake, ensuring data quality, lineage, and reusability across analytics and AI use cases.
Develop and optimise SQL and PySpark transformations for high-volume datasets, with strong attention to performance, cost, and reliability.
Operationalise pipelines through Azure DevOps and/or GitHub Actions, embedding automated testing, deployment, and observability into the data delivery lifecycle.
Implement data quality checks, monitoring, and alerting across critical data products, working with platform engineering on lineage and cataloguing (e.g., Microsoft Purview, Unity Catalog).
Collaborate with data architects to align pipelines with the enterprise data model and governance standards, including PII handling, retention, and access controls relevant to insurance regulation.
Work closely with analytics, actuarial, and data science teams to translate business requirements into robust data products and self-service datasets.
Participate in code reviews, design sessions, and Agile ceremonies, contributing to engineering standards and continuous improvement of the data platform.
Requirements:
Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related technical field.
3–5 years of hands-on experience as a Data Engineer or in a closely related role, delivering production data pipelines.
Proven track record of building cloud-native data solutions in Agile/Scrum environments.
Strong experience with Microsoft Azure data services: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, and Azure SQL.
Hands-on experience with Databricks and Apache Spark (PySpark), including Delta Lake and the medallion architecture.
Advanced SQL skills and solid Python development for data engineering workloads.
Familiarity with CI/CD pipelines using Azure DevOps and/or GitHub Actions, infrastructure-as-code (Terraform or Bicep), and Git-based workflows.
Understanding of data modelling (dimensional, Data Vault, or lakehouse patterns) and data governance concepts including data quality, lineage, and security.
Nice to have:
Experience in regulated industries (insurance, banking, healthcare).
Working knowledge of Google Cloud data services.
Benefits:
An attractive salary package
Private health insurance plan
Career development and growth opportunities
Continuous training via personalized seminars
An amazing private & open-office workspace in Athens #LI-Hybrid
Stable and enjoyable working environment