Data Engineer

at epay, a Euronet Worldwide Company
  • Remote - United Kingdom

Data

Mid-level

Job description

Overview of the job:

At epay, data is at the core of everything we do. We’ve built a global team of Developers, Engineers, and Analysts who transform complex datasets into actionable insights for both internal stakeholders and external partners. As we continue to scale our commercial data services, we are looking to hire a mid-level Data Engineer (2+ years’ experience) who has worked with Azure and Databricks, and can contribute immediately across pipeline development, categorisation, and optimisation for AI/ML use cases.

You’ll work with diverse datasets spanning prepaid, financial services, gambling, and payments, supporting business-critical decisions with high-quality, well-structured data. While engineering will be your focus, you’ll also collaborate with analytics and product functions, switching between roles as needed to meet team goals.

This role includes occasional global travel and requires flexibility across time zones when collaborating with international teams.

The role is remote-based; however, regular attendance at one of three office locations is required: Billericay, Essex; Bracknell; or Baker Street, London.

Three best things about the job:

  • Be part of a high-performing team building modern, scalable data solutions used globally.
  • Work hands-on with cutting-edge Azure technologies, with a strong focus on Databricks and Python development.
  • Play a key role in evolving epay’s data architecture and ML-enablement strategies.

In your first few months, you will have:

  • Taken ownership of a data pipeline or transformation flow within Databricks and contributed to its optimisation and reliability.
  • Worked across raw and curated datasets to deliver categorised and enriched data ready for analytics and machine learning use cases.
  • Provided support to analysts and financial stakeholders to validate and improve data accuracy.
  • Collaborated with the wider team to scope, test, and deploy improvements to data quality and model inputs.
  • Brought forward best practices from your prior experience to help shape how we clean, structure, and process data.
  • Demonstrated awareness of cost, latency, and scale when deploying cloud-based data services.

The ideal candidate should understand that they are part of a team and be willing to take on different roles so the team can adjust its work more effectively.

Responsibilities of the role:

  • Data Pipeline Development: Build and maintain batch and streaming pipelines using Azure Data Factory and Azure Databricks (a minimal illustrative sketch follows this list).
  • Data Categorisation & Enrichment: Structure unprocessed datasets through tagging, standardisation, and feature engineering.
  • Automation & Scripting: Use Python to automate ingestion, transformation, and validation processes.
  • ML Readiness: Work closely with data scientists to shape training datasets, applying sound feature selection techniques.
  • Data Validation & Quality Assurance: Ensure accuracy and consistency across data pipelines with structured QA checks.
  • Collaboration: Partner with analysts, product teams, and engineering stakeholders to deliver usable and trusted data products.
  • Documentation & Stewardship: Document processes clearly and contribute to internal knowledge sharing and data governance.
  • Platform Scaling: Monitor and tune infrastructure for cost-efficiency, performance, and reliability as data volumes grow.
  • On-Call Support: Participate in an on-call rota to support the production environment, ensuring timely resolution of incidents and maintaining system stability outside of standard working hours.
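
To give a concrete flavour of the pipeline, enrichment, and QA responsibilities above, here is a minimal PySpark sketch of a batch transformation with a structured quality check, as might run in Azure Databricks. It is illustrative only: the table names, columns, and tagging rule (raw_transactions, curated_transactions, merchant, amount, is_high_value) are hypothetical assumptions, not epay's actual schema.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In Databricks a SparkSession is provided; getOrCreate() also works locally.
spark = SparkSession.builder.getOrCreate()

# Ingest a raw dataset (hypothetical table name).
raw = spark.read.table("raw_transactions")

# Standardise text, enforce types, and add a simple enrichment tag.
curated = (
    raw
    .withColumn("merchant", F.trim(F.lower(F.col("merchant"))))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("is_high_value", F.col("amount") > 1000)
)

# Structured QA check: fail fast if mandatory fields contain nulls.
bad_rows = curated.filter(
    F.col("merchant").isNull() | F.col("amount").isNull()
).count()
if bad_rows > 0:
    raise ValueError(f"QA check failed: {bad_rows} rows with null mandatory fields")

# Publish the curated output for analytics and ML consumers.
curated.write.mode("overwrite").saveAsTable("curated_transactions")

In practice a job like this would typically be orchestrated from Azure Data Factory or a Databricks workflow, with thresholds and table names externalised as configuration rather than hard-coded.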

What you will need:

The ideal candidate will be proactive, willing to develop and implement innovative solutions, and capable of the following:

Recommended:

  • 2+ years of professional experience in a data engineering or similar role.
  • Proficiency in Python, including use of libraries for data processing (e.g., pandas, PySpark); a short illustrative example follows this list.
  • Experience working with Azure-based data services, particularly Azure Databricks, Data Factory, and Blob Storage.
  • Demonstrable knowledge of data pipeline orchestration and optimisation.
  • Understanding of SQL for data extraction and transformation.
  • Familiarity with source control, deployment workflows, and working in Agile teams.
  • Strong communication and documentation skills, including translating technical work to non-technical stakeholders.
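
For context, the snippet below gives a flavour of the everyday Python data-processing work implied by the list above. It is a hedged sketch only: the input file, column names, and category thresholds (transactions.csv, merchant, amount) are hypothetical, not an epay specification.

import pandas as pd

# Hypothetical input file and columns, for illustration only.
df = pd.read_csv("transactions.csv")

# Standardise a free-text field, then tag rows into simple categories.
df["merchant"] = df["merchant"].str.strip().str.lower()
df["category"] = pd.cut(
    df["amount"],
    bins=[0, 10, 100, float("inf")],
    labels=["micro", "standard", "high_value"],
)

# Basic validation before handing the data downstream.
assert df["amount"].notna().all(), "amount column contains nulls"

Nothing here is specific to epay; it simply mirrors the standardise, tag, and validate loop that the role describes.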

Preferred:

  • Exposure to machine learning workflows or model preparation tasks.
  • Experience working in a financial, payments, or regulated data environment.
  • Understanding of monitoring tools and logging best practices (e.g., Azure Monitor, Log Analytics).
  • Awareness of cost optimisation and scalable design patterns in the cloud.