Data Engineer

πŸ‡ͺπŸ‡Έ Spain - Remote
πŸ“Š Data · πŸ”΅ Mid-level

Job description

About Us

Macropay is a fintech leader in payment orchestration, providing businesses with seamless access to global payment solutions for over four years. Specializing in revenue optimization, we offer card processing and alternative payment methods enhanced by smart routing, fraud prevention, and an intuitive dashboard. Backed by a team of payment and fraud experts, our all-in-one platform is designed to maximize revenue, reduce costs, and improve the payment experienceβ€”all through a single API integration.

About the role

In this role, you will design, build, and maintain scalable, secure, and high-performance cloud-based data pipelines to support real-time and batch analytics within our payments platform. You will work closely with product owners, data scientists, and cross-functional engineering teams to translate business requirements into robust data models and ETL/ELT workflows. Your day-to-day work will include architecting and implementing Kafka-based streaming pipelines, processing event streams, and orchestrating data ingestion and transformation jobs on AWS. You will leverage Snowflake as our central data warehouse.
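As a rough illustration of the filter-enrich-load flow described above (not Macropay's actual code β€” the event fields and the FX-normalization step are assumptions made for the example), a batch of payment events might be processed like this before landing in Snowflake:

```python
from dataclasses import dataclass

# Hypothetical event shape; field names are illustrative, not Macropay's schema.
@dataclass
class PaymentEvent:
    merchant_id: str
    amount_cents: int
    currency: str
    status: str

def enrich(event, fx_rates):
    """Enrich an event with a EUR-normalized amount (assumed business rule)."""
    rate = fx_rates.get(event.currency, 1.0)
    return {
        "merchant_id": event.merchant_id,
        "amount_eur_cents": round(event.amount_cents * rate),
        "status": event.status,
    }

def process_batch(events, fx_rates):
    # In production this loop would consume from a Kafka topic and write to
    # Snowflake; here we filter and enrich an in-memory list of events.
    return [enrich(e, fx_rates) for e in events if e.status == "captured"]

events = [
    PaymentEvent("m1", 1000, "USD", "captured"),
    PaymentEvent("m2", 500, "USD", "failed"),
]
print(process_batch(events, {"USD": 0.9}))
```

The same filter/enrich logic would typically live inside a Kafka Streams topology or a consumer loop rather than a plain list comprehension; the sketch only shows the transformation step.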

Key Responsibilities:

  • Define and enforce best practices for data ingestion, cataloging, and lineage across our cloud infrastructure (AWS S3, Glue, EMR, Lambda, etc.).

  • Develop and maintain real-time processing applications using Kafka (Producers, Consumers, Streams API) or similar technologies to aggregate, filter, and enrich streaming data from multiple sources.

  • Collaborate with development and analytics teams to understand and fulfill the company’s data requirements.

  • Develop efficient and scalable data pipelines for data transformation and enrichment.

  • Implement monitoring and alerting mechanisms to ensure the integrity and availability of data streams.

  • Work with the operations team to optimize the performance and efficiency of the data infrastructure.

  • Automate management and maintenance tasks of the infrastructure using tools such as Terraform, Ansible, etc.

  • Stay updated on best practices and trends in data architectures, especially in the realm of real-time data ingestion and processing.

  • Monitor and troubleshoot data workflows using tools such as CloudWatch, Prometheus, or Datadogβ€”proactively identifying bottlenecks, ensuring pipeline reliability, and handling incident response when necessary.

  • Ensure data quality and performance.

  • Define and test disaster recovery plans (multi-region backups, Kafka replication, Snowflake Time Travel) and collaborate with security/infra teams on encryption, permissions, and compliance.
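The monitoring and alerting responsibilities above could, in their simplest form, boil down to a pipeline freshness check like the sketch below (the threshold and timestamp handling are illustrative assumptions; in practice such a check might feed a CloudWatch alarm or a Prometheus alert rule):

```python
import time

def pipeline_is_healthy(last_event_ts, max_lag_seconds=300.0, now=None):
    """Return True if the newest consumed event is within the allowed lag.

    last_event_ts: Unix timestamp of the most recent event the pipeline processed.
    max_lag_seconds: assumed alerting threshold (5 minutes here, purely illustrative).
    """
    if now is None:
        now = time.time()
    return (now - last_event_ts) <= max_lag_seconds

# An alerting job would call this periodically and page on False.
print(pipeline_is_healthy(time.time() - 60))
```

Real deployments usually track consumer-group lag in offsets rather than wall-clock staleness, but the alert shape β€” compare an observed lag against a threshold β€” is the same.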

Requirements:

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field (equivalent experience is valued).

  • 5+ years of professional experience as a Data Engineer, with a solid technology background using Java or Python as a primary language.

  • 3+ years of programming experience with Java / Python

  • Experience in data engineering design and delivery with cloud-based data warehouse technologies, in particular Snowflake, Redshift, or BigQuery.

  • Development experience with cloud services, especially Amazon Web Services (AWS).

  • Demonstrable experience in designing and implementing data pipeline architectures based on Kafka in cloud environments, preferably AWS.

  • Deep understanding of distributed systems and high availability design principles.

  • Experience in building and optimizing data pipelines with Apache Kafka and real-time processing frameworks such as Apache Flink or Apache Spark Streaming.

  • Excellent communication and teamwork skills.

  • Ability to independently and proactively solve problems.

Extra bonus if:

  • Experience with other streaming platforms such as Apache Pulsar or RabbitMQ.

  • Familiarity with data lake architectures and technologies such as Amazon S3, Apache Hadoop, or Apache Druid.

  • Relevant certifications in cloud platforms such as AWS (optional).

  • Understanding of serverless architecture and event-driven systems.

  • Previous professional experience in FinTech / online payment flows.

  • Experience with data visualization tools like Tableau, PowerBI, or Apache Superset.

  • Understanding of machine learning concepts and frameworks for real-time data analytics.

  • Previous experience in designing and implementing data governance and compliance solutions.

What we offer:

  • Competitive salary and comprehensive benefits package.

  • Opportunity to shape data strategy for a global, innovative fintech company.

  • Professional development opportunities and resources.

  • A collaborative, inclusive, and dynamic work culture.

  • Full Remote Work.
