Job description
Data Engineer
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Bringg is a cutting-edge delivery and logistics management platform designed to streamline last-mile operations for businesses across various industries. It provides powerful tools to optimize fleet management, delivery scheduling, route planning, and real-time tracking, ensuring efficient and timely deliveries. The platform integrates with various third-party logistics providers and offers full visibility into delivery workflows, helping companies reduce operational costs and improve customer experience.
About the Role:
We're looking for a talented and forward-thinking Data Engineer to join our team at a pivotal moment of transformation. As we completely redesign our data pipeline architecture, you'll play a hands-on role in shaping and building the next generation of our data platform. This is a high-impact opportunity to work with a modern tech stack, influence architecture decisions, and solve complex technical challenges in a fast-paced, product-driven environment.
Our data team works cross-functionally with data scientists, backend engineers, and analysts to deliver scalable, real-time solutions used by global enterprise clients. If you are passionate about clean architecture, eager to learn new technologies, and ready to take ownership of core infrastructure, this role is for you.
Key Responsibilities:
- Drive the design and architecture of scalable, efficient, and resilient batch and streaming data pipelines.
- Shape the implementation of modern, distributed systems to support high-throughput data processing and real-time analytics.
- Collaborate cross-functionally with data scientists, engineers, and product stakeholders to deliver end-to-end data-driven capabilities.
- Optimize legacy systems during the migration phase, ensuring a seamless transition with minimal disruption.
- Contribute to DevOps and MLOps processes and enhance the reliability, monitoring, and automation of data infrastructure.
- Support the integration and deployment of AI/ML models within the evolving data platform.
Required Competence and Skills:
- 4+ years of experience building and maintaining production-grade data pipelines
- Hands-on experience with modern data tools such as Kafka, Airflow, Spark, or Flink
- Deep understanding of SQL and NoSQL ecosystems such as PostgreSQL, Redis, Elasticsearch, and Delta Lake
- Solid backend development experience with a strong understanding of OOP/OOD principles and design patterns (Python)
- Demonstrated experience designing and implementing new data architectures, especially in fast-paced or transitioning environments
- Strong understanding of ETL/ELT processes and data flow logic
Nice to have:
- Exposure to MLOps and integrating ML models into production
- Experience in DevOps and asynchronous systems
- Familiarity with RabbitMQ, Docker, WebSockets, and Linux environments
- Familiarity with routing or navigation algorithms
Why Us?
We provide 20 days of vacation leave per calendar year (plus official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We offer a fully remote work model, providing a powerful workstation and access to a co-working space if you need it.
We offer a highly competitive package with yearly performance and compensation reviews.