Job Description
Work with stakeholders to expand the number and kinds of customer-defined targetings we can calculate in real time
Develop new features for our in-house streaming key/value DBs
Improve resource utilization across all stages of our data pipelines, from validation and ingestion to storage to dynamic queries
Collaborate with developers and stakeholders to design services that meet product and business requirements
Strong experience designing, building, implementing, and maintaining large real-time data pipelines with Kafka and Kafka Streams
Significant expertise with data consistency, throughput, and event-driven system concepts
Fluent in Python and/or Java
Familiar with designing and deploying data jobs
Experience with managing and monitoring service infrastructure and deployments
Hands-on experience with Kafka, Kafka Streams, Trino (or any distributed SQL engine), Airflow, Kubernetes, Docker, Helm, Jenkins, Prometheus, Grafana, Elasticsearch
Nice to have: Basic knowledge of AWS services such as S3 and RDS, and Terraform
A chance to be part of a casual but professional environment where you will have a safe place to try, fail and learn
Coaching from our tech leads to advance your soft and technical skills and set your own development path
A defined and organized onboarding process for both the company and the project
Competitive compensation depending on experience and skills
Private pension and medical insurance for you and your family; maternity and sick leave are 100% paid
Sport clubs – from fishing to basketball, whatever floats your boat
Awesome referral fees - because great people know great people
Work-life balance – a company that truly supports your professional, family and personal goals
Freedom to decide how you want to work - partly or fully remote or from our offices