Job description
Company Profile:
Our client is an Australia-based, data-driven trading business that relies on live sports-market information to make time-critical decisions. At present they collect only a modest proof-of-concept stream of odds and event updates.
Due to their continued success, they are looking to expand their team in the Philippines and are seeking a reliable, critical-thinking, solutions-oriented Senior/Lead Data Engineer to join their dynamic team.
Duties & Responsibilities:
• Design & run a multi-cluster Apache Kafka backbone that can grow beyond 200 Mbps sustained ingest.
• Build stateful pipelines in Apache Flink (DataStream API & Flink SQL) for enrichment, cleansing, aggregation and feature generation (an illustrative sketch of this kind of pipeline follows this list).
• Persist processed streams into fast analytical stores (e.g. Redis) for low-latency look-ups and dashboards.
• Create batch back-fill tooling for historical re-processing when needed.
• Manage object storage (Amazon S3 / MinIO / Ceph) and define retention / lifecycle policies.
• Stand up observability (Prometheus, Grafana), CI/CD, and infrastructure-as-code with Terraform from scratch.
• Work day-to-day with the founder on roadmap, budget and hardware choices, and, if success follows, help onboard the next engineers.
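The Kafka and Flink bullets above are the heart of the role, so here is a minimal, purely illustrative sketch of the kind of Kafka-to-Flink aggregation pipeline they describe. Nothing in it is the client's actual code: the broker address, topic name, CSV message format and five-second window are all assumptions made for the example.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class OddsUpdateCounts {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical topic carrying CSV-encoded odds ticks of the form "marketId,price".
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")           // assumed broker address
                .setTopics("odds-updates")                   // assumed topic name
                .setGroupId("odds-aggregator")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "odds-source")
                .map(line -> Tuple2.of(line.split(",")[0], 1L)) // (marketId, 1)
                .returns(Types.TUPLE(Types.STRING, Types.LONG)) // lambdas lose generic type info
                .keyBy(t -> t.f0)                               // partition by market
                .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
                .sum(1)                                         // updates per market per window
                .print();                                       // stand-in for a Redis sink

        env.execute("odds-update-counts");
    }
}

In the real pipeline the print() sink would give way to a Redis (or similar) sink, and the plain-string deserialiser to Avro/Protobuf backed by a schema registry, per the preferred skills below.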
Must Have
• 5+ years of data engineering experience
• Autonomous ownership: plan, prioritise and deliver without hand-holding.
• Apache Kafka operations and performance tuning.
• 2+ years building real-time stream-processing pipelines in production.
• Clear, concise async communication.
• Calm under pressure / incident composure.
• Proficiency in Java or Python, plus solid SQL.
• A working understanding of distributed-systems fundamentals (partitioning, watermarking, checkpointing, back-pressure).
• Linux administration and containerisation (Docker/Podman).
Strongly Preferred
• Hands-on Apache Flink experience: deploying, scaling and securing clusters.
• Schema-driven serialisation (Avro / Protobuf) and schema-registry management.
• Robust web scraping techniques for structured and unstructured sports data.
• Experience with object storage platforms (Amazon S3, Ceph, or MinIO).
• High throughput key-value stores (Redis or RocksDB).
• Familiarity with modern virtualisation platforms (Kubernetes, Proxmox, VMware).
• Infrastructure-as-Code with Terraform (or similar).
• Rapid self-learning & curiosity.
• Problem slicing & pragmatic decision making.
• Bias for automation & documentation.
Nice to Have
• Apache Pulsar, Faust or equivalent streaming frameworks.
• DuckDB, ClickHouse or similar OLAP engines.
• Observability stack: Prometheus, Loki, Grafana, Tempo, OpenTelemetry.
• CI/CD with GitHub Actions, ArgoCD or Flux.
• Collaborative knowledge sharing.
• Domain exposure to sports data, trading or high-frequency market making.
Benefits
• Full-time, work-from-home offshore contractor engagement with our client's Australian-headquartered company
• 13th-month pay guaranteed
• 30 days of combined leave plus 1 birthday leave, available from Day 1
• HMO Allowance: Php5,000
• One-off ergonomic kit worth Php12,000 (Day 1)
• Monthly Wellness Allowance: Php1,000
• UPS or LTE fail-over stipend (choose one)
• Work equipment will be provided
• Observance of PH regular holidays
• Annual performance bonus of 10-20% of salary, based on KPIs (paid in late April, pro-rated)
• All-expenses-paid trip to the Brisbane HQ at the end of the 2025-2026 NBA regular season (late April 2026)