Senior Data Engineer - DBA
at Oportun • Remote - Mexico

Remote • Data • Senior

Job description

ABOUT OPORTUN

Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members’ financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

WORKING AT OPORTUN

Working at Oportun means enjoying a differentiated experience: being part of a team that fosters a diverse, equitable, and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization’s performance and our ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

POSITION SUMMARY

As a Senior Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining sophisticated software and data platforms in service of the engineering group’s charter. Your mastery of a technical domain enables you to take on business problems and solve them with technical solutions. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. In this role, you will have the opportunity to lead the technology effort for large initiatives (cross-functional, multi-month projects), from technical requirements gathering through final delivery of the product.

RESPONSIBILITIES

  • Database Design & Architecture

    • Design, implement, and maintain optimal database schemas for relational (MariaDB) and NoSQL (MongoDB) databases.
    • Participate in data modeling efforts to support analytics in Databricks.
  • Performance Monitoring & Tuning

    • Monitor and tune all database platforms to ensure optimal performance.
    • Use profiling tools (e.g., EXPLAIN, query plans, system logs) to identify and resolve bottlenecks; a MariaDB profiling sketch appears after this list.
  • Security & Compliance

    • Implement access controls, encryption, and database hardening techniques.
    • Manage user roles and privileges across MariaDB, MongoDB, and Databricks.
    • Ensure compliance with data governance policies (e.g., GDPR, HIPAA).
  • Backup & Recovery

    • Implement and maintain backup/recovery solutions for all database platforms; a backup-and-restore-test sketch appears after this list.
    • Periodically test restore procedures for business continuity.
  • Data Integration & ETL Support

    • Support and optimize ETL pipelines between MongoDB, MariaDB, and Databricks.
    • Work with data engineers to integrate data sources for analytics.
  • Monitoring & Incident Response

    • Set up and monitor database alerts.
    • Troubleshoot incidents, resolve outages, and perform root cause analysis.
  • MariaDB-Specific Responsibilities

    • Administer MariaDB instances (standalone, replication, Galera Cluster).
    • Optimize SQL queries and indexing strategies.
    • Maintain stored procedures, functions, and triggers.
    • Manage schema migrations and upgrades with minimal downtime.
    • Ensure ACID compliance and transaction management.
  • MongoDB-Specific Responsibilities

    • Manage replica sets and sharded clusters; a replica-set health-check sketch appears after this list.
    • Perform capacity planning for large document collections.
    • Tune document models and access patterns for performance.
    • Set up and monitor MongoDB Ops Manager / Atlas (if used).
    • Automate backup and archival strategies for NoSQL data.
  • Databricks-Specific Responsibilities

    • Manage Databricks workspace permissions and clusters.
    • Collaborate with data engineers to optimize Spark jobs and Delta Lake usage.
    • Ensure proper data ingestion, storage, and transformation in Databricks.
    • Support CI/CD deployment of notebooks and jobs.
    • Integrate Databricks with external data sources (MariaDB, MongoDB, S3, ADLS); a JDBC ingestion sketch appears after this list.
  • Collaboration & Documentation

    • Collaborate with developers, data scientists, and DevOps engineers.
    • Maintain up-to-date documentation on data architecture, procedures, and standards.
    • Provide training or onboarding support for other teams on database tools.
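
To make the performance-tuning bullet above concrete, here is a minimal sketch of profiling and then indexing a slow MariaDB query from Python. The host, credentials, and the orders table are hypothetical, and mysql-connector-python is one reasonable library choice rather than anything this role prescribes.

    import os
    import mysql.connector  # pip install mysql-connector-python

    # Hypothetical connection details; host, user, and schema are placeholders.
    conn = mysql.connector.connect(
        host="mariadb.example.internal",
        user="dba",
        password=os.environ["MARIADB_PASSWORD"],
        database="app",
    )
    cur = conn.cursor()

    # EXPLAIN exposes the optimizer's plan; "type: ALL" flags a full table scan.
    cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
    for row in cur.fetchall():
        print(row)

    # A covering index on the filter column is the usual first remedy.
    cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
    cur.close()
    conn.close()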
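
A backup is only trustworthy once a restore has been rehearsed. Here is a minimal sketch of the backup-and-restore-test bullet, assuming mysqldump and mysql on the path, client credentials in ~/.my.cnf, and placeholder paths and database names:

    import datetime
    import pathlib
    import subprocess

    # Nightly logical backup; the path and database name are placeholders.
    stamp = datetime.date.today().isoformat()
    dump = pathlib.Path(f"/backups/app-{stamp}.sql")
    with dump.open("wb") as out:
        subprocess.run(
            ["mysqldump", "--single-transaction", "app"],
            stdout=out,
            check=True,  # raise immediately if the dump fails
        )

    # Prove the backup is usable: load it into a scratch database and fail
    # loudly if the import errors out.
    with dump.open("rb") as sql:
        subprocess.run(["mysql", "app_restore_test"], stdin=sql, check=True)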
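
On the MongoDB side, the first questions during an incident are usually which node is primary and how far the secondaries lag. A sketch using pymongo against a hypothetical replica set:

    from pymongo import MongoClient  # pip install pymongo

    # Hypothetical replica-set URI; hosts and the set name are placeholders.
    client = MongoClient(
        "mongodb://node1.example:27017,node2.example:27017/?replicaSet=rs0"
    )

    # replSetGetStatus reports each member's state (PRIMARY, SECONDARY, ...)
    # and its last applied operation time, from which lag can be derived.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"], member.get("optimeDate"))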
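
Finally, for the Databricks integration bullet, a sketch of pulling a MariaDB table into Delta Lake from a notebook. The JDBC URL, user, table, and storage path are assumptions; the secret scope and key names are likewise placeholders.

    from pyspark.sql import SparkSession

    # Inside a Databricks notebook the session already exists; getOrCreate()
    # simply reuses it.
    spark = SparkSession.builder.getOrCreate()

    # Read a MariaDB table over JDBC (URL, user, and table are placeholders).
    # dbutils is provided by the Databricks runtime; secrets should come from
    # a secret scope rather than plain text.
    orders = (
        spark.read.format("jdbc")
        .option("driver", "org.mariadb.jdbc.Driver")
        .option("url", "jdbc:mariadb://mariadb.example.internal:3306/app")
        .option("dbtable", "orders")
        .option("user", "etl")
        .option("password", dbutils.secrets.get("db", "mariadb-etl"))
        .load()
    )

    # Land the snapshot as a Delta table for downstream analytics.
    orders.write.format("delta").mode("overwrite").save("/mnt/lake/raw/orders")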

REQUIREMENTS

  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
  • 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
  • Proficiency in programming languages such as Python/PySpark and Java or Scala.
  • Expertise in big data technologies such as Hadoop, Spark, and Kafka.
  • In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB/MySQL, NoSQL databases).
  • Experience and expertise in building complex end-to-end data pipelines.
  • Experience with orchestration and designing job schedules using CI/CD and workflow tools such as Jenkins, Airflow, or Databricks Jobs.
  • Ability to lead ETL migrations from Talend to Databricks/PySpark; a migration sketch appears after this list.
  • Demonstrated ability to build reusable utilities and tools that speed up complex business processes.
  • Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
  • Ability to mentor junior team members.
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
  • Strong leadership, problem-solving, and decision-making skills.
  • Excellent communication and collaboration abilities.
  • Familiarity or certification in Databricks is a plus.
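
As a rough illustration of the Talend-to-Databricks migration mentioned above, a typical Talend flow of input, filter, and aggregate components maps onto a handful of DataFrame operations. The file path and column names here are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Equivalent of a Talend file-input component (hypothetical path and schema).
    payments = (
        spark.read.option("header", True)
        .csv("/mnt/lake/raw/payments.csv")
        .withColumn("amount", F.col("amount").cast("double"))  # CSV reads strings
    )

    # tFilterRow + tAggregateRow equivalents: filter, then group and aggregate.
    summary = (
        payments.where(F.col("status") == "settled")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("total_paid"), F.count("*").alias("n_payments"))
    )

    # Equivalent of a Talend output component: write a Delta table.
    summary.write.format("delta").mode("overwrite").save(
        "/mnt/lake/gold/payment_summary"
    )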

Preferred Skills and Tools

  • MariaDB Tools: mysqldump, mysqladmin, Percona Toolkit
  • MongoDB Tools: mongodump, mongotop, mongoexport, Atlas UI
  • Databricks Tools: Jobs UI, Databricks CLI, REST API, SQL Analytics
  • Scripting: Bash, Python, PowerShell
  • Monitoring: Prometheus, Grafana, CloudWatch, Datadog
  • Version Control & CI/CD: Git, Jenkins, Terraform (for infrastructure-as-code)
  • Preferred Cloud Provider: AWS

#LI-REMOTE

#LI-GK1

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate.

California applicants can find a copy of Oportun’s CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
