About the Role
Join our team as a Generative AI Engineer, where you'll design and maintain the data pipelines that power analytics and AI-driven solutions. You'll collaborate with data analysts and business stakeholders to improve operational efficiency and customer satisfaction through advanced data engineering. This remote role offers the opportunity to make a meaningful impact in a dynamic, fast-paced environment.
Job Title: Generative AI Engineer
Location: Remote (US-based, excluding CA and HI) – must work EST hours
Employment Type: Contract-to-Hire
Pay Rate: $45–$47/hour (target)
Key Responsibilities
- Data Pipeline Development: Build and maintain robust ETL (Extract, Transform, Load) pipelines to process data from diverse sources, including CRM and call center systems, ensuring seamless integration into data warehouses and lakes.
- Data Modeling: Create dimensional and analytical data models to support business reporting and analytics needs, accurately reflecting business domains.
- Data Integration: Automate data ingestion and transformation processes, resolving inconsistencies to ensure data integrity across multiple sources.
- Data Quality Assurance: Implement checks to monitor and maintain data accuracy and completeness, proactively addressing quality issues.
- Performance Optimization: Enhance data pipelines and queries for efficient processing, ensuring timely delivery of insights.
- Collaboration: Partner with data analysts, business stakeholders, and data science teams to translate business needs into technical solutions, supporting advanced analytics and machine learning initiatives.
Qualifications
- Experience: 3–5 years in data engineering, with strong expertise in ETL processes, data warehousing, and contact center AI/ML applications.
- Technical Skills:
  - Proficiency in SQL, Python (for ETL, data integration, and modeling), and Java.
  - Expertise with data integration tools (e.g., Informatica, Talend, or Apache Airflow).
  - Strong knowledge of relational and NoSQL databases, cloud platforms (AWS or Google Cloud), and big data technologies (Hadoop, Spark).
  - Experience with data manipulation tools (e.g., Pandas) and version control systems.
  - Familiarity with AI/ML fundamentals, model deployment, and data preparation.
- Industry Knowledge: Understanding of data privacy, regulatory compliance, and customer feedback measurement in AI projects.
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.