We have a contract through the end of 2025 for a Data Engineer II, with the possibility to extend or convert. Must have 2+ years of experience with requirements gathering and documentation, dbt, and Snowflake in a production environment. 100% Remote.
Must Haves:
- Demonstrated usage of dbt (data build tool) (2+ yrs)
- Snowflake in a production environment (2+ yrs)
- QA Testing, Data Validation & Troubleshooting (Intermediate)
- Gathering and Documentation of Requirements (Intermediate)
Nice to Haves:
- Experience using Git for version control
- Broad understanding of how SQL dialects differ across platforms (Teradata preferred)
- Healthcare industry experience
Disqualifiers:
- No Snowflake experience
Responsibilities:
- There will be no overtime for this project. The day-to-day will include touchpoints with the whole team (8 other engineers) to review current work as part of a standard 3x-weekly stand-up.
- The role will start with gathering and refining business and technical requirements for the integration of a new platform into a data mart, including translating legacy data structures into standardized formats. This involves meeting with stakeholders, documenting rules in a BRD, and aligning critical data elements.
- On a day-to-day basis, the engineer will develop and maintain code, write Snowflake stored procedures, and load files into the data warehouse. They will also perform QA testing and troubleshooting to proactively identify and resolve data issues (an illustrative dbt sketch follows this list). Much of the work will be done in close partnership with a Lead Engineer, following established patterns in GitLab, dbt, and Snowflake to deliver scalable, production-ready solutions.
- Our teams are extremely collaborative in nature; almost all projects involve two or more engineers to avoid creating silos, build in cross-training, and ensure a diversity of opinion and experience is incorporated into solutioning.
- The candidate for this role will have a chance to use a modern tech stack to develop a highly utilized data construct with a direct influence on the quality of care for members, who make up more than 1 in 15 individuals across the country.
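For illustration only, the dbt and data-validation work described above might resemble the following minimal model sketch. The source, model, and column names are invented for the example and are not the team's actual schema.

```sql
-- models/staging/stg_legacy_claims.sql (hypothetical model name)
-- Standardizes a legacy feed into a typed, deduplicated staging table.
with source as (
    -- 'legacy_platform' / 'raw_claims' are assumed names, not the real source
    select * from {{ source('legacy_platform', 'raw_claims') }}
),

renamed as (
    select
        cast(claim_id as varchar)            as claim_id,
        cast(member_id as varchar)           as member_id,
        try_to_date(service_dt, 'YYYYMMDD')  as service_date,   -- legacy text date -> DATE
        try_to_number(paid_amt, 12, 2)       as paid_amount     -- legacy text amount -> NUMBER(12,2)
    from source
)

select *
from renamed
-- keep only the latest row per claim to guard against duplicate loads
qualify row_number() over (partition by claim_id order by service_date desc) = 1
```

The data-validation side of the role could then be expressed as standard dbt schema tests on this model (for example, `unique` and `not_null` on `claim_id`).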
About this role:
- This role will join the Fraud, Waste & Abuse (FWA) Core Analytic Table (CAT) team to support the integration of a new platform into a data mart as part of a strategic initiative to unify and scale provider payment operations under capitation agreements.
- The engineer will collaborate with health plan partners, finance stakeholders, business SMEs, and data engineering teams to translate legacy data structures into standardized, enterprise-compliant formats. Responsibilities include mapping critical data elements, documenting transformation logic, updating code, staging files, and writing Snowflake stored procedures to support complex business logic (an illustrative stored-procedure sketch follows this list). The role will also lead and support testing and validation efforts to ensure data accuracy and readiness for downstream consumption.
- This work enables a consistent, scalable approach to capitation-based payment calculations—promoting operational transparency, regulatory compliance, and efficient financial management across the organization.
- The CAT team is unique, straddling the line between traditional Data Engineering functions and the Analytics teams looking to leverage data resources available to them.
- We focus on building robust, highly trusted data pipelines that feed constructs specifically oriented toward providing maximum business value.
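Likewise, the stored-procedure work mentioned above might look like the following Snowflake SQL Scripting sketch. The database, table, and column names are placeholders chosen for the example, not the actual data-mart objects.

```sql
-- Hypothetical upsert of standardized capitation rows into a data-mart table.
create or replace procedure mart.apply_capitation_payments(run_month date)
returns varchar
language sql
as
$$
begin
    merge into mart.capitation_payments t
    using (
        select provider_id, plan_id, period_start, sum(paid_amount) as paid_amount
        from staging.capitation_claims           -- assumed staging table
        where period_start = :run_month          -- bind the procedure argument
        group by provider_id, plan_id, period_start
    ) s
    on  t.provider_id  = s.provider_id
    and t.period_start = s.period_start
    when matched then update
        set t.plan_id = s.plan_id, t.paid_amount = s.paid_amount
    when not matched then insert (provider_id, plan_id, period_start, paid_amount)
        values (s.provider_id, s.plan_id, s.period_start, s.paid_amount);

    return 'merge complete';
end;
$$;
```

A run for a given month would then be a single call, e.g. `call mart.apply_capitation_payments(to_date('2025-01-01'));`.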
Job Description:
Job Profile Summary
Position Purpose:
Develops and operationalizes data pipelines to make data available for consumption (reports and advanced analytics), including data ingestion, data transformation, data validation / quality, data pipeline optimization, and orchestration. Engages with the DevSecOps Engineer during continuous integration and continuous deployment.
Education/Experience:
A Bachelor's degree in a quantitative or business field (e.g., statistics, mathematics, engineering, computer science).
Requires 2–4 years of related experience.
Or equivalent experience acquired through applicable knowledge, duties, scope, and skill reflective of the level of this position.
Technical Skills:
One or more of the following skills are desired.
Experience with Big Data / data processing: dbt (Data Build Tool)
Experience diagnosing system issues, performing data validation, and providing quality assurance testing
Experience with data manipulation and data mining
Experience working in a production cloud infrastructure
Experience with one or more of the following: C#, Java, Python, SQL, programming concepts, and programming tools
Knowledge of Microsoft SQL Server and SQL
Soft Skills:
Intermediate - Seeks to acquire knowledge in area of specialty
Intermediate - Ability to identify basic problems and procedural irregularities, collect data, establish facts, and draw valid conclusions
Intermediate - Ability to work independently