Role Overview
We are seeking a Senior AI & Data Engineer to bridge the gap between scalable data architecture and Generative AI. This role is designed for an engineer who views data as a product and has a proven track record of building production-grade AI-powered applications and agentic workflows. Unlike traditional data modeling roles, you will be responsible for architecting the underlying systems that power our data ecosystem: integrating LLMs, RAG (Retrieval-Augmented Generation) solutions, and autonomous agents into enterprise-scale platforms on Google Cloud Platform (GCP).
Technical Requirements Must-Have (Minimum Qualifications):
- Total Experience: 6-8 years in Software or Data Engineering, with a strong emphasis on Python-based application development.
- AI Specialization: 2-4 years of hands-on experience building with LLMs, GPT-based APIs, or vector databases, including fine-tuning LLMs.
- Cloud Mastery: Extensive experience with Google Cloud Platform (preferred), including BigQuery, Vertex AI, Cloud Run, and Postgres.
- Systems Design: Proven ability to design OLTP and OLAP patterns and document complex information flows.
- Tooling: Expert proficiency in Python, SQL, and Terraform.
- Modern AI Architectures: Experience with MCP (Model Context Protocol), RAG, LangChain, and the Google Agent Development Kit (ADK).
- Other: REST API development using Flask/Flask API.
Key Responsibilities:
- AI Orchestration & Agentic Workflows: Architect and deploy agentic AI workflows and RAG solutions. Move beyond simple prompting to build systems capable of multi-step reasoning and tool-use.
- Data Product Engineering: Design and implement high-performance analytical data products using streaming (Pub/Sub, Dataflow) and batch patterns, ensuring data is "AI-ready."
- Infrastructure as Code (IaC): Own and scale cloud infrastructure using Terraform. Prioritize security, cost-optimization, and automated environment provisioning.
- LLMOps & Pipeline Monitoring: Build robust pipelines that monitor not just data quality, but AI model performance, latency, and drift in production.
- Engineering Excellence (DevOps/TDD): Operate within an Agile environment using Test-Driven Development (TDD). Maintain rigorous code quality standards via SonarQube, Checkmarx, and Cycode integrated with CI/CD pipelines and GitHub Actions.
- Governance & Lineage: Implement enterprise-grade data protection and sharing models. Map complex data lineages to ensure compliance and traceability.
- Production Reliability: Provide Tier 3 support for production AI services, ensuring uptime and performance according to established SLAs.
Skills Required:
IT Solutions, Google Cloud Platform, Cloud Computing, Artificial Intelligence & Expert Systems
Senior Specialist Experience: 7+ years in a relevant field.
Preferred Experience: SQL, Python
Education Required:
Bachelor's Degree
For applications and inquiries, contact: hirings@openkyber.com