Manager, Engineering - AI

Reltio


About the Role

We seek a hands-on AI Engineering Manager to lead R&D on LLM-based agentic workflows, data intelligence, and AI-first products. You'll guide a team of Data Scientists, ML Engineers, UI/Integration Engineers, and Quality Engineers, owning technical design, end-to-end execution, and delivery of agentic systems. This role blends engineering management with deep data science expertise, modern AI tooling, and agent orchestration.

Key Responsibilities

  • Deliver hands-on engineering management for an AI team of Data Scientists, ML Engineers, UI/Integration Engineers, and ML SDETs.
  • Provide technical leadership, mentoring, and project oversight: scope ML initiatives, allocate resources, and drive work from prototype to MVP to production with minimal handoffs.
  • Own agentic AI systems that plan, reason, use tools/APIs, maintain memory/context, self-correct, and execute multi-step workflows autonomously.
  • Design, implement, and productionize end-to-end RAG pipelines, balancing chunking, embeddings, indexing, latency, and cost.
  • Lead foundation model pipelines for embedding generation, prompt engineering, and LLM-driven decisioning.
  • Collaborate with Product, Engineering, and stakeholders to integrate agentic workflows using LangChain, Semantic Kernel, or custom orchestrators.
  • Build/refine LLM/ML models for entity resolution, semantic search, matching, and data unification.
  • Design A/B experiments to evaluate agent responses, grounding, and quality; evolve memory architectures, vector search, and context management.
  • Partner on integrations with enterprise APIs, backend systems, and cloud ML infra (e.g., AWS Bedrock).

Must-have Skills

  • Proven track record leading and motivating high-performing AI/ML teams in ambiguous, fast-paced environments.
  • Deep hands-on experience building and productionizing agentic/multi-step AI workflows.
  • Expertise in LLMs, RAG/retrieval architectures, foundation models (embeddings, transformers), vector search, semantic similarity, and prompt engineering.
  • Ownership of production ML code in Python, including cloud deployments (AWS Bedrock, SageMaker, Azure AI Studio).
  • Experience with graph memory, the Model Context Protocol (MCP) or an equivalent, and knowledge reasoning engines.
  • Experience with Kubernetes, vector databases, and modern AI infrastructure.
Apply Now
