Principal AI Systems Engineer
Agentic Frameworks
About Us:
Founded in 1972, Atari is one of the world's most iconic consumer brands and a pioneer in the video game industry. Over the past two years, we've been building Atari India, a growing team that plays a critical role in supporting our global operations.
Position: Principal AI Systems Engineer
Experience: 8+ Years
Location: Netaji Subhash Place, Pitampura, New Delhi
Employment Type: Full-Time (Hybrid)
Reports to: Senior Director of Technology, India
Shift Hours: 9 AM–6 PM IST
About the Role
Architect, build, and own AI systems that automate expert-intensive technical workflows end-to-end — from CLI frameworks, MCP servers, and agent tooling through to production deployment, business outcome tracking, and continuous improvement.
Responsibilities
System Architecture
- Own end-to-end architecture of AI automation systems: workflow decomposition, component communication, human checkpoints, and failure behaviour
- Design and build internal CLI frameworks, reusable libraries, and agent scaffolding
- Author and maintain agent instruction files (SKILL.md, CLAUDE.md, system prompts) and MCP server definitions
- Configure Claude Code and Codex CLI environments: MCP wiring, tool permissions, slash commands, and engineering standards
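For illustration, the MCP wiring mentioned above is typically declared in a project-level `.mcp.json` file in a Claude Code environment; the server name, module path, and environment variable below are hypothetical examples, not a prescribed setup:

```json
{
  "mcpServers": {
    "compliance-db": {
      "command": "python",
      "args": ["-m", "compliance_mcp.server"],
      "env": { "COMPLIANCE_DB_URL": "${COMPLIANCE_DB_URL}" }
    }
  }
}
```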
Pipeline Development
- Build production-grade AI pipelines in Python: orchestration, structured prompting, context assembly, schema validation, and retry strategies
- Integrate AI systems with external tooling — version control, build pipelines, SDKs, compliance databases, internal APIs
- Design context assembly: how domain knowledge, runtime state, retrieved documents, and tool outputs compose into the precise input each pipeline stage needs
- Build and operate multi-agent systems: orchestrator-worker patterns, agent memory, structured handoffs, and conflict resolution
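The pipeline bullets above (structured prompting, schema validation, retry strategies) can be sketched in a few lines of Python. `call_model` and the `REQUIRED_KEYS` schema are hypothetical stand-ins for whatever LLM client and output contract a given pipeline uses:

```python
import json
import time
from typing import Any, Callable

# Illustrative output schema for one pipeline stage.
REQUIRED_KEYS = {"summary": str, "risk_level": str}

def validate(raw: str) -> dict[str, Any]:
    """Parse a model response and enforce the expected output schema."""
    data = json.loads(raw)
    for key, typ in REQUIRED_KEYS.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"schema violation on field {key!r}")
    return data

def run_stage(call_model: Callable[[str], str], prompt: str,
              retries: int = 3, backoff: float = 1.0) -> dict[str, Any]:
    """Call the model, validate the output, retry on failure with exponential backoff."""
    for attempt in range(retries):
        try:
            return validate(call_model(prompt))
        except (json.JSONDecodeError, ValueError):
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("unreachable")
```

The same shape generalises to any stage: parse, check against a schema, and only retry on recoverable validation failures.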
Prompt & Context Engineering
- Design, version, and maintain system prompts and agent instructions as first-class engineering artefacts
- Own output schema design and prompt regression testing with a maintained ground-truth eval set
- Engineer context windows with precision — balancing accuracy, token cost, and latency through compression and selective retrieval
- Partner with the RAG Engineer to define retrieval requirements
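As a rough illustration of budget-aware context assembly, the sketch below packs prioritised context pieces into a fixed token budget. The four-characters-per-token estimate is a placeholder heuristic, not a real tokenizer:

```python
# Rank candidate context pieces and pack the highest-priority ones
# into a fixed token budget, dropping whatever does not fit.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def assemble_context(pieces: list[tuple[int, str]], budget: int) -> str:
    """pieces: (priority, text) pairs; lower number = higher priority."""
    chosen, used = [], 0
    for _, text in sorted(pieces, key=lambda p: p[0]):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return "\n\n".join(chosen)
```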
Evaluation & Reliability
- Build and own the evaluation framework: test suites, regression benchmarks, LLM-as-judge pipelines, and per-stage quality metrics
- Implement production monitoring using LangFuse, Arize, or equivalent — latency, token usage, success rates, and output quality drift
- Run structured failure analysis and implement targeted fixes across context assembly, orchestration, and tool integration
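A minimal shape for the regression benchmark described above might look like the following. Exact-match scoring is a deliberately simple stand-in; the `scorer` slot is where an LLM-as-judge comparison would plug in:

```python
# Run a pipeline over a maintained ground-truth eval set and
# report the pass rate plus every failing case for analysis.

def exact_match(predicted: str, expected: str) -> bool:
    return predicted.strip().lower() == expected.strip().lower()

def run_eval(pipeline, eval_set, scorer=exact_match):
    """eval_set: list of (input, expected_output) pairs."""
    failures = []
    for inp, expected in eval_set:
        got = pipeline(inp)
        if not scorer(got, expected):
            failures.append((inp, expected, got))
    passed = len(eval_set) - len(failures)
    return passed / len(eval_set), failures
```

Tracking the pass rate per stage over time is what turns this into a regression benchmark rather than a one-off test.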
Governance & Technical Leadership
- Implement full audit trails — inputs, tools called, outputs, and human review triggers
- Set the technical standard for AI development across the organisation — architecture patterns, eval practices, and quality gates
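One common way to record the audit trail described above is an append-only JSON-lines log, one record per pipeline step. The field names here are illustrative only, not a fixed internal schema:

```python
import json
import time
import uuid

# One audit record per pipeline step: inputs, tools called, output,
# and whether a human-review checkpoint fired.

def audit_record(stage: str, inputs: dict, tools_called: list[str],
                 output: str, needs_human_review: bool) -> str:
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "stage": stage,
        "inputs": inputs,
        "tools_called": tools_called,
        "output": output,
        "needs_human_review": needs_human_review,
    }
    return json.dumps(record)  # one JSON line, appended to the trail
```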
Requirements
- Proven track record of building production AI automation systems from scratch — end-to-end from architecture through deployment
- Hands-on expertise with Claude Code, Codex CLI, Cursor, or equivalent — including MCP server configuration and agent instruction authoring
- Experience designing and deploying MCP servers and custom tools: tool schema, authentication, and permission boundaries
- Hands-on experience with LLM orchestration frameworks (LangChain, LangGraph, LlamaIndex, AutoGen)
- Experience designing AI evaluation frameworks, including regression testing
- Production-grade Python engineering with structured logging and error handling
- Experience with at least one major cloud platform (AWS, Azure, or GCP)
- Experience defining and tracking AI quality and performance metrics
Bonus Qualifications
- Gaming industry experience (pipelines, engines, platform certification)
- Game engine scripting or asset pipeline knowledge