Applied AI Engineer, Prototyping
AI Infrastructure
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. We democratize AI through high-performance, optimized, open-source, and cutting-edge models, products, and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.
About the Team
The Proto Team is the technical pre-sales arm of the GTM organization. We operate at the intersection of product and customer, translating what Mistral's technology can do into real-world solutions that deliver value fast. We work across three core areas:
- Deliver for customers: build high-impact solutions on short timelines.
- Ship for Mistral internally: build full-stack applications that unblock teams.
- Drive product innovation: test early, validate real use cases, and push systems to their limits.
About the Role
As an Applied AI Engineer in the Proto Team, you operate as a technical lead building production-grade AI systems in 4 to 8 weeks. You work across GTM, engineering, and applied AI, translating ambiguous business problems into working software. You own architecture, make technical decisions, and ship full-stack systems end to end.
What you will do
- Build and deliver full-stack AI solutions for global customers, owning the end-to-end execution from scoping to deployment as technical lead.
- Engage directly with customers to understand use cases, define requirements, and translate them into robust technical architectures and working systems.
- Collaborate across GTM, product, and engineering to ship solutions and contribute to internal tools, product improvements, and open-source initiatives.
- Solve complex applied AI problems across industries, working on real-world GenAI use cases and providing hands-on technical guidance throughout engagements.
About You
Technical Competencies
- 2+ years of experience as a hands-on engineer shipping AI-powered products (ML, software, or full-stack).
- Strong track record of building and deploying production systems end to end, not just prototypes.
- Comfortable working across the modern AI stack: LLMs, RAG, agentic systems, and their deployment in real-world applications.
- Strong software engineering fundamentals in Python, with experience building scalable backend systems (e.g., FastAPI, Pydantic).
- Working understanding of frontend development (e.g., React or Vue) to build usable interfaces when needed.
- Broad technical range: able to move fluidly between system design, infrastructure (e.g., Kubernetes), and applied LLM problem-solving, with a strong bias toward shipping.
Ideally you have:
- Participated in or won hackathons and AI competitions.
- Experience with Docker, Kubernetes, cloud platforms (AWS or GCP), and infrastructure tooling such as Terraform.
- Contributed to open-source projects, ideally in the LLM or applied AI space.
- Strong full-stack + AI experience, having built and shipped end-to-end applications (e.g., FastAPI, Next.js).