SimplifyNext
AI Engineer
AI Infrastructure
About the Role
You will design and build sophisticated AI agents capable of independent operation, complex decision-making, and self-correction, delivering real-world AI and machine learning solutions across ASEAN.
We're not hiring someone to run models. We're hiring someone who builds systems that think.
Responsibilities:
- Agentic AI Systems Development
  - Design and build autonomous AI agents with sophisticated decision-making capabilities.
  - Implement orchestration workflows using LangChain, LangGraph, and Microsoft Bot Framework.
  - Create Retrieval-Augmented Generation (RAG) systems to minimize hallucinations.
  - Develop Memory, Reasoning, and Planning (MRP) features and Agent-to-Agent (A2A) communication protocols.
  - Work with LLM serving solutions such as Ollama and vLLM.
  - Apply strong agentic AI fundamentals: design agents that are reliable, observable, and able to recover from failure.
- Deployment & Operations
  - Deploy AI systems on AWS, Azure, or GCP with a focus on high availability, security, and scalability.
  - Containerize applications with Docker and orchestrate deployments on Kubernetes.
  - Build CI/CD pipelines with Argo Workflows and Argo CD.
  - Create well-documented APIs for client integration.
- Model Training & Optimization
  - Train and fine-tune custom models using TensorFlow and PyTorch.
  - Conduct structured experiments, including hyperparameter tuning.
  - Work with diverse data types: text, image, time-series, and graph data.
Required Qualifications:
- Shipped AI systems to production users (not prototypes)
- Strong agentic AI design fundamentals
- Hands-on cloud platform experience (AWS, Azure, or GCP)
- Python engineering skills with Git proficiency
- Docker and Kubernetes deployment experience
- Communication skills for both technical and non-technical audiences
Preferred Qualifications:
- 3-7 years of AI/ML engineering experience
- Hands-on implementation experience with LangChain, LangGraph, Ollama, and vLLM
- RAG system optimization background
- MRP concepts and A2A protocol design knowledge
- MLOps tooling expertise (Argo Workflows, Argo CD)
- TensorFlow/PyTorch proficiency
- Knowledge graphs or semantic web exposure
Key Tech Stack:
- AI/LLM Frameworks: LangChain, LangGraph, Microsoft Bot Framework, Ollama, vLLM
- Cloud Platforms: AWS, Azure, GCP
- Containerization & Orchestration: Docker, Kubernetes
- CI/CD Tools: Argo Workflows, Argo CD
- ML Frameworks: TensorFlow, PyTorch
Benefits / Culture:
- Full certification sponsorship
- Structured learning paths and mentorship
- End-to-end problem ownership
- Work on high-impact public sector and enterprise initiatives
- Engineering-first culture with world-class practitioners
- Regional scope across ASEAN and Asia Pacific
Location: Singapore (onsite).