AI Data Platform Lead
About the Role
As the most trusted global leader in data-first contract lifecycle management (CLM) software, Agiloft helps organizations manage the end-to-end process of proposing, negotiating, signing, and leveraging contracts using our flexible Data-first Agreement Platform (DAP). Leading analyst firms, including Gartner, Forrester, and IDC, consistently recognize Agiloft as a leader in the CLM space.
Position Overview
The AI Data Platform Lead is the foundational technical role within AI Operations responsible for designing, building, and governing the cross-departmental data infrastructure that powers Agiloft's AI transformation. This role owns the full data engineering scope required to make the Data Warehouse Foundation serve not only business intelligence and reporting, but the complete spectrum of AI use cases: GPT assistants, AI agents, predictive analytics, real-time operational intelligence, and the contextual intelligence layer that underpins the organization's intelligent operating model.
The AI Data Platform Lead reports to the VP of AI Operations and is a core member of the AI Operations team. This role is distinct from and complementary to the Principal Data and Integrations Architect, who owns the infrastructure layer. The AI Data Platform Lead operates at the layer above infrastructure: owning what the data means, how it is modeled for AI and analytics consumption, whether it is trustworthy and fit for purpose, and how it connects to the intelligence layer that GPT assistants, agents, and predictive models depend on.
Job Responsibilities
- Own the end-to-end data architecture for the Data Warehouse Foundation, designing for AI-first consumption across GPT assistants, AI agents, predictive models, and operational intelligence
- Lead data modeling across all 11 departments, designing canonical enterprise data models that serve cross-functional AI and analytics use cases without duplication or fragmentation
- Design and implement the contextual intelligence layer including RAG architecture, vector store strategy, knowledge base ingestion pipelines, and document and unstructured data processing
- Build and maintain the agentic data integration layer: real-time and near-real-time data access patterns, agent memory and state persistence design, orchestration data requirements, and agent output integration back into the warehouse
- Own the AI/ML feature layer: feature engineering strategy and standards, training data pipeline design, feature store architecture, and model output integration
- Design and govern the operational data and GPT context layer
- Lead the Data Warehouse Foundation build in partnership with the external consulting team
- Design and manage data ingestion, ELT/ETL, and orchestration pipelines across all source systems
- Establish and enforce AI data engineering standards across the organization
- Own data access policy design and least-privilege access controls in partnership with Security
- Define data quality standards and monitoring processes for AI-consumed data
- Partner with the Principal Data and Integrations Architect on infrastructure design
- Manage the AI Ops data architecture roadmap
- Collaborate with the AI Agent Engineer and GPT & AI Systems Lead to ensure data infrastructure supports agent orchestration, retrieval-augmented generation, and multi-step reasoning workflows
Required Qualifications
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related technical field
- 7-10 years of experience in data engineering, data architecture, or a related technical function, with at least 3 years focused on AI or ML data infrastructure
- Deep expertise in modern data stack technologies: Snowflake required, plus experience with dbt, Airflow (or equivalent orchestration), and ELT/ETL pipeline design
- Demonstrated experience designing data architecture for AI consumption, including vector databases, embedding pipelines, RAG systems, or feature stores
- Strong data modeling skills across multiple paradigms
- Experience building and operating real-time or near-real-time data pipelines for operational AI use cases
- Proficiency in Python and SQL; experience with cloud data infrastructure on AWS required
- Experience designing data access patterns and governance controls for AI systems
- Experience in the SaaS industry
Preferred Qualifications
- Experience in private equity-backed SaaS organizations
- Experience with agentic AI frameworks: LangGraph, Mastra, or equivalent
- Experience building or operating RAG architectures at production scale
- Experience with agent memory architectures and state persistence design for multi-step AI workflows
- Familiarity with AI governance and compliance requirements for data used in automated decision-making
- Experience with Tines or equivalent no-code/low-code orchestration platforms for simple agent pipelines
- Exposure to contract lifecycle management, legal tech, or professional services data domains
Benefits
Medical, dental, and vision insurance; short-term and long-term disability; life insurance and AD&D; 401(k) with company match; flexible vacation; paid parental leave; voluntary benefits including pet insurance.