Creating Contract-First Decision Systems with PydanticAI for Robust Enterprise AI


Introduction

If you’re looking to enhance your enterprise AI by building a contract-first decision system, you’re in the right place. This guide shows how to use PydanticAI to build risk-aware, compliant decision-making processes. By treating structured schemas as enforceable contracts, we can ensure that our AI operates within the necessary guardrails. This not only helps maintain compliance but also fosters stakeholder trust by keeping the decision-making process transparent and accountable.

Understanding Contract-First Design

In the realm of enterprise AI, a contract-first design approach emphasizes the importance of non-negotiable governance contracts. By using structured schemas, we can define a clear decision model that incorporates policy compliance, assesses risks, calibrates confidence, and outlines actionable next steps. This structured approach also improves communication among teams, as everyone involved can refer back to the defined contracts for clarity and alignment.

The Importance of Decision Models

A strict decision model is essential for ensuring that outputs are not only syntactically correct but also logically consistent and compliant with business policies. This is where Pydantic’s validators and self-correction mechanisms come into play. By integrating these features, organizations can create solid decision-making frameworks that are adaptable to varying scenarios while still adhering to established guidelines.

Setting Up Your Environment

To begin, let’s set up the environment by installing the necessary libraries and enabling nested async execution, which is needed to run async code inside notebook environments such as Colab. Proper configuration not only enhances performance but also reduces the likelihood of errors during execution, enabling smoother workflows.

!pip install -U pydantic-ai pydantic openai nest_asyncio
import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal
import nest_asyncio
nest_asyncio.apply()
from pydantic import BaseModel, Field, field_validator
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()

Defining the Risk and Decision Models

After setting up your environment, the next step is to define your risk and decision models. Using Pydantic’s BaseModel, you can create a schema that accurately captures the requirements and ensures compliance. By clearly defining risks and decision outputs, organizations can create a common understanding of what constitutes acceptable and unacceptable outcomes within their decision-making processes.

class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)

class DecisionOutput(BaseModel):
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))
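As a quick sanity check, the schema rejects malformed payloads before they ever reach downstream systems. The following is a minimal sketch reusing the RiskItem model defined above with a deliberately invalid payload:

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError

class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)

try:
    # "short" violates the min_length=8 constraint on `risk`
    RiskItem(risk="short", severity="high", mitigation="rotate keys quarterly")
except ValidationError as e:
    print("rejected:", e.error_count(), "error(s)")
```

Because the constraints live in the schema itself, every consumer of RiskItem gets the same guarantees without writing ad hoc checks.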

Implementing Validators

Validators play a significant role in the decision model as they enforce business logic. For instance, we can ensure that the confidence level aligns with the severity of identified risks. This alignment is critical as it helps prevent decisions that could lead to significant negative consequences, thereby safeguarding the organization’s interests.

# Note: define this validator inside the DecisionOutput class. It is attached
# to identified_risks (declared after confidence), so the already-validated
# confidence value is available via info.data. Attaching it to confidence
# would not work, because identified_risks hasn't been validated yet at
# that point.
@field_validator("identified_risks")
@classmethod
def confidence_vs_risk(cls, v, info):
    confidence = info.data.get("confidence")
    if confidence is not None and confidence > 0.70 and any(r.severity == "high" for r in v):
        raise ValueError("confidence too high given high-severity risks")
    return v
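To see the cross-field rule in action, here is a compact, self-contained sketch of the same pattern on a pared-down model (Decision and Risk are simplified stand-ins for the full schema):

```python
from typing import List, Literal

from pydantic import BaseModel, Field, ValidationError, field_validator

class Risk(BaseModel):
    severity: Literal["low", "medium", "high"]

class Decision(BaseModel):
    confidence: float = Field(..., ge=0.0, le=1.0)
    risks: List[Risk]

    @field_validator("risks")
    @classmethod
    def confidence_vs_risk(cls, v, info):
        # confidence is declared before risks, so it has already been validated
        confidence = info.data.get("confidence")
        if confidence is not None and confidence > 0.70 and any(r.severity == "high" for r in v):
            raise ValueError("confidence too high given high-severity risks")
        return v

try:
    Decision(confidence=0.9, risks=[{"severity": "high"}])
except ValidationError:
    print("blocked: overconfident despite high-severity risk")

print(Decision(confidence=0.6, risks=[{"severity": "high"}]).confidence)
```

A model claiming 90% confidence while flagging a high-severity risk is rejected, while the same risk at 60% confidence passes.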

Incorporating Enterprise Context

Next, we inject enterprise context through a typed dependency object. This helps to ensure that the decision-making process considers specific company policies and risk thresholds. By embedding this context within the decision model, organizations can tailor their risk assessments and decisions to align with their unique operational realities.

@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6

model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)
agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt=(
        "You are a corporate decision-analysis agent. You must evaluate risk, "
        "compliance, and uncertainty. All outputs must strictly satisfy the "
        "DecisionOutput schema."
    ),
)

Validating Outputs

Output validation is a critical step in maintaining the integrity of the decision-making process. We can enforce rules that ensure identified risks are meaningful and that compliance controls are explicitly referenced in the rationale. This level of scrutiny helps to build confidence in the outputs generated by the AI, as stakeholders can trust that the decisions made are based on a thorough analysis of risks and compliance factors.

from pydantic_ai import ModelRetry

@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    # Raising ModelRetry feeds the message back to the model so it can
    # self-correct, instead of failing the run outright.
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result

Running the Decision Process

Finally, let's run the decision model on a realistic request. This practical application of the decision model illustrates how it can be utilized in real-world scenarios, enabling organizations to make informed choices based on thorough risk assessments.

async def run_decision():
    deps = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )
    prompt = (
        "Decision request:\n"
        "Deploy an AI-powered customer analytics dashboard using a third-party "
        "cloud vendor. The system processes user behavior and transaction "
        "metadata. Audit logging isn't implemented and customer-managed keys "
        "are uncertain."
    )
    result = await agent.run(prompt, deps=deps)
    return result.output

from pprint import pprint

decision = asyncio.run(run_decision())
pprint(decision.model_dump())
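Once a validated DecisionOutput comes back, downstream systems can branch on it deterministically. Here is a minimal sketch, where `gate` is a hypothetical helper (not part of PydanticAI) mapping a decision payload to a follow-up action:

```python
# Hypothetical downstream gate acting on a validated decision payload.
def gate(decision: dict, risk_threshold: float = 0.6) -> str:
    # Compliance failures always go to a human, regardless of confidence.
    if not decision["compliance_passed"]:
        return "escalate_to_compliance"
    if decision["decision"] == "reject":
        return "halt"
    # Low-confidence approvals need a human in the loop.
    if decision["confidence"] < risk_threshold:
        return "request_human_review"
    return "proceed"

print(gate({"compliance_passed": False, "decision": "approve", "confidence": 0.9}))
print(gate({"compliance_passed": True, "decision": "approve", "confidence": 0.8}))
```

Because the schema guarantees these fields exist and are well-typed, this routing logic stays simple and never has to defend against missing or malformed keys.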

Conclusion

By following these steps, you can transition from generic AI outputs to structured, governed decision systems using PydanticAI. This method not only aligns decisions with policy requirements but also maintains the integrity of the decision-making process, allowing for safe failures and self-corrections. As organizations embrace this approach, they can expect to see improved compliance, reduced risk exposure, and enhanced decision quality, ultimately leading to better business outcomes.

FAQs

What is PydanticAI?

PydanticAI is a framework that combines Pydantic with AI models to create structured, reliable decision-making processes that are compliant with business logic. Its design allows for the integration of complex decision-making factors while ensuring clarity and accountability.

How does a contract-first approach benefit AI decision systems?

A contract-first approach ensures that decisions adhere to predefined schemas, reducing the chances of logical inconsistencies and compliance issues. This structured methodology also promotes a culture of accountability and transparency, which is vital for organizations operating in regulated environments.

What libraries do I need to install for this setup?

You need to install pydantic-ai, pydantic, openai, and nest_asyncio to set up your environment. Together these provide schema validation, the agent framework, model access, and notebook-friendly async support.

Can I customize the decision models?

Absolutely! You can modify the schemas according to your business needs, ensuring that the models fit your specific requirements. This flexibility is key for adapting the decision-making framework to various operational contexts and industry standards.

Is it possible to integrate this with existing systems?

Yes, you can integrate this decision-making framework with your existing systems by aligning the output format with your current data handling processes. This integration capability allows organizations to tap into their current investments while enhancing their decision-making capabilities through AI.
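For instance, any Pydantic model serializes straight to JSON, so validated outputs can be handed to message queues, audit logs, or REST endpoints unchanged. A minimal sketch, with `DecisionRecord` as a hypothetical integration payload:

```python
from pydantic import BaseModel

class DecisionRecord(BaseModel):  # hypothetical hand-off payload
    decision: str
    confidence: float

record = DecisionRecord(decision="approve_with_conditions", confidence=0.55)
# model_dump_json() emits a compact JSON string any existing system can consume
print(record.model_dump_json())
```

The same `model_dump()` / `model_dump_json()` pair used earlier to print the decision is all the glue most integrations need.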
