Building an Agentic AI SQL Modernization Pipeline

(Hands-on Lab Experience)

Today I had the opportunity to participate in a hands-on lab event where I built something I had wanted to experiment with for a long time:

An Agentic AI-based SQL modernization system running on Azure.

Not just a single model.
Not just prompt engineering.
But a 3-agent orchestration pipeline with validation logic, optimization, and persistence.

And I loved every minute of it.

The Problem: Legacy SQL is Messy

We started with a very real enterprise problem:

Legacy SQL dialects (Oracle, Teradata, DB2, Netezza…) come with:

  • proprietary functions
  • undocumented logic
  • outdated syntax
  • performance risks
  • manual, error-prone migration processes

Modernizing thousands of queries manually is slow and expensive.
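To make the dialect gap concrete, here are a few classic Oracle-isms and their T-SQL counterparts, sketched as a naive lookup table. This is my own illustration, not anything from the lab: real queries need far more than string substitution, which is exactly why an LLM agent is used instead.

```python
# Naive illustration of Oracle -> T-SQL dialect differences.
# A handful of well-known mappings; real translation needs a parser
# (or an LLM agent), not string replacement.
ORACLE_TO_TSQL = {
    "NVL(": "ISNULL(",        # null-coalescing function
    "SYSDATE": "GETDATE()",   # current timestamp
    " FROM dual": "",         # T-SQL allows SELECT without FROM
}

def naive_translate(sql: str) -> str:
    for oracle_form, tsql_form in ORACLE_TO_TSQL.items():
        sql = sql.replace(oracle_form, tsql_form)
    return sql.strip()

print(naive_translate("SELECT NVL(name, 'n/a'), SYSDATE FROM dual"))
# -> SELECT ISNULL(name, 'n/a'), GETDATE()
```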

So the question became:

What if we designed an Agentic AI system to do this automatically?

The Idea: Agentic AI Instead of One Big Prompt

Instead of asking one model to do everything, we designed a multi-agent workflow:

  1. Translation Agent – converts legacy SQL (e.g., Oracle) to Azure SQL (T-SQL)
  2. Validation Agent – checks syntax + semantic correctness
  3. Optimization Agent – suggests performance improvements (indexes, best practices)

Each agent has a specialized role.
They are orchestrated in sequence.

Translation → Validation → Optimization

This is not just transformation.
This is controlled, intelligent execution.

What I Actually Built (Step by Step)

If I had to rebuild this tomorrow, this would be my mental roadmap:


1️⃣ Azure Foundations

Before touching the agents, I needed infrastructure.

  • Azure subscription access
  • Azure AI Foundry project
  • Resource deployment
  • Endpoint configuration
  • Cosmos DB account

Why Cosmos DB?

Because every translation result, validation output, and optimization suggestion needed to be stored for traceability and auditability.

This made it enterprise-ready.
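For illustration, one persisted pipeline run might look roughly like this. The field names are my own sketch, not the lab's exact schema; since Cosmos DB stores JSON documents, anything serializable works, as long as each document carries an `id`.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical shape of one pipeline run persisted to Cosmos DB.
# Field names are illustrative, not the lab's exact schema.
run_record = {
    "id": str(uuid.uuid4()),  # Cosmos DB requires an "id" on every item
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "source_dialect": "oracle",
    "source_sql": "SELECT NVL(name, 'n/a') FROM employees",
    "translated_sql": "SELECT ISNULL(name, 'n/a') FROM employees",
    "validation": {"valid": True, "errors": []},
    "optimization": ["Consider an index on employees(name)"],
}

# Cosmos DB stores JSON, so the record must round-trip cleanly.
assert json.loads(json.dumps(run_record)) == run_record
```

Keeping every stage's output in one document is what makes a run auditable end to end.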


2️⃣ Creating the Agents

Inside Azure AI Foundry, I created:

  • A Translation Agent
  • A Validation Agent
  • An Optimization Agent

Each had:

  • Clear role definition
  • Strict output requirements
  • Structured JSON response expectations
  • Defined hand-off logic
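As a sketch of what "strict output requirements" meant in practice: each agent's instructions pinned down an exact JSON shape, so the next stage could parse the reply instead of scraping prose. The prompt wording and schema below are my own illustration, not the lab's verbatim text.

```python
import json

# Illustrative system prompt for the Validation Agent
# (my own wording, not the lab's exact instructions).
VALIDATION_INSTRUCTIONS = """\
You are a T-SQL validation agent.
Given a SQL query, respond ONLY with JSON of the form:
{"valid": <bool>, "errors": [<string>, ...]}
Do not add commentary outside the JSON object."""

REQUIRED_KEYS = {"valid", "errors"}

def parse_validation_reply(reply: str) -> dict:
    """Parse an agent reply and enforce the expected JSON contract."""
    result = json.loads(reply)
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        raise ValueError(f"agent reply missing keys: {missing}")
    return result

# Simulated agent reply, standing in for a real model call:
reply = '{"valid": false, "errors": ["Unknown function NVL"]}'
print(parse_validation_reply(reply))
# -> {'valid': False, 'errors': ['Unknown function NVL']}
```

Enforcing the contract at parse time is also what makes prompt tweaks visible immediately: a reply that drifts from the schema fails loudly instead of silently corrupting the hand-off.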

One of the most interesting parts?

At first, the system did not always behave as expected.

So I had to:

  • Adjust prompts
  • Refine output constraints
  • Modify validation expectations

And I could immediately see the effect of those changes.

That feedback loop was incredibly valuable.


3️⃣ Designing the Pipeline Logic

The critical rule:

If validation fails → STOP
If validation passes → hand off to optimization

That small piece of orchestration logic is what turns a simple LLM call into a real Agentic system.

This was the moment where it stopped being “just AI” and started feeling like architecture.
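The rule above can be sketched as a small orchestrator. The stub agents below stand in for the real Azure AI Foundry agent calls; the gating logic is the point, not the stubs.

```python
def run_pipeline(sql, translate, validate, optimize):
    """Translation -> Validation -> Optimization, with a hard stop on failure."""
    translated = translate(sql)
    verdict = validate(translated)
    if not verdict["valid"]:
        # Validation failed: STOP. Never hand invalid SQL to the optimizer.
        return {"status": "stopped_at_validation", "errors": verdict["errors"]}
    return {
        "status": "complete",
        "translated_sql": translated,
        "suggestions": optimize(translated),
    }

# Stub agents standing in for the real model-backed agents:
translate = lambda sql: sql.replace("SYSDATE", "GETDATE()")
validate = lambda sql: {"valid": "SYSDATE" not in sql, "errors": []}
optimize = lambda sql: ["Add a covering index for frequent predicates"]

print(run_pipeline("SELECT SYSDATE", translate, validate, optimize)["status"])
# -> complete
```

Because each agent is just a callable with a typed result, swapping a stub for a real agent (or adding a retry loop around validation) doesn't change the orchestrator.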


4️⃣ Connecting Everything in VS Code

Then came the practical engineering part:

  • Azure CLI authentication
  • Retrieving Agent IDs
  • Configuring the .env file
  • Installing dependencies
  • Running the Streamlit app locally
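For reference, the .env file held roughly this kind of configuration. The variable names here are my guess at the shape, not the lab's exact keys, and the placeholders are deliberately left unfilled:

```
PROJECT_ENDPOINT=https://<your-foundry-project>.services.ai.azure.com/api/projects/<project-name>
TRANSLATION_AGENT_ID=<agent-id>
VALIDATION_AGENT_ID=<agent-id>
OPTIMIZATION_AGENT_ID=<agent-id>
COSMOS_ENDPOINT=https://<your-account>.documents.azure.com:443/
COSMOS_DATABASE=<database-name>
```

Keeping endpoints and agent IDs in the environment (and out of the code) is what makes the same app portable between a local run and a cloud deployment.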

Using:

  • azure-ai-agents
  • azure-ai-projects
  • azure-identity
  • azure-cosmos
  • Streamlit

When the app finally ran locally and I saw:

✔ Translation
✔ Validation
✔ Optimization
✔ Results stored in Cosmos DB

That was a very satisfying moment.


5️⃣ The Final Result

I ended up with:

A local web application where I can:

  • Paste Oracle SQL
  • Upload SQL files
  • See translation results
  • See validation feedback
  • See optimization recommendations
  • Review history stored in Cosmos DB

This is no longer a demo script.

This is a cloud-ready modernization engine.

What I Learned

This lab gave me hands-on experience with:

  • Agentic AI design principles
  • Multi-agent orchestration
  • LLM-based dialect translation
  • Semantic validation logic
  • Cloud-native persistence (Cosmos DB)
  • Azure authentication and SDK integration
  • Container-ready architecture mindset

But more importantly:

I experienced what it feels like to design a system where
AI is not just answering a question,
but participating in a structured workflow.

Personal Takeaway

The most exciting part for me was this:

I didn’t just observe Agentic AI.

I built one.

Even in a lab environment, I:

  • Designed it
  • Tuned it
  • Fixed it
  • Validated it
  • Ran it locally

And I could immediately see how changing one part of the pipeline impacted the whole system.

That kind of experimentation is priceless.

I genuinely wish more sandbox environments like this were available to people who want to gain real, practical experience with modern AI systems.

Because this is not the future.

This is already happening.

And I want to build more of it.