Microsoft Agent Framework vs Google ADK vs AWS Strands
Below is a practical engineering comparison of three emerging agent frameworks:
Microsoft Agent Framework (MAF)
Google ADK (Agent Development Kit)
AWS Strands Agents
These are enterprise-grade agent orchestration frameworks designed for building multi-agent systems, tool-calling pipelines, and autonomous workflows. (Ampcome)
| Dimension | Microsoft Agent Framework | Google ADK | AWS Strands |
| --- | --- | --- | --- |
| Positioning | — | Structured agent engineering with cloud integration | Agent runtime on the Bedrock ecosystem |
| Origin | Evolution of AutoGen + Semantic Kernel | Native framework for Gemini / Vertex AI | Built around Amazon Bedrock Agents / AgentCore |
| Programming model | Agent classes + threads + tools | Modular agent components | Workflow orchestration + agent runtime |
| Multi-agent collaboration | Strong (teams of agents) | Hierarchical agent composition | Supported, but workflow-oriented |
| Model support | Azure OpenAI + OpenAI + others | Gemini (best support) + external models | Bedrock models (Claude, Titan, etc.) |
| Cloud bias | Azure ecosystem | Google Cloud ecosystem | AWS ecosystem |
| Best use case | Enterprise automation, copilots, SOC workflows | Enterprise agents with structured pipelines | Large-scale production automation |
| Open-source friendliness | Medium | Medium | Low (cloud-centric) |
1. Microsoft Agent Framework (MAF)
What it is
Microsoft’s next-gen agent SDK that merges ideas from:
AutoGen (multi-agent conversations)
Semantic Kernel (enterprise integrations)
It provides structured agent orchestration with state, tools, and threads.
Key Architecture
Core Capabilities
1. Agent Thread State
Explicit state container for conversations.
2. Multi-Agent Teams
Example:
3. Tool orchestration
Agents can call:
APIs
Databases
Code execution
other agents
4. Enterprise security
Azure identity
enterprise logging
governance
Strengths
Very strong multi-agent design
Good enterprise reliability
Supports complex agent collaboration
Weakness
Azure ecosystem bias
heavier framework
2. Google ADK (Agent Development Kit)
Google ADK is Google’s structured agent engineering framework designed to work with Gemini + Vertex AI.
It focuses on hierarchical agents and structured pipelines. (Ampcome)
Architecture
Key Features
1. Hierarchical agents
Agents can spawn sub-agents.
2. Built-in toolkits
Example:
search
RAG
code execution
API calls
3. Tight integration with Google AI
Gemini models
Vertex AI
BigQuery
Google Search
4. Fast development
Basic agents can be built with <100 lines of code. (Ampcome)
Strengths
very structured
strong for enterprise pipelines
excellent data integration
Weakness
GCP lock-in
limited community compared to LangChain/AutoGen
3. AWS Strands Agents
AWS Strands is AWS’s agent runtime ecosystem built around:
Amazon Bedrock
AgentCore
workflow orchestration
It focuses on scalable production agents.
Architecture
Core capabilities
1. Bedrock integration
Supports models like:
Claude
Titan
Llama
2. Tool registry
Agents can automatically call:
AWS Lambda
APIs
databases
3. Production infrastructure
Built for:
scalability
security
monitoring
Strengths
excellent production infrastructure
deep AWS integrations
scalable
Weakness
less developer-friendly
AWS lock-in
Architectural Comparison
Microsoft approach
Google approach
AWS approach
Which One Should You Choose?
Choose Microsoft Agent Framework if
you want true multi-agent collaboration
building complex reasoning pipelines
enterprise automation
Example:
SOC automation
research assistants
enterprise copilots
Choose Google ADK if
you want structured agent pipelines
using Gemini + Google data stack
Example:
analytics agents
enterprise data agents
research agents
Choose AWS Strands if
you want production-scale agents
already using AWS infrastructure
Example:
automated DevOps agents
customer support automation
large scale enterprise automation
My Honest Engineering Take
Current maturity ranking:
1. Microsoft Agent Framework → best multi-agent system
2. Google ADK → best structured agent engineering
3. AWS Strands → best production infrastructure
But ecosystem ranking: LangGraph / AutoGen still dominate open-source agent systems.
💡 Since you're deep in agent frameworks (AutoGen, LangGraph, MCP, etc.), the most important insight is:
The industry is converging on a single core architecture: all three frameworks implement essentially the same design with different cloud biases.
2
I want to create a Terraform-like, vendor-agnostic agentic framework where I can switch from one vendor-based library to another. With my library, people can switch from Strands to ADK by changing one line.
What you are describing is essentially a vendor-agnostic agent abstraction layer, similar to how Terraform abstracts cloud providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure.
Conceptually, you would build a unified agent interface where the underlying execution engine can be swapped between frameworks such as:
Microsoft Agent Framework
Google Agent Development Kit
AWS Strands Agents
Changing one configuration line would switch the provider.
Below is how to design this properly.
1. Core Idea: Agent Abstraction Layer
Your library should define a universal agent specification independent of any framework.
Conceptually, users interact with your API, and your API delegates to whichever vendor framework is configured.
2. Define a Canonical Agent Interface
You must define the minimal agent primitives common across all systems.
Typical shared primitives:
Example Python interface:
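A minimal sketch of what such a canonical interface could look like (names such as `BaseAgent`, `add_tool`, and `run` are my own, not taken from any vendor SDK):

```python
from abc import ABC, abstractmethod
from typing import Any, Callable


class BaseAgent(ABC):
    """Minimal canonical contract; every provider adapter implements it."""

    def __init__(self, model: str):
        self.model = model
        self.tools: dict[str, Callable[..., Any]] = {}

    def add_tool(self, name: str, fn: Callable[..., Any]) -> None:
        """Register a tool under a provider-neutral name."""
        self.tools[name] = fn

    @abstractmethod
    async def run(self, task: str) -> str:
        """Execute a task and return the final answer."""
```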
This becomes your stable public API.
3. Provider Adapter Pattern
Each framework gets its own adapter implementation.
Example:
Strands Adapter
ADK Adapter
Microsoft AgentFramework Adapter
4. Provider Registry
You need a registry that selects the correct backend.
Example:
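One possible shape for such a registry, with stub adapters standing in for real framework integrations (the class and function names here are illustrative, not from any SDK):

```python
from typing import Type


class AgentAdapter:
    """Base class shared by all provider adapters (stub for illustration)."""

    def __init__(self, model: str):
        self.model = model

    async def run(self, task: str) -> str:
        raise NotImplementedError


class StrandsAdapter(AgentAdapter):
    async def run(self, task: str) -> str:
        # A real adapter would call the AWS Strands SDK here.
        return f"[strands/{self.model}] {task}"


class ADKAdapter(AgentAdapter):
    async def run(self, task: str) -> str:
        # A real adapter would call Google ADK here.
        return f"[adk/{self.model}] {task}"


# The registry: provider name -> adapter class.
PROVIDERS: dict[str, Type[AgentAdapter]] = {
    "strands": StrandsAdapter,
    "adk": ADKAdapter,
}


def create_agent(provider: str, model: str) -> AgentAdapter:
    try:
        return PROVIDERS[provider](model)
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}") from None
```

Switching the backend is then just a different string passed to `create_agent`.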
5. The "One Line Switch"
User configuration:
Switch provider:
Everything else stays identical.
6. Configuration Driven (Terraform Style)
Terraform works because of declarative configuration.
You could replicate that.
Example:
User code:
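A sketch of the declarative flow, with invented field names; the YAML would be parsed (for example with PyYAML) into a plain dict before validation:

```python
# Contents of a hypothetical agent.yaml:
#
#   provider: strands
#   model: claude-3
#   task: "Summarize this report"
#
# After parsing, the config is a plain dict:
config = {
    "provider": "strands",
    "model": "claude-3",
    "task": "Summarize this report",
}

REQUIRED_KEYS = {"provider", "model", "task"}


def load_agent_config(cfg: dict) -> dict:
    """Validate the declarative config before handing it to the registry."""
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"missing config keys: {sorted(missing)}")
    return dict(cfg)
```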
7. Hard Problem You Must Solve
The biggest difficulty is feature mismatch.
Not all frameworks support the same capabilities.
Example differences:
| Feature | AgentFramework | ADK | Strands |
| --- | --- | --- | --- |
| Multi-agent collaboration | yes | partial | limited |
| Tool routing | yes | yes | yes |
| Planner agents | yes | yes | limited |
| Memory store | yes | yes | external |
You must define a capability matrix for each provider, and anything unsupported must degrade gracefully.
Example:
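A minimal sketch of graceful degradation via a capability check. The matrix below is derived from the feature table above, treating "partial"/"limited"/"external" as unsupported for simplicity; all names are my own:

```python
# Hypothetical capability matrix, derived from the feature table above.
CAPABILITIES = {
    "agentframework": {"multi_agent", "tool_routing", "planner", "memory"},
    "adk": {"tool_routing", "planner", "memory"},
    "strands": {"tool_routing"},
}


class UnsupportedFeature(Exception):
    """Raised when a workflow needs a feature the provider lacks."""


def require(provider: str, feature: str, strict: bool = True) -> bool:
    """Check a feature; raise in strict mode, otherwise degrade gracefully."""
    supported = feature in CAPABILITIES.get(provider, set())
    if not supported and strict:
        raise UnsupportedFeature(f"{provider} does not support {feature}")
    return supported
```

With `strict=False`, the runtime can fall back to a simpler execution path instead of failing.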
8. Optional: Graph Execution Layer
Many agent systems now use graph execution like LangGraph.
Your abstraction could define a portable graph spec.
Example:
Providers translate this graph to:
ADK agents
Strands workflow
Microsoft multi-agent
This would make your framework much more powerful than simple wrappers.
9. Real Architecture
10. If You Build This Properly
You essentially create a portability layer for agent runtimes. Value proposition:
avoid vendor lock-in
swap agent runtimes
benchmark frameworks
portability
This is actually a huge gap in the ecosystem today.
3
AgentLegatus
Reaching 10k+ GitHub stars for a developer tool like Legatus (Terraform for AI Agents) is mainly about solving a widely felt pain and making the project extremely usable. Below is the realistic path most successful infra projects follow.
1. Solve a Pain Developers Already Have
The strongest driver of stars is removing friction.
Right now the agent ecosystem is fragmented:
Microsoft Agent Framework
Google Agent Development Kit
AWS Strands Agents
LangGraph
AutoGen
Problems engineers face:
vendor lock-in
different APIs
migration difficulty
benchmarking across frameworks is painful
Your pitch: write your agents once, run them on any framework. That message is extremely compelling.
2. The GitHub “Star Magnet” Formula
Projects that hit 10k stars usually combine:
A. Very simple value proposition
The README headline, the first code sample, and the one-line provider switch should each be graspable in seconds.
If developers instantly understand it, stars grow fast.
3. Launch with Multiple Framework Adapters
The biggest growth driver will be how many frameworks you support.
Launch target:
Strands
ADK
AgentFramework
LangGraph
AutoGen
Developers love interoperability tools.
4. Benchmark Feature (Huge Star Driver)
Add the ability to run one workflow across every supported framework and compare the results.
Engineers LOVE this.
It makes Legatus the agent benchmarking standard.
5. One-Command Agent Deployment
A CLI that deploys an agent workflow from a single config file would make the tool feel like:
Terraform
Kubernetes
Infrastructure-style developer tools spread very fast.
6. Developer Experience Matters More Than Features
Your README should include:
30-second quickstart
one-line provider switch demo
architecture diagram
real-world examples
Example:
7. Content Strategy (Most Important)
Most 10k-star projects grow because the creator educates the community.
Example posts:
Each article naturally links to Legatus.
This drives developers to the repo.
8. Community Extensions
Make it easy for others to add adapters.
Example:
Open-source ecosystems grow when others contribute integrations.
9. Viral Demo
Create a short video demo showing the same workflow switching between frameworks with a one-line change.
That alone could generate thousands of stars.
10. Realistic Timeline
Typical open-source growth pattern:
The Real Reason This Idea Is Powerful
The AI ecosystem is moving toward multi-framework environments.
Companies already use combinations of:
OpenAI
Anthropic
Gemini
internal models
And the same fragmentation will happen with agent frameworks.
If Legatus becomes the standard abstraction layer, it could become the "Terraform for AI agents". That category alone can sustain tens of thousands of stars.
4
Isn’t LangGraph already doing such vendor agnostic?
Short answer: yes, but only partially. LangGraph is model/vendor-agnostic at the LLM layer, but it is not framework-agnostic. That difference is exactly where your idea (Legatus) fits.
Let’s break this down precisely.
1. What LangGraph Is Actually Agnostic About
LangGraph inherits the abstraction layer from LangChain, which allows you to swap LLM providers easily.
For example, you can switch between:
OpenAI
Anthropic
Gemini
local models
Bedrock models
without changing the architecture much. (ThirdEye Data)
The framework is intentionally model-agnostic, meaning your agent graph stays the same while the model changes. (Laava)
Switching models is typically a one-line change (for example, swapping `ChatOpenAI` for `ChatAnthropic` in LangChain). That is provider abstraction at the model layer.
2. What LangGraph Does NOT Abstract
LangGraph does not abstract other agent frameworks.
You cannot point the same workflow at a different agent runtime (say, Strands or ADK) and run it unchanged.
LangGraph itself is the agent runtime.
So the architecture becomes:
But you cannot swap the runtime itself (LangGraph for ADK, Strands, or Agent Framework), because each of those frameworks has a different execution model.
3. The Layer Where LangGraph Operates
LangGraph sits at the orchestration layer.
In a typical stack, LangGraph sits above the model providers: it abstracts the model layer, but not the agent runtime layer.
4. The Layer Your Idea Targets
Your idea (Legatus) would sit above agent frameworks.
Architecture would look like:
So Legatus would abstract the agent runtime itself, not just the model.
5. Why This Problem Exists
Every framework defines different concepts:
| Framework | Execution model |
| --- | --- |
| LangGraph | stateful graph |
| ADK | hierarchical agents |
| Strands | workflow runtime |
| AgentFramework | multi-agent threads |
So the difficulty is mapping one execution model onto another across different frameworks.
That is a hard but valuable abstraction problem.
6. Simple Comparison
| Layer | Example |
| --- | --- |
| Model abstraction | LangChain |
| Agent orchestration | LangGraph |
| Agent runtime abstraction | (your idea) |
So: LangGraph and Legatus operate at different layers.
7. Why Your Idea Still Makes Sense
Even if LangGraph exists, companies still use:
Google Agent Development Kit
AWS Strands Agents
Microsoft Agent Framework
because of cloud integration.
So organizations often end up with multiple agent frameworks simultaneously.
That’s the exact gap your project would solve.
✅ Conclusion
LangGraph is model-agnostic, but not framework-agnostic.
Your project would target a different abstraction layer.
5
Give me base code for AgentLegatus
Below is a clean minimal foundation for AgentLegatus — a vendor-agnostic agent framework abstraction.
The design goal is a stable public API with swappable provider backends. The structure is production-oriented rather than a toy, although the providers below are stubs.
1. Project Structure
2. Base Agent Interface
base.py
This defines the universal agent contract.
Every provider must implement this.
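The `base.py` listing itself did not survive in the transcript; a minimal sketch consistent with the provider listings (which do `from base import BaseAgentProvider`) could be:

```python
from abc import ABC, abstractmethod


class BaseAgentProvider(ABC):
    """Universal contract every provider adapter implements."""

    @abstractmethod
    async def run(self, task: str) -> str:
        """Execute a task on the underlying framework and return the result."""
```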
3. Provider Registry
registry.py
This allows dynamic switching.
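The `registry.py` code is missing from the transcript; a plausible minimal sketch (function names are my own) is a mapping from provider names to adapter classes:

```python
# registry.py sketch: the real module would import and register
# StrandsProvider, ADKProvider, and MSProvider here.
_REGISTRY: dict[str, type] = {}


def register(name: str, cls: type) -> None:
    """Make a provider class available under a short name."""
    _REGISTRY[name] = cls


def get_provider(name: str) -> type:
    """Resolve a provider class by name, failing loudly if unknown."""
    try:
        return _REGISTRY[name]
    except KeyError:
        raise ValueError(f"unknown provider: {name!r}") from None
```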
4. Main Agent API
agent.py
This is what users import.
This gives the one-line provider switch.
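The `agent.py` listing is also missing; a self-contained sketch of the user-facing facade (with an inline stand-in for the registry, so the snippet runs on its own) could look like:

```python
import asyncio


# Inline stand-ins for registry.py and a provider (sketch only).
class _EchoProvider:
    async def run(self, task: str) -> str:
        return f"echo: {task}"


_REGISTRY = {"echo": _EchoProvider}


class Agent:
    """User-facing facade: resolves the backend by provider name."""

    def __init__(self, provider: str, model: str):
        self.backend = _REGISTRY[provider]()
        self.model = model

    async def run(self, task: str) -> str:
        return await self.backend.run(task)
```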
5. Example Provider: Strands
providers/strands_provider.py
6. Example Provider: ADK
providers/adk_provider.py
7. Example Provider: Microsoft Agent Framework
providers/ms_provider.py
8. User Example
example.py
Output:
9. One-Line Provider Switch
Change `provider="strands"` to `provider="adk"` in the Agent constructor. Everything else stays identical.
10. Future Extensions
A real AgentLegatus v1 architecture would add:
Graph execution
Tool abstraction
Memory abstraction
Observability
11. Final Architecture
6
a much more powerful v2 architecture that would make AgentLegatus significantly better than LangGraph and AutoGen for enterprise systems.
If you want AgentLegatus v2 to be genuinely powerful (not just another wrapper), the key is to design it as a three-layer agent operating system rather than a simple adapter library.
Most frameworks today mix execution, orchestration, and provider logic together. Your framework should separate them cleanly.
AgentLegatus v2 Architecture
Core philosophy: keep the execution engine, the orchestration layer, and the provider adapters strictly separate. This separation is what allows true vendor switching.
1. Portable Agent Specification (PAS)
Define a neutral agent schema independent of any framework.
Example:
A workflow becomes a declarative agent graph definition, not tied to any framework.
Example structure:
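A minimal sketch of such a portable spec (the `AgentSpec`/`WorkflowSpec` names and fields are my own invention):

```python
from dataclasses import dataclass, field


@dataclass
class AgentSpec:
    """One agent in the portable graph."""
    name: str
    role: str
    tools: list[str] = field(default_factory=list)


@dataclass
class WorkflowSpec:
    """Framework-neutral agent graph: nodes plus directed edges."""
    agents: list[AgentSpec]
    edges: list[tuple[str, str]]  # (from_agent, to_agent)

    def validate(self) -> None:
        names = {a.name for a in self.agents}
        for src, dst in self.edges:
            if src not in names or dst not in names:
                raise ValueError(f"edge {src}->{dst} references unknown agent")


spec = WorkflowSpec(
    agents=[AgentSpec("research", "researcher"), AgentSpec("report", "writer")],
    edges=[("research", "report")],
)
spec.validate()
```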
2. Graph Execution Engine
Instead of relying on LangGraph internally, build a minimal execution engine.
This fits well with your existing vector DB benchmarking work.
6. Observability Layer
Enterprise systems require tracing.
AgentLegatus should integrate OpenTelemetry-style tracing so that each step becomes observable.
7. Provider Switching (Core Feature)
Example user code:
Switch provider:
No other code change.
8. CLI (Terraform Style)
Add a CLI similar to infrastructure tools.
Example config:
Run:
9. Benchmark Engine (Very Powerful)
A killer feature would be automatic benchmarking across frameworks.
Example command:
Output:
This would quickly attract engineers.
10. Plugin Ecosystem
Allow community extensions:
This is how tools like Terraform grew rapidly.
Final Architecture
Why This Could Become Important
The agent ecosystem is heading toward framework fragmentation.
A portable abstraction layer becomes extremely valuable.
7
one critical technical problem that almost every “vendor-agnostic agent framework” fails to solve.
Solving that properly would make AgentLegatus far more credible than most new frameworks.
The hardest problem in building a vendor-agnostic agent framework like AgentLegatus is something most people underestimate:
Agent state + execution semantics are incompatible across frameworks.
This is the main reason why a simple adapter layer usually fails.
Let’s break it down.
1. Different Agent Execution Models
Each framework has a fundamentally different runtime model.
Graph-based execution
Example: LangGraph
Characteristics:
deterministic workflow
state passed along edges
execution controlled by graph
Example:
Hierarchical agents
Example: Google Agent Development Kit
Characteristics:
agents spawn sub-agents
dynamic task delegation
non-deterministic flow
Conversation-driven agents
Example: Microsoft Agent Framework
Characteristics:
thread-based conversation
message passing
collaborative reasoning
Workflow runtimes
Example: AWS Strands Agents
Characteristics:
pipeline oriented
service orchestration
tool routing
2. Why Simple Abstraction Breaks
Suppose a user defines a research → analysis → report workflow. Mapping it to each framework becomes tricky:
| Framework | Mapping |
| --- | --- |
| LangGraph | graph nodes |
| ADK | supervisor → worker agents |
| AgentFramework | conversational agents |
| Strands | workflow tasks |
These systems execute differently.
Any given primitive may be supported natively by some frameworks and not by others.
3. The Core Issue: State Management
Agents operate on state.
But every framework stores state differently.
Examples: LangGraph keeps state in the graph itself; Agent Framework keeps it in conversation threads; ADK keeps it in the agent hierarchy; Strands keeps it in the workflow runtime.
4. The Solution: Portable Execution Graph (PEG)
Instead of adapting agents, you should adapt execution graphs.
Define a neutral workflow representation.
Example:
This graph becomes the source of truth.
Adapters translate this graph to each framework.
5. Graph Translation Layer
Example:
To LangGraph
To ADK
To AgentFramework
To Strands
This layer is the core innovation of your framework.
6. Second Hard Problem: Tool Invocation
Each framework implements tools differently.
LangGraph, AgentFramework, ADK, and Strands each register tools through their own decorators or schemas, so the same tool definition cannot be reused directly.
Solution: tool normalization layer.
Example:
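One way such a normalized tool could look; the `ToolSpec` shape and the function-calling-style output are illustrative assumptions, not a vendor format:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolSpec:
    """Framework-neutral tool description."""
    name: str
    description: str
    fn: Callable[..., Any]
    parameters: dict  # JSON-schema-style parameter description


def to_function_schema(tool: ToolSpec) -> dict:
    """Convert to an OpenAI-function-calling-style dict (shape illustrative)."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.parameters,
        },
    }


search = ToolSpec(
    name="web_search",
    description="Search the web",
    fn=lambda q: f"results for {q}",
    parameters={"type": "object", "properties": {"q": {"type": "string"}}},
)
```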
Adapters convert this to framework-specific tools.
7. Third Hard Problem: Streaming + Async Execution
Modern agents require streaming output and asynchronous execution.
Some frameworks support streaming; others don't.
If you don't standardize this, users will hit weird bugs when switching providers.
Solution: standardize on a common event model that every adapter translates into.
Example events:
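A sketch of a standardized event shape; the event types and field names are my own suggestions:

```python
from dataclasses import dataclass
from enum import Enum


class EventType(str, Enum):
    TOKEN = "token"          # incremental model output
    TOOL_CALL = "tool_call"  # the agent invoked a tool
    STEP_DONE = "step_done"  # a workflow step finished
    ERROR = "error"          # execution failure


@dataclass
class StreamEvent:
    """Common shape that adapters translate framework-specific chunks into."""
    type: EventType
    payload: dict


def token_event(text: str) -> StreamEvent:
    return StreamEvent(EventType.TOKEN, {"text": text})
```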
8. What Makes AgentLegatus Powerful
If you solve these three layers (portable execution graph, tool normalization, and standardized events), your framework becomes a genuine portability layer rather than a thin wrapper.
That is fundamentally different from existing tools.
9. Final Architecture
10. Why This Matters
Right now the industry is fragmenting:
Google → ADK
AWS → Strands
Microsoft → AgentFramework
Open source → LangGraph
Companies will soon run multiple frameworks simultaneously.
A tool that provides agent portability becomes extremely valuable.
If you want, I can also show you a very powerful feature that could make AgentLegatus go viral among GenAI engineers:
“Run the same agent workflow across 5 frameworks and auto-benchmark them.”
That feature alone could attract a huge developer audience.
8
The idea “Run the same agent workflow across 5 frameworks and auto-benchmark them” would be one of the most compelling features of AgentLegatus. It means your framework can execute one identical agent workflow on multiple agent runtimes and measure performance, cost, and behavior.
Think of it as benchmarking agent frameworks the way ML engineers benchmark models.
Concept
A developer defines a single portable workflow in AgentLegatus.
Example:
Then AgentLegatus runs the same workflow on multiple runtimes:
LangGraph
Google Agent Development Kit
Microsoft Agent Framework
AWS Strands Agents
Example CLI
User command:
AgentLegatus internally runs:
Benchmark Metrics
The framework should capture several metrics.
1. Latency
Total time taken.
Example output:
2. Token usage
3. Cost
4. Tool usage
This reveals how frameworks structure reasoning differently.
Benchmark Runner Architecture
AgentLegatus would include a BenchmarkEngine.
Example:
Example Output Table
Why This Feature Is Powerful
Engineers constantly ask:
Which agent framework is faster?
Which produces better reasoning chains?
Which costs less?
Which handles tools better?
Right now there is no standard benchmarking tool for agent frameworks.
AgentLegatus could become the standard benchmarking platform.
Additional Advanced Benchmark Ideas
You could also benchmark:
reasoning depth: the number of agent steps taken
parallel execution: whether the framework can run nodes concurrently
reliability: run the same workflow 100 times and measure the failure rate
What This Enables
With enough data, AgentLegatus could produce public agent-framework leaderboards, similar to ML model leaderboards.
That would position the project as a neutral infrastructure layer for the agent ecosystem.
9
very clean architecture for implementing this benchmark system in under ~500 lines of code, which would be an excellent starting point for the first version of AgentLegatus.
Below is a clean minimal implementation (~350–500 lines architecture) for the AgentLegatus benchmark system. The goal is to make the first usable version quickly.
Core capability: run one portable workflow across several frameworks and compare the resulting metrics.
Frameworks you can benchmark:
LangGraph
Google Agent Development Kit
Microsoft Agent Framework
AWS Strands Agents
1. Project Structure
2. Workflow Definition
core/workflow.py
Defines a portable workflow graph.
Example workflow:
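A minimal sketch of `core/workflow.py`, kept consistent with the provider listings that iterate over `workflow.nodes` (the field names are assumptions):

```python
from dataclasses import dataclass, field


@dataclass
class Workflow:
    """Portable workflow graph; provider adapters iterate over `nodes`."""
    name: str
    nodes: list[str] = field(default_factory=list)
    edges: list[tuple[str, str]] = field(default_factory=list)


research_flow = Workflow(
    name="research_pipeline",
    nodes=["research", "analysis", "report"],
    edges=[("research", "analysis"), ("analysis", "report")],
)
```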
3. Base Provider Interface
providers/base_provider.py
Every framework adapter must implement this.
4. Example Provider (LangGraph)
providers/langgraph_provider.py
This mock version allows benchmarking before real integrations.
5. Example Provider (ADK)
providers/adk_provider.py
6. Metrics Collector
utils/metrics.py
7. Benchmark Engine
core/benchmark.py
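A sketch of what `core/benchmark.py` could contain, assuming providers expose the `async run_workflow(workflow, input_data)` method from the base interface; the class name and result shape are my own:

```python
import asyncio
import time


class BenchmarkEngine:
    """Runs the same workflow on every registered provider and times it."""

    def __init__(self, providers: dict):
        # name -> object exposing `async run_workflow(workflow, input_data)`
        self.providers = providers

    async def run(self, workflow, input_data: str) -> dict:
        results = {}
        for name, provider in self.providers.items():
            start = time.perf_counter()
            output = await provider.run_workflow(workflow, input_data)
            results[name] = {
                "latency_s": round(time.perf_counter() - start, 4),
                "output": output,
            }
        return results
```

In practice the providers would be the mock LangGraph/ADK adapters from the sections above, later replaced by real SDK integrations.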
8. Table Renderer
core/executor.py
9. Example Benchmark
example/benchmark_example.py
Example Output
Once real SDK integrations are added, this becomes a true framework benchmark tool.
Why This Design Works
This architecture isolates workflow definition, provider adapters, and metrics collection from one another; that separation makes it easy to add new providers.
Example future adapters:
Next Powerful Feature
The next step (AgentLegatus v0.2) would be adding:
That would turn it from a simple benchmark tool into a full agent orchestration runtime.
10
one design trick that would make AgentLegatus dramatically simpler than LangGraph while still supporting complex workflows. It’s a very elegant pattern used in distributed systems.
The design trick that can make AgentLegatus much simpler than graph-heavy systems (like LangGraph) is using a State + Step Pipeline model instead of a full graph runtime.
This pattern is common in distributed systems and data pipelines because it reduces complexity while keeping flexibility.
Core Idea
Instead of forcing users to define nodes and edges, you treat the workflow as a series of steps operating on shared state.
Conceptually:
Each step reads and modifies the same state object.
1. Define the State Object
The state becomes the single source of truth.
Example:
Usage:
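A minimal sketch of such a state object (the `State` dataclass and its `set` helper are my own names):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class State:
    """Single shared state object that every step reads and writes."""
    query: str
    data: dict[str, Any] = field(default_factory=dict)

    def set(self, key: str, value: Any) -> "State":
        self.data[key] = value
        return self


state = State(query="quantum computing")
state.set("research", "raw findings").set("analysis", "summary")
```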
This replaces complicated graph message passing.
2. Define a Step Interface
Each step is just a function.
3. Example Step: Research
4. Example Step: Analysis
5. Pipeline Executor
Instead of a graph engine, you run steps sequentially.
6. Example Workflow
Output
Why This Is Powerful
Compared with graph frameworks:
| Feature | Graph system | Step pipeline |
| --- | --- | --- |
| Complexity | high | low |
| Lines of code | large | small |
| Debugging | hard | easy |
| Learning curve | steep | simple |
Most real agent workflows are actually pipelines, not complex graphs.
7. Adding Conditional Steps
You can still support branching.
Example:
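One way to express branching in the step model, as a wrapper step (a sketch; the names are my own):

```python
class ConditionalStep:
    """Wraps another step and runs it only when the predicate holds."""

    def __init__(self, predicate, step):
        self.predicate = predicate  # callable: state -> bool
        self.step = step            # object with `async run(state)`

    async def run(self, state: dict) -> dict:
        if self.predicate(state):
            return await self.step.run(state)
        return state
```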
8. Parallel Steps
You can easily add concurrency.
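A sketch of a parallel wrapper using `asyncio.gather`; because the steps share one state object, each should write to its own key:

```python
import asyncio


class ParallelSteps:
    """Runs independent steps concurrently on the shared state.

    Each step should write to its own key to avoid conflicting updates.
    """

    def __init__(self, steps):
        self.steps = steps

    async def run(self, state: dict) -> dict:
        await asyncio.gather(*(step.run(state) for step in self.steps))
        return state
```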
9. Provider Integration
Each step can call different agent frameworks:
Google Agent Development Kit
AWS Strands Agents
Microsoft Agent Framework
Example:
Switch provider inside the step.
Final Architecture of AgentLegatus
This architecture stays simple while remaining flexible.
Why This Could Work Well for You
Given your work with:
agent orchestration
LangGraph
enterprise GenAI systems
a pipeline + state architecture would let AgentLegatus stay small and debuggable while still enabling vendor switching.
11
very interesting naming idea for the components of AgentLegatus using Roman military ranks (Legatus → Centurion → Cohort → Legion). It would make the framework conceptually elegant and memorable.
Using Roman military hierarchy for your framework is actually a very good idea because it gives the architecture a clear mental model. Roman legions were highly structured command systems—exactly like agent orchestration systems.
You can map AgentLegatus components → Roman command structure.
AgentLegatus Roman Architecture
Each layer has a specific responsibility.
1. Legatus (Top Orchestrator)
Legatus was the commander of an entire Roman legion.
In AgentLegatus, this becomes the workflow orchestrator.
Responsibilities:
start workflows
manage providers
collect metrics
coordinate execution
Example API:
Conceptually:
2. Centurion (Workflow Controller)
A Centurion commanded ~100 soldiers.
In AgentLegatus, the Centurion is the workflow controller that runs a pipeline of steps.
Responsibilities:
execute steps
manage state
route outputs
Example:
Conceptually:
3. Cohort (Agent Group)
A Cohort was a unit of soldiers within a legion.
In AgentLegatus, a Cohort is a group of agents that execute together.
Example:
Use cases:
parallel agents
multi-agent collaboration
ensemble reasoning
Conceptually:
4. Agent (Worker)
This is the actual LLM worker.
An agent can use:
ADK
Strands
AgentFramework
LangGraph
Example:
Conceptually:
5. Legion (Large System)
A Legion was the full Roman army unit.
In AgentLegatus, a Legion is a complete multi-workflow system.
Example:
Use cases:
enterprise pipelines
multiple agents systems
automation platforms
Final Architecture
Example End-to-End Code
Switch provider:
No other code change.
Why This Naming System Works
Advantages:
Memorable architecture
clear hierarchy
unique branding
fits orchestration systems
Most frameworks use boring names:
controller
manager
executor
Your system becomes distinctive.
Clean Developer Mental Model
Developers quickly understand the chain of command (Legatus → Centurion → Cohort → Agent), and that clarity helps adoption.
12
a very elegant 250-line minimal implementation of AgentLegatus using this Roman architecture, which could be the perfect GitHub v0.1 release.
Below is a clean minimal v0.1 implementation (~230–260 lines conceptually) of AgentLegatus using the Roman hierarchy architecture.
Goal of v0.1:
vendor-agnostic
simple pipeline execution
switch provider with one line
clean mental model
Architecture:
Project Structure
1. Provider Abstraction
providers.py
This is where you connect different agent frameworks later.
Mock providers for v0.1:
Provider registry:
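The `providers.py` listing is missing from the transcript; a sketch consistent with the Agent and Legatus classes (which call `provider.run(role, task)` and look classes up in a `PROVIDERS` dict) could be:

```python
class MockADKProvider:
    """Stand-in for a real Google ADK integration."""

    async def run(self, role: str, task: str) -> str:
        return f"[adk:{role}] {task}"


class MockStrandsProvider:
    """Stand-in for a real AWS Strands integration."""

    async def run(self, role: str, task: str) -> str:
        return f"[strands:{role}] {task}"


# Registry the Legatus orchestrator uses to resolve a provider by name.
PROVIDERS = {
    "adk": MockADKProvider,
    "strands": MockStrandsProvider,
}
```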
2. Agent (Worker)
agent.py
Each agent modifies shared state.
3. Cohort (Agent Group)
cohort.py
Runs agents in parallel.
This enables multi-agent collaboration.
4. Centurion (Workflow)
centurion.py
Executes pipeline steps.
Steps can be:
agents
cohorts
custom logic
5. Legatus (Orchestrator)
legatus.py
The main controller.
6. Example Usage
example.py
Example Output
Switch `provider="adk"` to `provider="strands"` in the Legatus constructor; no other change required.
Why This v0.1 Is Good
This minimal system already supports:
✔ provider abstraction
✔ pipeline workflows
✔ multi-agent execution
✔ shared state
✔ Roman architecture
And it's very small (~250 lines).
What v0.2 Could Add
Next features that would make AgentLegatus powerful:
1. Tool abstraction
2. Memory layer: vector DB integration (fits your existing work)
3. Benchmark engine: run workflows across multiple providers
4. Graph execution: optional DAG instead of a pipeline
5. Observability: OpenTelemetry tracing
Long-Term Vision
If expanded properly, AgentLegatus becomes a portable agent operating layer spanning vendors and frameworks.
13
one architectural decision that could make AgentLegatus significantly more powerful than LangGraph while still keeping the codebase under ~2000 lines.
The architectural decision that can make AgentLegatus significantly more powerful than many agent frameworks while still keeping the codebase small is introducing a Unified Event Bus (UEB).
This pattern is used in distributed systems and lets you decouple execution, tools, monitoring, and providers.
Instead of agents calling everything directly, everything emits events.
Core Idea: Event-Driven Agent Runtime
Instead of agents calling the LLM, tools, and memory directly, every component communicates by publishing and subscribing to events.
This keeps the core runtime very small.
Why This Is Powerful
Most agent frameworks tightly couple execution.
Example:
| Component | Usually tied directly to |
| --- | --- |
| Agent | LLM call |
| Tool | agent |
| Memory | agent |
| Tracing | runtime |
With an event bus, these become plug-ins.
Example Event Flow
Suppose a research agent runs.
Instead of making direct calls, it emits events (for example llm.request, tool.call, step.done), and the relevant subscribers react.
Minimal Event Bus Implementation
You can implement a very clean event system in ~40 lines.
Example Events
A standard event vocabulary, for example agent.started, llm.request, llm.response, tool.call, and step.done, powers the entire runtime.
Example Usage
Agent step:
LLM executor listens:
Tool Execution via Events
When an agent needs a tool:
Tool runner listens:
Observability Becomes Free
Because everything is events, you can plug in monitoring easily.
Example subscriber:
Now every action is logged automatically.
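A self-contained sketch of this flow, with a compact bus and hypothetical event names (`llm.request`, `llm.response`); the tracer and executor are mocks standing in for real subscribers:

```python
import asyncio


class EventBus:
    """Tiny async pub/sub bus."""

    def __init__(self):
        self.listeners = {}

    def subscribe(self, event, handler):
        self.listeners.setdefault(event, []).append(handler)

    async def publish(self, event, data):
        for handler in self.listeners.get(event, []):
            await handler(data)


bus = EventBus()
trace, responses = [], []


async def tracer(data):
    # Observability subscriber: sees every llm.request without touching agents.
    trace.append(data)


async def llm_executor(data):
    # Mock executor: answers llm.request events with llm.response events.
    await bus.publish("llm.response", {"answer": f"echo: {data['prompt']}"})


async def sink(data):
    responses.append(data)


bus.subscribe("llm.request", tracer)
bus.subscribe("llm.request", llm_executor)
bus.subscribe("llm.response", sink)

asyncio.run(bus.publish("llm.request", {"prompt": "hello"}))
```

Swapping in a different LLM backend is just subscribing a different handler to `llm.request`; nothing else changes.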
Provider Abstraction Using Events
Adapters subscribe to LLM events.
Example:
ADK adapter
Strands adapter
Switching provider means subscribing a different adapter to the LLM events; the rest of the system remains identical.
Benchmarking Becomes Easy
Because every LLM call is an event, metrics are trivial.
Example:
Now you automatically measure latency, token usage, and cost per call.
Final AgentLegatus Architecture
The core framework remains very small.
Why This Design Is Superior
Compared to graph-heavy frameworks:
| Feature | Graph frameworks | Event-driven |
| --- | --- | --- |
| Complexity | high | low |
| Extensibility | medium | very high |
| Observability | extra work | built-in |
| Provider switching | hard | easy |
This is how large-scale systems like streaming platforms and microservices are designed.
Strategic Advantage
With this architecture, AgentLegatus becomes more than an agent framework.
Appendix: Code Listings
The fenced blocks below are the code referenced by the sections above.

providers/strands_provider.py:

```python
from base import BaseAgentProvider


class StrandsProvider(BaseAgentProvider):
    async def run(self, task: str):
        # pseudo code for AWS Strands
        print("Running via Strands")
        response = f"Strands processed: {task}"
        return response
```

providers/adk_provider.py:

```python
from base import BaseAgentProvider


class ADKProvider(BaseAgentProvider):
    async def run(self, task: str):
        # pseudo ADK logic
        print("Running via Google ADK")
        response = f"ADK processed: {task}"
        return response
```

providers/ms_provider.py:

```python
from base import BaseAgentProvider


class MSProvider(BaseAgentProvider):
    async def run(self, task: str):
        print("Running via Microsoft Agent Framework")
        response = f"MS AgentFramework processed: {task}"
        return response
```

example.py:

```python
import asyncio
from agent import Agent


async def main():
    agent = Agent(provider="strands", model="claude-3")
    result = await agent.run("Analyze this dataset")
    print(result)


asyncio.run(main())
```

Expected output:

```
Running via Strands
Strands processed: Analyze this dataset
```

Responsibilities of the v2 graph execution engine:
- node execution
- state passing
- task routing
- error recovery
- parallel execution

Minimal GraphExecutor sketch:

```python
class GraphExecutor:
    async def run(self, workflow, input_data):
        state = {"input": input_data}
        for node in workflow.nodes:
            agent = workflow.get_agent(node)
            result = await agent.run(state)
            state[node] = result
        return state
```

providers/base_provider.py (benchmark system):

```python
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    def __init__(self, model):
        self.model = model

    @abstractmethod
    async def run_workflow(self, workflow, input_data):
        pass
```

providers/langgraph_provider.py (mock):

```python
import asyncio
from .base_provider import BaseProvider


class LangGraphProvider(BaseProvider):
    async def run_workflow(self, workflow, input_data):
        state = {"input": input_data}
        for node in workflow.nodes:
            await asyncio.sleep(0.2)  # simulate execution
            state[node] = f"{node} result"
        return state
```

providers/adk_provider.py (benchmark mock):

```python
import asyncio
from .base_provider import BaseProvider


class ADKProvider(BaseProvider):
    async def run_workflow(self, workflow, input_data):
        state = {"input": input_data}
        for node in workflow.nodes:
            await asyncio.sleep(0.25)
            state[node] = f"{node} processed by ADK"
        return state
```

Pipeline steps and executor (State + Step model):

```python
class Step:
    async def run(self, state):
        raise NotImplementedError


class ResearchStep(Step):
    async def run(self, state):
        query = state["query"]
        state["research"] = f"Research results for {query}"
        return state


class AnalysisStep(Step):
    async def run(self, state):
        research = state["research"]
        state["analysis"] = f"Analysis of {research}"
        return state


class PipelineExecutor:
    def __init__(self, steps):
        self.steps = steps

    async def run(self, state):
        for step in self.steps:
            state = await step.run(state)
        return state
```

agent.py (Roman v0.1):

```python
class Agent:
    def __init__(self, role):
        self.role = role

    async def run(self, state, provider):
        task = state["task"]
        result = await provider.run(self.role, task)
        state[self.role] = result
        return state
```

cohort.py:

```python
import asyncio


class Cohort:
    def __init__(self, agents):
        self.agents = agents

    async def run(self, state, provider):
        tasks = [agent.run(state, provider) for agent in self.agents]
        await asyncio.gather(*tasks)
        return state
```

centurion.py:

```python
class Centurion:
    def __init__(self, steps):
        self.steps = steps

    async def run(self, state, provider):
        for step in self.steps:
            state = await step.run(state, provider)
        return state
```

legatus.py:

```python
from providers import PROVIDERS


class Legatus:
    def __init__(self, provider):
        ProviderClass = PROVIDERS.get(provider)
        if not ProviderClass:
            raise ValueError("Provider not supported")
        self.provider = ProviderClass()

    async def run(self, centurion, task):
        state = {"task": task}
        return await centurion.run(state, self.provider)
```

example.py (Roman v0.1):

```python
import asyncio
from agent import Agent
from cohort import Cohort
from centurion import Centurion
from legatus import Legatus

research = Agent("research")
analysis = Agent("analysis")
writer = Agent("writer")

research_team = Cohort([research, analysis])
workflow = Centurion([research_team, writer])


async def main():
    legatus = Legatus(provider="adk")
    result = await legatus.run(workflow, "Explain quantum computing")
    print(result)


asyncio.run(main())
```

Minimal EventBus:

```python
class EventBus:
    def __init__(self):
        self.listeners = {}

    def subscribe(self, event, handler):
        if event not in self.listeners:
            self.listeners[event] = []
        self.listeners[event].append(handler)

    async def publish(self, event, data):
        if event not in self.listeners:
            return
        for handler in self.listeners[event]:
            await handler(data)
```