Technically Business Central

AL CLI & MCP: Sub-Millisecond Code Intelligence for Humans and Agents

21 commands backed by an in-memory symbol index. Event tracing, call graphs, impact analysis, and an agentic debugger that Microsoft's MCP doesn't cover.

Brad Fullwood

Microsoft’s AL Language extension (version 18.0, March 2026) includes an MCP server, launched via launchmcpserver, in the AL Development Tools. It covers the build-publish-debug loop through 7 tools.

We built something different. The al CLI and al-mcp server sit on top of our own Rust language server (al-lsp), which holds the entire symbol index in memory. Every .app package in your dependency tree is memory-mapped at startup and parsed into a typed, concurrent index. Symbol lookups across 30,000+ objects complete in under 500 microseconds. The Insight Engine on top builds call graphs, event graphs, object graphs, and table relation graphs that let you trace execution across extension boundaries.

You use the CLI. Your agent uses the MCP server. They both hit the same daemon and get the same response. For the full architecture details on how al-lsp works, see Inside the Zed AL Extension: Architecture and Performance.

Feature comparison

| Capability | Microsoft | Ours |
| --- | --- | --- |
| Build project | `al_build` | `al build` |
| Publish / RAD publish | `al_publish` | `al publish` |
| Download symbols | `al_downloadsymbols` | `al download-symbols` |
| Symbol search | `al_symbolsearch` | `al search` |
| Set breakpoints | `al_setbreakpoint` | `al debug breakpoint` |
| Debug session | `al_debug` | `al debug` (7 sub-commands) |
| Snapshot debugging | `al_snapshotdebugging` | `al snapshot` |
| Object lookup | — | `al object Table Customer` |
| Targeted source extraction | — | `al source "Sales-Post" --proc Post` |
| Event publishers | — | `al events OnAfterPost*` |
| Event subscribers | — | `al subscribers OnAfterPostSalesDoc` |
| Natural language event suggestion | — | `al suggest-event "validate credit"` |
| Call graph traversal | — | `al callgraph "Sales-Post" --depth 3` |
| Cross-extension event tracing | — | `al trace src/Post.al 120` |
| Event interception points | — | `al intercept "Sales-Post"` |
| Table read/write analysis | — | `al tables "Sales-Post" --proc Post` |
| Dead code detection | — | `al dead-code` |
| Dependency impact analysis | — | `al impact "Customer.Credit Limit"` |
| Permission set analysis | — | `al permissions --analyze` |
| Agentic debugger | — | `al debug eval`, `al debug history` |
| Lint diagnostics | — | `al lint` |
| Code metrics | — | `al metrics` |

Every al command also exists as an MCP tool (al/search, al/events, etc.) with identical JSON output. Add --json to any CLI command and you get the same response the MCP server returns.

What this looks like in practice

Finding events

$ al search "OnAfterPostSalesDocument"
Events:
  Sales-Post (Codeunit 80) — OnAfterPostSalesDocument [IntegrationEvent]

Subscribers:
  My Extension (Codeunit 50300) — HandleAfterPostSalesDocument
  ISV Package A <no source>

An agent calling al/search gets the same data as structured JSON. About 150 tokens instead of reading source files at 5,000-8,000 tokens each.

Impact analysis before changing a field

$ al impact "Sales Header"."Sell-to Customer No."
Direct consumers:
  Sales Line (Table 37) — via CALCFIELD
  Customer Sales (Query 50101) — JOIN condition
  Post Sales Document (Codeunit 50200) — parameter passed to ValidateCustomer

Indirect (via events):
  OnAfterValidateSellToCustomerNo subscribed by 3 codeunits

Run this before modifying a widely used field. You get a concrete list instead of grepping through source.

Dead code detection

$ al dead-code --json | jq '.[] | select(.confidence == "high")'
{
  "object": "MyUtility (Codeunit 50100)",
  "symbol": "OldMigrationHelper",
  "reason": "no references found in workspace or known dependencies",
  "confidence": "high"
}

Conservative by design — it knows references can come from packages without source. But it reliably surfaces internal dead code.
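That confidence split can be sketched as a small decision rule. This is a hypothetical model for illustration (the `Symbol` type, `dead_code_confidence` function, and its fields are all made up; the real analysis lives inside al-lsp): a finding is only high-confidence when the symbol has zero known references and no dependency package is missing source.

```rust
use std::collections::HashSet;

/// Hypothetical model of a workspace symbol, for illustration only.
#[allow(dead_code)]
struct Symbol {
    name: &'static str,
    reference_count: usize, // references found across all indexed source
}

/// "high" only when nothing references the symbol AND every dependency
/// ships source we can scan; a source-less .app could still call it.
fn dead_code_confidence(sym: &Symbol, sourceless_deps: &HashSet<&str>) -> Option<&'static str> {
    if sym.reference_count > 0 {
        return None; // referenced somewhere: not dead
    }
    if sourceless_deps.is_empty() {
        Some("high")
    } else {
        Some("low") // unreferenced here, but an opaque package might use it
    }
}
```

The asymmetry is the point: internal-only helpers get flagged loudly, while anything a source-less ISV package could plausibly reach is demoted rather than reported as certain.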

Build with structured errors

$ al build --json
{
  "errors": [
    {
      "code": "AL0432",
      "message": "Type 'Integer' does not match type 'Text[50]'",
      "file": "src/MyTable.Table.al",
      "line": 47,
      "column": 9
    }
  ],
  "warnings": [],
  "duration_ms": 812
}

Event suggestion from natural language

$ al suggest-event "validate customer credit limit before posting a sales order"
[
  {
    "event": "OnBeforePostSalesDoc",
    "publisher": "Codeunit \"Sales-Post\"",
    "score": 0.91,
    "rationale": "Fires before posting begins; raising an error cancels posting",
    "subscriber_template": "[EventSubscriber(ObjectType::Codeunit, Codeunit::\"Sales-Post\", 'OnBeforePostSalesDoc', '', false, false)]\nlocal procedure OnBeforePostSalesDoc(var SalesHeader: Record \"Sales Header\")"
  }
]

Copy-paste the subscriber template and it compiles, with the exact parameter names from the event definition.

The agentic debugger

Microsoft’s al_debug starts a session. Ours runs the whole thing programmatically:

$ al debug start
$ al debug breakpoint src/SalesPost.al 120 --condition "Rec.Amount > 1000"
$ al debug state
{
  "file": "src/SalesPost.al",
  "line": 120,
  "procedure": "PostDocument",
  "variables": {
    "SalesHeader.Status": "Open",
    "SalesHeader.Amount": 1500.00
  }
}
$ al debug eval "Customer.\"Credit Limit (LCY)\""
{ "result": 50000.00, "type": "Decimal" }
$ al debug history --var "SalesHeader.Status"
[
  { "hit": 1, "value": "Open", "line": 115 },
  { "hit": 2, "value": "Pending Approval", "line": 120 },
  { "hit": 3, "value": "Released", "line": 134 }
]

An agent can use al trace to find the interesting lines, set conditional breakpoints there, then evaluate expressions and watch variables change across passes. No human interaction needed.
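A harness driving that loop mostly amounts to spawning `al` with the right argv and parsing the JSON replies. Here is a sketch of just the argv construction; `breakpoint_argv` is a hypothetical helper, while the `al debug breakpoint` sub-command and `--condition` flag are the real surface shown above:

```rust
/// Build argv for `al debug breakpoint FILE LINE [--condition EXPR]`,
/// ready to hand to std::process::Command::new("al").args(...).
fn breakpoint_argv(file: &str, line: u32, condition: Option<&str>) -> Vec<String> {
    let mut argv = vec![
        "debug".to_string(),
        "breakpoint".to_string(),
        file.to_string(),
        line.to_string(),
    ];
    if let Some(expr) = condition {
        argv.push("--condition".to_string());
        argv.push(expr.to_string());
    }
    argv
}
```

From there the harness invokes `al debug state` and `al debug eval` the same way and reads the JSON replies, so the whole breakpoint-inspect-evaluate cycle runs without a human at the keyboard.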

Why the speed difference exists

Microsoft’s MCP server calls the AL compiler toolchain per request. Ours queries a warm, in-memory index.

The al-lsp daemon loads every .app package via memmap2 (zero-copy memory mapping) at startup and parses the symbol JSON into a DashMap-backed index with sharded, lock-free concurrent reads. Object composition (a base table plus all its extensions) is precomputed at index time. The Insight Engine’s four graphs are built incrementally and cached.

A cold first query takes 1-2 seconds while the index builds. Everything after that is sub-millisecond. The daemon stays warm and auto-shuts down after 30 minutes idle.
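In std-only terms, the warm index is roughly this shape. This is a simplified stand-in, not the real implementation: al-lsp uses memmap2 to map package bytes and DashMap for sharded lock-free reads, whereas the sketch below uses a single `RwLock<HashMap>` and invented `ObjectInfo`/`build_index`/`lookup` names.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Simplified symbol record; al-lsp stores fully typed AL objects.
#[derive(Clone, Debug, PartialEq)]
struct ObjectInfo {
    kind: &'static str, // "Table", "Codeunit", ...
    id: u32,
}

/// Stand-in for the shared index: built once at daemon startup, then
/// served to every CLI and MCP request without re-reading packages.
type SymbolIndex = Arc<RwLock<HashMap<String, ObjectInfo>>>;

fn build_index() -> SymbolIndex {
    // In al-lsp this is parsed from memory-mapped .app symbol JSON.
    let mut map = HashMap::new();
    map.insert("Customer".into(), ObjectInfo { kind: "Table", id: 18 });
    map.insert("Sales-Post".into(), ObjectInfo { kind: "Codeunit", id: 80 });
    Arc::new(RwLock::new(map))
}

/// A query is a read lock plus a hash lookup: no compiler invocation.
fn lookup(index: &SymbolIndex, name: &str) -> Option<ObjectInfo> {
    index.read().unwrap().get(name).cloned()
}
```

The one-time cost lives in `build_index` (the 1-2 second cold start); every query after that is a lock acquisition and a hash probe, which is why lookups stay sub-millisecond regardless of workspace size.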

For the full details on how al-lsp works internally — the .NET bridge, daemon transport, crate layout — see Inside the Zed AL Extension: Architecture and Performance.

Setting it up

The daemon starts automatically on first use:

# CLI — just works
$ al search "Customer"

# MCP — configure in your AI tool
{
  "mcpServers": {
    "al": {
      "command": "al-mcp",
      "args": ["--project", "/path/to/your/al-workspace"]
    }
  }
}

The CLI is useful on its own even if you never use the MCP server. I run al dead-code before releases and al impact before touching anything widely used. The MCP integration is there when you want agents to do that work, but the commands stand on their own.
