4 Ways AI Agents Connect to InsumerAPI
Connect to on-chain verification via MCP server, LangChain toolkit, OpenAI GPT, or direct REST. Code examples for each. No balances exposed by default. Optional Merkle proofs.
An AI agent can create its own API key, verify token holdings across 32 blockchains, generate discount codes for customers, buy more credits with USDC, and onboard new merchants. No human touches anything at any step. Here is how it works across four integration surfaces.
What an agent does with on-chain verification
A shopping agent is helping a user find deals. The user holds UNI tokens in their wallet. The agent queries the InsumerAPI merchant directory and finds three stores that offer discounts to UNI holders. It checks the user's wallet against each merchant, finds 15% off at one of them, and generates a signed discount code. The user gets a code, the merchant gets a customer, and the agent never saw the user's actual token balance. Just a yes or no.
That is one use case. Here are more:
- Compliance agents verify that a wallet meets minimum balance thresholds for accredited investor checks, lending eligibility, or DAO governance. The API returns met or not met. No balance amounts leak.
- Business automation agents onboard merchants programmatically. Create the merchant, configure token discount tiers, set up NFT collections, verify the business's website via DNS or meta tag, and publish to the directory. A business goes from zero to live without a human logging into a dashboard.
- Agent-to-agent trust. One agent runs an attestation and gets an ECDSA-signed result. A second agent verifies that signature offline using the public key. Cryptographic proof of on-chain state, passed between agents, with no second API call needed.
All of these work because the API was designed for agents, not humans. Boolean results, not balance dumps. Signed proofs, not trust-me responses. Programmatic everything, not web forms.
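The agent-to-agent trust pattern above can be sketched in a few lines. This is a toy model, not the InsumerAPI wire format: the payload fields are invented, and the key pair is generated locally to stand in for the platform's signing key (in practice the platform signs and publishes only the public key). It uses the widely available `cryptography` package.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Agent A's side: the API signs the attestation result. Simulated here with
# a locally generated key; the attestation fields are illustrative.
signing_key = ec.generate_private_key(ec.SECP256K1())
attestation = {"wallet": "0xabc...", "chain": "ethereum", "threshold_met": True}
payload = json.dumps(attestation, sort_keys=True, separators=(",", ":")).encode()
signature = signing_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Agent B's side: verify offline with only the public key. No second API call.
public_key = signing_key.public_key()

def is_authentic(pub, sig, data):
    try:
        pub.verify(sig, data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(is_authentic(public_key, signature, payload))               # True
print(is_authentic(public_key, signature, payload + b"tamper"))   # False
```

The point is the asymmetry: agent B needs nothing but the public key and the signed bytes, so verification costs no credits and requires no network.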
The full autonomous cycle
An agent does not need a human to visit a website, fill out a form, or copy-paste a key. The entire lifecycle is programmatic:
1. Create a key. The agent calls the key creation endpoint with an email and app name. It gets back an API key with 10 free verification credits. One HTTP request. The email is the only human-linked input, and the agent can use any valid address.
2. Discover. The agent queries the merchant directory and token registry. These are free read endpoints with no credit cost.
3. Verify. The agent runs on-chain attestations or generates signed discount codes. Each verification costs one credit.
4. Verify the domain. If the agent is onboarding a merchant, it requests a verification token, places it via DNS TXT record, HTML meta tag, or file, then triggers the check. The merchant's website is now verified.
5. Refuel. When credits run low, the agent sends USDC on any supported chain to the platform wallet, then calls the buy-credits endpoint with the transaction hash. 25 credits per USDC. No approval, no invoice, no human.
6. Repeat. The agent keeps verifying, keeps buying credits, keeps operating. Indefinitely.
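The economics of the refuel step are simple enough to model. This toy ledger is not the SDK: the 10-credit grant and the 25-credits-per-USDC rate come from the steps above, everything else is illustrative bookkeeping.

```python
import math

CREDITS_PER_USDC = 25   # rate from step 5
FREE_CREDITS = 10       # granted on key creation in step 1

class AgentCreditLedger:
    """Toy model of the self-funding loop: spend one credit per
    verification, top up with whole-USDC payments when running low."""

    def __init__(self):
        self.credits = FREE_CREDITS

    def verify(self):
        if self.credits == 0:
            raise RuntimeError("out of credits; top up first")
        self.credits -= 1

    def usdc_needed(self, target_credits):
        """Smallest whole-USDC payment that restores at least target_credits."""
        shortfall = max(0, target_credits - self.credits)
        return math.ceil(shortfall / CREDITS_PER_USDC)

    def top_up(self, usdc):
        self.credits += usdc * CREDITS_PER_USDC

ledger = AgentCreditLedger()
for _ in range(10):
    ledger.verify()              # burn the free credits
print(ledger.usdc_needed(100))   # 4 USDC buys the next 100 credits
```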
This is the loop that makes autonomous agents viable. Discovery, verification, and payment all happen through the same API. The agent funds itself with on-chain USDC and never runs out of credits unless it chooses to stop.
Four ways to connect
The AI agent ecosystem is fragmented. Claude Desktop uses MCP. LangChain agents use Python tools. OpenAI GPTs use Actions. Custom agents call REST endpoints directly. InsumerAPI covers all four. Same 25 endpoints, same ECDSA-signed results, same 32 blockchains. The only difference is the transport layer.
1. MCP Server
Best for: Claude Desktop, Cursor, Windsurf, and any MCP-compatible agent framework.
The Model Context Protocol is an open standard for connecting AI models to external tools. An MCP server exposes typed tools that the model can call directly. No HTTP code in your agent. No JSON parsing. The model sees the tools natively and knows what arguments to pass.
Install:
`npx -y mcp-server-insumer`
Add this to your Claude Desktop config:
```json
{
  "mcpServers": {
    "insumer": {
      "command": "npx",
      "args": ["-y", "mcp-server-insumer"],
      "env": {
        "INSUMER_API_KEY": "insr_live_..."
      }
    }
  }
}
```

25 tools covering every endpoint: verification, discovery, wallet trust profiles, EAS attestations, Farcaster identity, credits, and full merchant onboarding.
[View on npm](https://www.npmjs.com/package/mcp-server-insumer)
2. LangChain SDK
Best for: Python agents built on LangChain, LangGraph, or any framework that uses LangChain tools.
Install:
`pip install langchain-insumer`
25 tools covering the full API: attest, wallet trust, batch trust, compliance templates, verify, list merchants, merchant management, credits, ACP/UCP commerce, and more. Each tool has a description that the LLM reads to decide when to call it. Pass the API key via the `INSUMER_API_KEY` environment variable.
[View on PyPI](https://pypi.org/project/langchain-insumer/)
3. OpenAI GPT
Best for: Custom GPTs in ChatGPT, or any agent that consumes OpenAPI specs.
InsumerAPI publishes a full OpenAPI 3.1 spec at insumermodel.com/openapi.yaml. Paste that URL into the GPT Actions editor, set the API key as a custom `X-API-Key` header, and the GPT can call any endpoint. A pre-built GPT called InsumerAPI Verify is already live in the GPT Store.
4. Direct REST API
Best for: Any language, any framework, any custom agent that can make HTTP requests.
The REST API is the foundation that all three integrations above are built on. Any agent that can make an HTTP request can call it. Every request needs an `X-API-Key` header. The response format is consistent: `{ "ok": true, "data": {...} }` on success, `{ "ok": false, "error": {...} }` on failure.
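A minimal sketch of a raw REST call using only the standard library. The `X-API-Key` header and the `ok`/`data`/`error` envelope are as described above; the `/v1/attest` path and the request body fields are illustrative placeholders, not documented endpoint names.

```python
import json
import urllib.request

API_KEY = "insr_live_..."  # placeholder key

# Build (but do not send) a verification request.
body = json.dumps({
    "wallet": "0xabc...",   # illustrative fields
    "token": "UNI",
    "min_balance": "1",
}).encode()

req = urllib.request.Request(
    "https://insumermodel.com/v1/attest",   # hypothetical endpoint path
    data=body,
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# Because the envelope is consistent, one dispatcher handles every endpoint:
def handle(resp: dict):
    if resp.get("ok"):
        return resp["data"]
    raise RuntimeError(resp["error"])

print(handle({"ok": True, "data": {"threshold_met": True}}))
```

The same `handle` function works unchanged for attestations, discovery, and credit purchases, which is the practical payoff of a uniform response shape.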
For agents that auto-discover APIs, InsumerAPI publishes four machine-readable files at the site root:
- [llms.txt](https://insumermodel.com/llms.txt). Concise API overview: what the API does, how to authenticate, which endpoints exist.
- [openapi.yaml](https://insumermodel.com/openapi.yaml). Full OpenAPI 3.1 spec with schemas, examples, and error codes.
- [ai-plugin.json](https://insumermodel.com/.well-known/ai-plugin.json). ChatGPT plugin discovery manifest.
- [llms-full.txt](https://insumermodel.com/llms-full.txt). Extended reference with curl examples for every endpoint.
An agent that crawls a site and reads `llms.txt` will find the API, understand what it does, and know how to get started. No documentation lookup required.
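The llms.txt convention is plain markdown: an H1 title, a blockquote summary, and link lists. A discovery-oriented agent can extract the essentials with a few lines of parsing. The sample content below is invented for the sketch, not the live file.

```python
import re

# Invented stand-in for a fetched llms.txt document.
sample = """\
# InsumerAPI
> On-chain token verification for AI agents. 25 REST endpoints, 32 chains.

## Docs
- [OpenAPI spec](https://insumermodel.com/openapi.yaml): full schema
- [Full reference](https://insumermodel.com/llms-full.txt): curl examples
"""

title = re.search(r"^# (.+)$", sample, re.M).group(1)
summary = re.search(r"^> (.+)$", sample, re.M).group(1)
links = re.findall(r"\[([^\]]+)\]\((\S+?)\)", sample)

print(title)        # InsumerAPI
print(len(links))   # 2
```

From three regexes the agent has a name, a one-line purpose statement, and a list of machine-readable follow-up URLs, which is enough to decide whether to fetch the OpenAPI spec and start calling endpoints.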
Which one should you use?
- Claude Desktop, Cursor, Windsurf: MCP server. One command, 25 typed tools.
- Python with LangChain: The LangChain SDK. 25 tools covering all API endpoints, native integration.
- ChatGPT custom GPT: OpenAI Actions with the OpenAPI spec.
- Everything else: Direct REST. Any language, any HTTP client.
For the full verification API guide covering all 25 endpoints, integration patterns, and pricing, see the AI Agent Verification API overview. You can also try a live XRPL attestation demo to see signed results in action.
No portal visit required
Every integration starts with an API key. An agent creates its own by calling a single endpoint. Ten free verification credits are included. When those run out, the agent sends USDC on-chain and buys more. The entire cycle is programmatic. No browser, no signup form, no human approval.
Pick the integration that matches your stack and let your agent verify on-chain holdings across 32 blockchains.

