v0.5 released — Write operations shipping

Zapier for AI Agents

Your agent needs Shopify data? Give it a URL. Liquid handles everything — discovery, auth, field mapping, retries, self-healing. No connectors to build.

⚡ Deterministic at runtime 🔄 Self-healing adapters 🤖 MCP + A2A + REST 📖 Open source (AGPL-3.0)
5 Discovery methods
4 LLM providers
3 Access protocols
APIs supported
The problem

Building API integrations for agents is broken

Every new API means writing a connector. Every schema change means a 3am pager. Every agent needs its own integration layer.

❌ Without Liquid

  • Write a connector for each API (Shopify, Jira, Stripe…)
  • Maintain field mappings manually as APIs evolve
  • Debug auth edge cases for every service
  • Rebuild everything when the upstream schema changes
  • Duplicate this work in every agent you ship

✓ With Liquid

  • Point to a URL, get a typed adapter back
  • AI discovers endpoints, auth, and schemas automatically
  • Integrations monitor themselves and auto-repair on break
  • Share adapter registry across users — 1000th user pays 0 credits
  • One integration works across all your agents
Quickstart

Connect any API in 3 lines

# Read data
adapter = await liquid.get_or_create(
    url="https://api.shopify.com",
    target_model={"amount": "float", "email": "str"},
)
orders = await liquid.fetch(adapter)

# Write data back
result = await liquid.execute(
    adapter, action_id="create_order",
    data={"amount": 99.99, "email": "j@example.com"}
)
# ✓ Done. No connector code. No mapping boilerplate.
Under the hood

How it works

AI participates at setup. Runtime is deterministic. Integrations self-heal.

1

Discover

MCP → OpenAPI → GraphQL → REST heuristic → Browser. Cheapest method first.
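The cascade can be pictured as a simple fallback chain. A minimal sketch, assuming each method is a probe that returns a spec or None; the strategy names and stub probes are illustrative, not Liquid's internals:

```python
# Hypothetical sketch of cheapest-first discovery: try each strategy in
# order and stop at the first one that yields an API description.
def discover(url, strategies):
    for name, probe in strategies:
        spec = probe(url)
        if spec is not None:
            return name, spec
    raise RuntimeError(f"no discovery method worked for {url}")

# Ordered cheapest-first, matching the cascade above (all probes are stubs).
strategies = [
    ("mcp",       lambda url: None),                    # no MCP server here
    ("openapi",   lambda url: {"paths": ["/orders"]}),  # spec found
    ("graphql",   lambda url: None),
    ("heuristic", lambda url: None),
    ("browser",   lambda url: None),
]

method, spec = discover("https://api.example.com", strategies)
# The cascade stops at "openapi", never paying for the pricier fallbacks.
```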

2

Map

AI proposes field mappings with confidence scores. Human approves or auto-approves above threshold.
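Threshold-gated approval fits in a few lines. A sketch in which the proposal shape and the 0.9 threshold are assumptions for illustration, not Liquid's actual data model:

```python
# Auto-approve proposals at or above the threshold; queue the rest
# for human review (threshold value assumed for illustration).
AUTO_APPROVE_THRESHOLD = 0.9

proposals = [
    {"source": "orders[].total_price", "target": "amount", "confidence": 0.97},
    {"source": "customer.email",       "target": "email",  "confidence": 0.95},
    {"source": "note_attributes",      "target": "memo",   "confidence": 0.55},
]

approved = [p for p in proposals if p["confidence"] >= AUTO_APPROVE_THRESHOLD]
needs_review = [p for p in proposals if p["confidence"] < AUTO_APPROVE_THRESHOLD]
# The two high-confidence mappings go through; the 0.55 one waits for a human.
```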

3

Read

Fetch data through the adapter. Pure HTTP + transforms — no LLM at runtime, no cost per call.
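The runtime transform is just a projection from the upstream record onto the target model. A minimal sketch, with illustrative field names:

```python
# Once the mapping is frozen at setup, every fetch is plain HTTP plus
# this kind of deterministic projection; no LLM is in the loop.
def apply_mapping(record, mapping):
    """Project an upstream record onto the target model."""
    return {target: record[source] for source, target in mapping.items()}

mapping = {"total_price": "amount", "contact_email": "email"}  # frozen at setup
upstream = {"total_price": 99.99, "contact_email": "j@example.com", "id": 7}

row = apply_mapping(upstream, mapping)
# row == {"amount": 99.99, "email": "j@example.com"}
```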

4

Write

Create, update, delete. Validation before send, idempotency keys, retry with backoff.
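The retry discipline on the write path can be sketched as follows; `execute_write`, the stub `flaky_send`, and the delay values are hypothetical stand-ins for the real client:

```python
import time
import uuid

# Sketch of a safe write: one idempotency key per logical operation,
# reused across every retry, with exponential backoff between attempts.
def execute_write(send, payload, retries=3, base_delay=0.01):
    key = str(uuid.uuid4())  # same key on every attempt, so retries are safe
    for attempt in range(retries):
        try:
            return send(payload, idempotency_key=key)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 10ms, 20ms, 40ms...

calls = []
def flaky_send(payload, idempotency_key):
    calls.append(idempotency_key)
    if len(calls) < 3:
        raise ConnectionError("transient upstream error")
    return {"status": "created", "payload": payload}

result = execute_write(flaky_send, {"amount": 99.99})
# Succeeds on the third attempt; every attempt reused the same key.
```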

Built for

Agents that need real work done

From customer support bots to autonomous ops agents — Liquid turns any API into a tool call.

🛒

E-commerce agent

Read orders from Shopify, update inventory in real time, create refunds on customer requests.

liquid.execute("POST /refunds", data)
📞

Support agent

Triage Zendesk tickets, escalate to Jira, post summaries to Slack — all through one interface.

liquid.execute("POST /issues", data)
💰

Finance agent

Monitor Stripe payouts, reconcile in QuickBooks, flag anomalies, create invoices on schedule.

liquid.execute("POST /invoices", data)
📊

Data agent

Pull metrics from Mixpanel, write to Snowflake, sync with Notion dashboards nightly.

liquid.execute_batch(action, items)
🚀

DevOps agent

Create Linear issues from Sentry errors, update GitHub PRs, notify on-call via PagerDuty.

liquid.execute("POST /issues", data)
✉️

Marketing agent

Sync HubSpot contacts, send personalized campaigns via SendGrid, track opens, update CRM.

liquid.execute_batch(action, leads)
Why Liquid

Everything agents need

🔄

Bidirectional

Not just read. Create, update, delete resources — with validation and retry baked in.

Batch operations

Hundreds of writes in one call. Concurrency limits, rate-limit aware scheduling.
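Bounded concurrency for batch writes is the classic semaphore pattern. A sketch under assumed names; `execute_batch` and the stub writer are illustrative, not the library's API:

```python
import asyncio

# Run a batch of writes with at most max_concurrency in flight at once.
async def execute_batch(items, write_one, max_concurrency=5):
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(item):
        async with sem:
            return await write_one(item)

    return await asyncio.gather(*(guarded(i) for i in items))

async def write_one(item):
    await asyncio.sleep(0)  # placeholder for the real HTTP round-trip
    return {"ok": True, "item": item}

results = asyncio.run(execute_batch(list(range(200)), write_one))
# 200 results back, with never more than 5 writes in flight.
```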

🔍

Smart registry

4-level search: exact → contains → full-text → fuzzy. Adapter reuse across users.
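The tiered lookup can be sketched with standard-library tools; the tier implementations here, especially the full-text and fuzzy stand-ins, are simplifications of whatever the registry actually does:

```python
import difflib

# Try each tier in order; return the hits from the first tier that
# finds anything: exact -> contains -> full-text-ish -> fuzzy.
def search_registry(query, names):
    q = query.lower()
    tiers = [
        lambda n: n.lower() == q,                                       # exact
        lambda n: q in n.lower(),                                       # contains
        lambda n: any(w in n.lower().split("-") for w in q.split()),    # full-text-ish
        lambda n: difflib.SequenceMatcher(None, q, n.lower()).ratio() > 0.6,  # fuzzy
    ]
    for match in tiers:
        hits = [n for n in names if match(n)]
        if hits:
            return hits
    return []

names = ["shopify-orders", "stripe-payouts", "jira-issues"]
# "stripe" resolves at the contains tier; a typo like "shopfy-orders"
# falls through to the fuzzy tier and still finds "shopify-orders".
```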

🩺

Self-healing

Periodic health checks. Auto-repair when upstream schemas change. No 3am pages.

🔐

Encrypted vault

AES-256-GCM per-user credential isolation. Never see plaintext tokens.

🤖

LLM agnostic

Anthropic, Gemini, OpenAI, Grok. Switch providers with one env var.

📖

Open source

Core library is AGPL-3.0. Run locally, self-host, or use our cloud.

🎯

Deterministic runtime

AI at setup only. Runtime is pure HTTP — no surprise bills, no nondeterministic behavior.

Multi-protocol

Three ways to connect

Use the protocol that fits your agent. Same backend, same adapters, same credits.

PRIMARY

REST API

Classic HTTP. Language-agnostic. Full Swagger docs at /docs.

POST /v1/connect
POST /v1/execute
POST /v1/propose-actions
CLAUDE / IDE

MCP Server

Native tools for Model Context Protocol agents. Stdio transport.

liquid_connect
liquid_fetch
liquid_execute_batch
GOOGLE / A2A

A2A Agent

Agent-to-Agent protocol. Auto-discoverable via agent card.

GET /.well-known/agent.json
POST /tasks/send
Works everywhere

Any API. No connector required.

If it has an OpenAPI spec, GraphQL introspection, or even just docs — Liquid can connect to it.

🛒
Shopify
💳
Stripe
📋
Jira
💬
Slack
📧
SendGrid
🔔
Zendesk
📝
Notion
📊
Mixpanel
❄️
Snowflake
🐙
GitHub
🦩
Linear
And more

Liquid discovers APIs on the fly — no pre-built connectors needed.

How we compare

Liquid vs the alternatives

Feature Liquid Zapier Firecrawl DIY
API discovery ~
Write operations
Self-healing on schema change
Native MCP + A2A
Open source core n/a
Pay per call n/a
Works with any API ~
Pricing

Simple, usage-based

Start free. Scale as you grow. No seat fees, no minimums.

Free

$0
100 credits / month
  • 2 concurrent requests
  • Community support
  • All discovery methods
  • MCP + A2A included
Start free

Starter

$19/mo
5,000 credits / month
  • 5 concurrent requests
  • Email support
  • Shared knowledge base
  • Batch operations
Upgrade

Scale

$499/mo
500,000 credits / month
  • 100 concurrent requests
  • Dedicated support
  • Custom SLA
  • SSO + audit logs
Contact us

Credit costs per operation

Operation         Credits    Type
discover          5          Setup
connect (new)     10         Setup
connect (cached)  0          Free
fetch             1          Read
execute           2          Write
execute_batch     1 / item   Write
propose_actions   3          AI
repair            8          AI
FAQ

Questions?

Is Liquid really deterministic at runtime?

Yes. AI participates only during setup: API discovery, field mapping, and action proposals. Once an adapter is configured, all runtime calls are pure HTTP with deterministic transforms. No LLM is invoked per fetch/execute. This keeps costs predictable and behavior reproducible.

What happens when an upstream API changes its schema?

Liquid runs periodic health checks on your adapters. When it detects a broken endpoint or changed schema, it triggers auto-repair — re-discovers the API, diffs the old and new schemas, and selectively remaps only the affected fields. Existing mappings that still work are preserved. You get notified; no 3am pages.
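Selective remapping is essentially a set difference over source fields. A toy sketch, with invented field names:

```python
# Keep mappings whose source field survives in the re-discovered schema;
# only the rest get flagged for AI-assisted re-mapping.
def diff_mappings(mappings, new_schema_fields):
    kept = {s: t for s, t in mappings.items() if s in new_schema_fields}
    broken = {s: t for s, t in mappings.items() if s not in new_schema_fields}
    return kept, broken

old = {"total_price": "amount", "contact_email": "email"}
new_fields = {"current_total_price", "contact_email"}  # upstream renamed a field

kept, broken = diff_mappings(old, new_fields)
# Only "total_price" needs re-mapping; "contact_email" is preserved as-is.
```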

How does the shared knowledge base work?

When one user maps Shopify's orders[].total_price → amount, that pattern gets anonymized and added to the knowledge base. The 1000th user connecting to Shopify pays 0 credits for the discovery — Liquid reuses the known mapping. Your adapter configs and credentials remain private per user.

Can I self-host instead of using the cloud?

Yes. The core library (liquid-api on PyPI) is AGPL-3.0 and runs anywhere Python 3.12+ runs. Liquid Cloud adds managed infrastructure — Postgres registry, encrypted vault, shared knowledge, billing, health monitoring, MCP/A2A servers. Use whichever fits your needs.

Which LLM providers are supported?

Anthropic Claude, Google Gemini, OpenAI, and xAI Grok. Switch with one env var (LLM_PROVIDER). On higher plan tiers you can bring your own key; on lower tiers the cloud provides the model.

How secure are stored credentials?

API credentials are encrypted with AES-256-GCM before being written to the database. Each user has isolated vault entries. We never log or return plaintext tokens. The encryption key is stored separately from the database, per production best practices.

What's the difference between MCP and A2A?

MCP (Model Context Protocol) is Anthropic's standard for LLM tools — works natively with Claude Desktop and many IDEs. A2A (Agent-to-Agent) is Google's protocol for agent interoperability. Both give your agent access to Liquid's capabilities; pick whichever your agent framework supports.

What does a credit cost, exactly?

One credit is one billable unit. An HTTP fetch costs 1 credit. Write operations cost 2. The first discovery of a new API costs 5–10 (setup). Fetching cached adapters is free. Prices are listed per operation above. Monthly credits reset; purchased packs don't expire.
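As a worked example against the credit table above (assuming a new connection is billed for both the discover and connect steps):

```python
# Credit costs per operation, copied from the pricing table above.
CREDITS = {"discover": 5, "connect_new": 10, "connect_cached": 0,
           "fetch": 1, "execute": 2, "execute_batch_item": 1,
           "propose_actions": 3, "repair": 8}

# One-time setup, then one fetch per day plus a weekly 100-item batch write.
setup = CREDITS["discover"] + CREDITS["connect_new"]                 # 15, paid once
monthly = 30 * CREDITS["fetch"] + 4 * 100 * CREDITS["execute_batch_item"]
# monthly == 430, comfortably inside the Starter plan's 5,000 credits.
```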

Give your agent hands and eyes

100 credits free. No card. 3 minutes to your first working integration.