Your agent needs Shopify data? Give it a URL. Liquid handles everything — discovery, auth, field mapping, retries, self-healing. No connectors to build.
Every new API means writing a connector. Every schema change means a 3am pager. Every agent needs its own integration layer.
```python
# Read data
adapter = await liquid.get_or_create(
    url="https://api.shopify.com",
    target_model={"amount": "float", "email": "str"},
)
orders = await liquid.fetch(adapter)

# Write data back
result = await liquid.execute(
    adapter,
    action_id="create_order",
    data={"amount": 99.99, "email": "j@example.com"},
)
# ✓ Done. No connector code. No mapping boilerplate.
```
```bash
# Connect to any API
curl -X POST https://liquid.ertad.family/v1/connect \
  -H "Authorization: Bearer lq-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://api.shopify.com", "target_model": {"amount": "float"}}'

# Fetch data
curl -X POST https://liquid.ertad.family/v1/fetch \
  -H "Authorization: Bearer lq-YOUR_KEY" \
  -d '{"adapter_id": "abc123"}'

# Write data
curl -X POST https://liquid.ertad.family/v1/execute \
  -H "Authorization: Bearer lq-YOUR_KEY" \
  -d '{"adapter_id": "abc123", "action_id": "create_order", "data": {"amount": 99.99}}'
```
Add to your MCP-compatible agent (Claude Desktop, etc.):

```json
{
  "mcpServers": {
    "liquid": {
      "command": "python",
      "args": ["-m", "mcp_server.server", "lq-YOUR_KEY"]
    }
  }
}
```

Your agent now has these tools:

- `liquid_connect` — discover and connect to APIs
- `liquid_fetch` — read data
- `liquid_execute` — write data
- `liquid_execute_batch` — batch writes
- `liquid_propose_actions` — AI-propose write mappings
AI participates at setup. Runtime is deterministic. Integrations self-heal.
MCP → OpenAPI → GraphQL → REST heuristic → Browser. Cheapest method first.
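The cheapest-first fallback can be pictured as a simple loop over discovery strategies. This is an illustrative sketch only; the probe functions and method names are hypothetical, not Liquid's actual internals:

```python
# Illustrative sketch: try each discovery strategy in cost order and
# stop at the first one that yields a schema.
from typing import Callable, Optional

DISCOVERY_ORDER = ["mcp", "openapi", "graphql", "rest_heuristic", "browser"]

def discover(url: str, probes: dict[str, Callable[[str], Optional[dict]]]) -> tuple[str, dict]:
    """Return (method, schema) from the first probe that succeeds."""
    for method in DISCOVERY_ORDER:
        schema = probes[method](url)
        if schema is not None:
            return method, schema
    raise LookupError(f"no discovery method worked for {url}")

# Example: only GraphQL introspection succeeds for this fake API.
probes = {m: (lambda u: None) for m in DISCOVERY_ORDER}
probes["graphql"] = lambda u: {"types": ["Order"]}
method, schema = discover("https://api.example.com", probes)
```

The cheap probes (an MCP manifest, an OpenAPI spec) cost one HTTP request; browser-based discovery is the expensive last resort.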
AI proposes field mappings with confidence scores. Human approves or auto-approves above threshold.
Fetch data through the adapter. Pure HTTP + transforms — no LLM at runtime, no cost per call.
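Conceptually, a stored mapping is just data: source paths, target fields, and casts applied to raw JSON. A sketch of that deterministic transform step (the mapping format here is an assumption for illustration):

```python
# Sketch of the deterministic runtime path: a saved mapping is a list of
# (source path, target field, cast) rules applied to raw API JSON.
# No LLM involved; the same input always yields the same output.
def get_path(obj, path: str):
    for key in path.split("."):
        obj = obj[key]
    return obj

def apply_mapping(raw: dict, mapping: list) -> dict:
    return {target: cast(get_path(raw, src)) for src, target, cast in mapping}

raw = {"order": {"total_price": "99.99", "buyer": {"email": "j@example.com"}}}
mapping = [
    ("order.total_price", "amount", float),
    ("order.buyer.email", "email", str),
]
record = apply_mapping(raw, mapping)
# {"amount": 99.99, "email": "j@example.com"}
```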
Create, update, delete. Validation before send, idempotency keys, retry with backoff.
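The three write-path safeguards compose naturally: validate first, pin one idempotency key across all attempts, back off exponentially. A minimal sketch under assumed names (`send`, `required`); real backoff delays would be seconds, not milliseconds:

```python
# Sketch of write safeguards: validation before send, a single
# idempotency key reused across retries (so a retry can't double-create),
# and exponential backoff between attempts.
import time
import uuid

def execute_with_retry(send, data: dict, required: set, max_tries: int = 3):
    missing = required - data.keys()
    if missing:                          # validate before sending anything
        raise ValueError(f"missing fields: {missing}")
    key = str(uuid.uuid4())              # one key for ALL attempts
    for attempt in range(max_tries):
        try:
            return send(data, idempotency_key=key)
        except ConnectionError:
            if attempt == max_tries - 1:
                raise
            time.sleep(2 ** attempt * 0.01)   # exponential backoff

calls = []
def flaky_send(data, idempotency_key):
    calls.append(idempotency_key)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return {"status": "created"}

result = execute_with_retry(flaky_send, {"amount": 99.99}, required={"amount"})
# succeeds on the 3rd try; every attempt reused the same idempotency key
```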
From customer support bots to autonomous ops agents — Liquid turns any API into a tool call.
Read orders from Shopify, update inventory in real time, create refunds on customer requests.
liquid.execute("POST /refunds", data)
Triage Zendesk tickets, escalate to Jira, post summaries to Slack — all through one interface.
liquid.execute("POST /issues", data)
Monitor Stripe payouts, reconcile in QuickBooks, flag anomalies, create invoices on schedule.
liquid.execute("POST /invoices", data)
Pull metrics from Mixpanel, write to Snowflake, sync with Notion dashboards nightly.
liquid.execute_batch(action, items)
Create Linear issues from Sentry errors, update GitHub PRs, notify on-call via PagerDuty.
liquid.execute("POST /issues", data)
Sync HubSpot contacts, send personalized campaigns via SendGrid, track opens, update CRM.
liquid.execute_batch(action, leads)
Not just read. Create, update, delete resources — with validation and retry baked in.
Hundreds of writes in one call. Concurrency limits, rate-limit aware scheduling.
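A semaphore is the standard way to cap in-flight requests in a large batch. A sketch under assumed names; Liquid's actual scheduler also tracks upstream rate-limit headers, which this omits:

```python
# Sketch of bounded-concurrency batching: a semaphore ensures that a
# batch of hundreds of writes never has more than N requests in flight.
import asyncio

async def execute_batch(items, send, max_concurrency: int = 5):
    sem = asyncio.Semaphore(max_concurrency)
    async def one(item):
        async with sem:
            return await send(item)
    return await asyncio.gather(*(one(i) for i in items))

async def fake_send(item):
    await asyncio.sleep(0)               # stand-in for the real HTTP call
    return {"id": item, "status": "ok"}

results = asyncio.run(execute_batch(range(200), fake_send))
# 200 results, at most 5 requests concurrent at any moment
```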
4-level search: exact → contains → full-text → fuzzy. Adapter reuse across users.
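The four levels can be sketched as a cascade that returns at the first level producing a hit. This is an illustrative approximation (token overlap stands in for real full-text search, `difflib` for the fuzzy matcher):

```python
# Sketch of 4-level adapter lookup: exact -> contains -> token overlap
# (a stand-in for full-text search) -> fuzzy ratio.
from difflib import SequenceMatcher

def find_adapter(query: str, names: list):
    q = query.lower()
    for name in names:                                   # 1. exact
        if name.lower() == q:
            return name
    for name in names:                                   # 2. contains
        if q in name.lower():
            return name
    q_tokens = set(q.split())
    for name in names:                                   # 3. token overlap
        if q_tokens & set(name.lower().split()):
            return name
    best = max(names, default=None,                      # 4. fuzzy
               key=lambda n: SequenceMatcher(None, q, n.lower()).ratio())
    if best and SequenceMatcher(None, q, best.lower()).ratio() > 0.6:
        return best
    return None

names = ["Shopify Orders", "Stripe Payouts", "Zendesk Tickets"]
hit = find_adapter("shopify", names)
# "Shopify Orders" (matched at the `contains` level)
```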
Periodic health checks. Auto-repair when upstream schemas change. No 3am pages.
AES-256-GCM per-user credential isolation. Never see plaintext tokens.
Anthropic, Gemini, OpenAI, Grok. Switch providers with one env var.
Core library is AGPL-3.0. Run locally, self-host, or use our cloud.
AI at setup only. Runtime is pure HTTP — no surprise bills, no nondeterministic behavior.
Use the protocol that fits your agent. Same backend, same adapters, same credits.
Classic HTTP. Language-agnostic. Full Swagger docs at /docs.
POST /v1/connect POST /v1/execute POST /v1/propose-actions
Native tools for Model Context Protocol agents. Stdio transport.
liquid_connect liquid_fetch liquid_execute_batch
Agent-to-Agent protocol. Auto-discoverable via agent card.
GET /.well-known/agent.json POST /tasks/send
If it has an OpenAPI spec, GraphQL introspection, or even just docs — Liquid can connect to it.
Liquid discovers APIs on the fly — no pre-built connectors needed.
| Feature | Liquid | Zapier | Firecrawl | DIY |
|---|---|---|---|---|
| API discovery | ✓ | ✗ | ~ | ✗ |
| Write operations | ✓ | ✓ | ✗ | ✓ |
| Self-healing on schema change | ✓ | ✗ | ✗ | ✗ |
| Native MCP + A2A | ✓ | ✗ | ✗ | ✗ |
| Open source core | ✓ | ✗ | ✓ | n/a |
| Pay per call | ✓ | ✗ | ✓ | n/a |
| Works with any API | ✓ | ~ | ✗ | ✓ |
Start free. Scale as you grow. No seat fees, no minimums.
| Operation | Credits | Type |
|---|---|---|
| discover | 5 | Setup |
| connect (new) | 10 | Setup |
| connect (cached) | 0 | Free |
| fetch | 1 | Read |
| execute | 2 | Write |
| execute_batch | 1 / item | Write |
| propose_actions | 3 | AI |
| repair | 8 | AI |
Yes. AI participates only during setup: API discovery, field mapping, and action proposals. Once an adapter is configured, all runtime calls are pure HTTP with deterministic transforms. No LLM is invoked per fetch/execute. This keeps costs predictable and behavior reproducible.
Liquid runs periodic health checks on your adapters. When it detects a broken endpoint or changed schema, it triggers auto-repair — re-discovers the API, diffs the old and new schemas, and selectively remaps only the affected fields. Existing mappings that still work are preserved. You get notified; no 3am pages.
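The "selectively remaps only the affected fields" step is essentially a set diff over source paths. A minimal sketch with a hypothetical mapping shape:

```python
# Sketch of selective repair: diff the mapping against the rediscovered
# schema; only mappings whose source path disappeared go back through AI
# remapping, everything that still resolves is preserved untouched.
def plan_repair(mapping: dict, new_fields: set):
    keep = {t: s for t, s in mapping.items() if s in new_fields}
    broken = {t: s for t, s in mapping.items() if s not in new_fields}
    return keep, broken

mapping = {"amount": "orders[].total_price", "email": "customer.email"}
new_fields = {"orders[].current_total_price", "customer.email"}  # upstream renamed a field
keep, broken = plan_repair(mapping, new_fields)
# keep == {"email": "customer.email"}; only "amount" needs remapping
```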
When one user maps Shopify's orders[].total_price → amount, that pattern gets anonymized and added to the knowledge base. The 1000th user connecting to Shopify pays 0 credits for the discovery — Liquid reuses the known mapping. Your adapter configs and credentials remain private per user.
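The economics of shared knowledge come down to a cache check at connect time. A toy sketch (credit values taken from the pricing table above; the storage and anonymization layers are elided):

```python
# Toy sketch of knowledge reuse: the first connect for a host pays for
# discovery and stores the anonymized pattern; later connects hit the
# cache and cost 0 credits.
knowledge = {}

def connect(host: str, discover):
    if host in knowledge:
        return knowledge[host], 0        # connect (cached): 0 credits
    mapping = discover(host)
    knowledge[host] = mapping
    return mapping, 10                   # connect (new): 10 credits

m1, cost1 = connect("api.shopify.com", lambda h: {"orders[].total_price": "amount"})
m2, cost2 = connect("api.shopify.com", lambda h: {"orders[].total_price": "amount"})
# cost1 == 10, cost2 == 0, identical mapping both times
```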
Yes. The core library (liquid-api on PyPI) is AGPL-3.0 and runs anywhere Python 3.12+ runs. Liquid Cloud adds managed infrastructure — Postgres registry, encrypted vault, shared knowledge, billing, health monitoring, MCP/A2A servers. Use whichever fits your needs.
Anthropic Claude, Google Gemini, OpenAI, and xAI Grok. Switch with one env var (LLM_PROVIDER). On Growth plan and above you can bring your own key; on lower tiers the cloud provides the model.
API credentials are encrypted with AES-256-GCM before being written to the database. Each user has isolated vault entries. We never log or return plaintext tokens. The encryption key is stored separately from the database, per production best practices.
MCP (Model Context Protocol) is Anthropic's standard for LLM tools — works natively with Claude Desktop and many IDEs. A2A (Agent-to-Agent) is Google's protocol for agent interoperability. Both give your agent access to Liquid's capabilities; pick whichever your agent framework supports.
One credit is one billable unit. An HTTP fetch costs 1 credit. Write operations cost 2. The first discovery of a new API costs 5–10 (setup). Fetching cached adapters is free. Prices are listed per operation above. Monthly credits reset; purchased packs don't expire.
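As a worked example using the per-operation prices above, here is what one new integration plus a month of typical traffic would cost (the traffic volumes are hypothetical):

```python
# Worked pricing example: connect one new API, then run 1,000 reads and
# one 200-item batch write. Per-operation costs match the table above.
COSTS = {"discover": 5, "connect_new": 10, "fetch": 1,
         "execute": 2, "execute_batch_per_item": 1}

monthly = (
    COSTS["discover"] + COSTS["connect_new"]   # one-time setup: 15
    + 1000 * COSTS["fetch"]                    # reads: 1000
    + 200 * COSTS["execute_batch_per_item"]    # batch writes: 200
)
# monthly == 1215 credits
```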
100 credits free. No card. 3 minutes to your first working integration.