What this is
MCP server for HiveCompute — AI inference brokering and GPU compute on the Hive Civilization. It resells inference at a margin across providers, settling in USDC on Base L2.
Settlement is real: USDC on Base L2 via Hive Civilization rails — no simulated proofs, no mock receipts. Pricing is per-call; see the JSON-LD offers for the full schedule.
Tools (5)
- compute.chat — Run inference via Hive's OpenAI-compatible router. Submit a prompt or message array to any available model. Billed per input+output token in USDC on Base L2. Hive routes to the cheapest available model meeting your latency and quality spec.
- compute.embed — Generate vector embeddings via Hive's embedding router. Billed per 1K input tokens in USDC on Base L2. Returns a float array suitable for semantic search, clustering, or RAG pipelines.
- compute.list_models — Browse all models available through the Hive inference router, including per-token pricing in USDC, context window size, latency tier, and provider. No authentication required.
- compute.estimate_cost — Estimate the USDC cost of a prompt before running inference. Returns a cost breakdown by input tokens, output tokens, and routing fee, so agents can budget before committing to a payment.
- compute.get_usage — Get an agent's compute usage history: total tokens consumed, total USDC spent, a breakdown by model, and a timestamped inference call log.
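To budget before paying, an agent can wrap compute.estimate_cost in a standard MCP tools/call envelope and POST it to /mcp. A minimal sketch in Python; the base URL is a hypothetical placeholder, and the argument names ("model", "prompt", "max_output_tokens") are assumptions — fetch the tool's input schema via tools/list for the authoritative shape.

```python
import json
import urllib.request

BASE_URL = "https://hivecompute.example"  # hypothetical host; substitute the real server URL

def mcp_request(method: str, params: dict, req_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 envelope for POST /mcp (MCP 2024-11-05)."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Ask compute.estimate_cost for a quote before committing a payment.
# Argument names below are illustrative assumptions, not the server's schema.
estimate = mcp_request("tools/call", {
    "name": "compute.estimate_cost",
    "arguments": {
        "model": "llama-3.1-8b",
        "prompt": "Summarize this contract in one paragraph.",
        "max_output_tokens": 256,
    },
})

def post_mcp(payload: dict) -> dict:
    """POST the envelope to /mcp and decode the JSON-RPC response."""
    req = urllib.request.Request(
        BASE_URL + "/mcp",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# result = post_mcp(estimate)  # uncomment against a live server
```

The same envelope shape works for compute.chat: swap the tool name and arguments, and read the cost breakdown from the estimate first.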
Discovery
- GET /.well-known/mcp.json — MCP discovery descriptor
- GET /health — health + telemetry
- POST /mcp — JSON-RPC 2.0 over Streamable-HTTP, MCP 2024-11-05
- GET /sitemap.xml — crawler sitemap
- GET /robots.txt — allow-all crawl policy
- GET /.well-known/security.txt — security contact
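An agent that discovers the server via GET /.well-known/mcp.json can enumerate the five tools and their input schemas with MCP's standard tools/list method against POST /mcp. A minimal sketch, assuming a hypothetical base URL:

```python
import json
import urllib.request

BASE_URL = "https://hivecompute.example"  # hypothetical host; substitute the real server URL

def tools_list_payload(req_id: int = 1) -> dict:
    """JSON-RPC 2.0 envelope for MCP's standard tools/list method."""
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/list", "params": {}}

def list_tools(base_url: str = BASE_URL) -> list:
    """POST tools/list to /mcp and return the advertised tool descriptors."""
    req = urllib.request.Request(
        base_url + "/mcp",
        data=json.dumps(tools_list_payload()).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]["tools"]

# tools = list_tools()  # against a live server, expect five entries: compute.chat, compute.embed, ...
```

Each returned descriptor carries the tool's name, description, and JSON input schema, which is the authoritative source for argument names when calling compute.chat or compute.estimate_cost.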