Category

LLM inference & hosting — aggregators, low-latency hardware, pricing

Token-metered inference services, aggregator gateways, custom-silicon providers, and serverless GPU platforms at a glance.

This is the feeder line for apps that don’t want to babysit GPUs. Compare **unit price** (per token or per second), **tail latency** (P95 on the same model), **model catalog**, **data routing**, and **OpenAI-compatible endpoints**. Ultra-low-latency use cases (voice agents, interactive IDEs) look first at Groq/Cerebras/SambaNova. Multi-vendor experimentation leans on OpenRouter/LiteLLM. Custom weights land on Replicate/Modal/Baseten/Together/Fireworks.
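Because most of these hosts speak the OpenAI wire format, switching vendors is often just a base URL and API key swap. A minimal sketch of that idea (base URLs and model IDs below are illustrative assumptions — confirm each on the provider's docs):

```python
# Sketch: vendor swap via OpenAI-compatible endpoints.
# Base URLs and model names are assumptions for illustration,
# not a verified catalog -- check each provider's documentation.
PROVIDERS = {
    "groq":      {"base_url": "https://api.groq.com/openai/v1",
                  "model": "llama-3.1-8b-instant"},
    "together":  {"base_url": "https://api.together.xyz/v1",
                  "model": "meta-llama/Llama-3-8b-chat-hf"},
    "fireworks": {"base_url": "https://api.fireworks.ai/inference/v1",
                  "model": "accounts/fireworks/models/llama-v3-8b"},
}

def chat_request(provider: str, prompt: str) -> dict:
    """Build the JSON body for a POST to {base_url}/chat/completions."""
    cfg = PROVIDERS[provider]
    return {
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
```

The same request body works against any endpoint in the table; only the URL and credentials change, which is what makes A/B price and latency testing across vendors cheap.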

Editorial / GSC add-ons

Aggregator gateways vs direct vendor contracts

Gateways win on speed-of-switch and A/B pricing; they lose on the extra data hop and a longer SLA chain. Critical enterprise paths usually graduate to direct contracts.
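The A/B pricing math is simple enough to run before committing. A back-of-envelope sketch (all $/1M-token rates and the gateway markup below are hypothetical placeholders, not real pricing):

```python
# Sketch: compare a direct contract vs. a gateway route on blended token cost.
# All rates here are made-up placeholders -- pull real numbers from pricing pages.
def blended_cost(in_tokens: int, out_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request at $/1M-token input/output rates."""
    return (in_tokens * in_price_per_m + out_tokens * out_price_per_m) / 1_000_000

# A month of traffic: 1M requests, 2k-token prompts, 500-token answers.
direct  = blended_cost(2_000, 500, 0.50, 1.50) * 1_000_000  # hypothetical direct rate
gateway = blended_cost(2_000, 500, 0.55, 1.65) * 1_000_000  # ~10% gateway markup (assumed)
# direct = $1,750/mo, gateway = $1,925/mo -> $175/mo buys the switching flexibility
```

At this made-up scale the gateway premium is a line item, not a rounding error, which is why high-volume paths tend to graduate to direct contracts.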

Are Groq and Cerebras actually cheaper than GPU clouds?

On latency-sensitive loads the $/token and tail-latency curves are often better, but the model catalog and burst quotas are narrower—load test with real traffic before cutover.

Where do I deploy a fine-tuned model?

Replicate, Modal, Baseten, Together, and Fireworks all offer custom weights with metered billing. Watch cold-start tail latency and how reserved hardware is billed.
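Cold-start tail latency also shows up on the bill: serverless platforms meter hardware-seconds, and on some of them spin-up time is billed too. A back-of-envelope sketch (all rates and cold-start figures below are hypothetical — measure your own workload):

```python
# Sketch: estimate a monthly serverless inference bill including cold starts.
# The $/GPU-second rate and cold-start numbers are invented for illustration.
def monthly_bill(requests: int, secs_per_req: float, price_per_gpu_sec: float,
                 cold_starts: int, cold_start_secs: float) -> float:
    """Dollars per month when both warm time and cold-start time are metered."""
    warm_secs = requests * secs_per_req
    cold_secs = cold_starts * cold_start_secs  # billing of this varies by platform
    return (warm_secs + cold_secs) * price_per_gpu_sec

# 100k requests at 2s each, $0.0008/GPU-s, 5k cold starts at 30s each:
# warm = 200,000 s, cold = 150,000 s -> (350,000 s) * 0.0008 = $280/mo
```

Note that in this invented example cold starts add 75% on top of warm compute, which is why min-replica settings and reserved hardware pricing matter for fine-tuned models with bursty traffic.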

Tools in this category

Summaries and official links live on each tool page—browse related picks in-category.

Groq

Groq: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
Replicate

Replicate: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
fal

fal: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
Together AI

Together AI: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
Fireworks AI

Fireworks AI: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
OpenRouter

OpenRouter: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
SiliconFlow (硅基流动)

SiliconFlow (硅基流动): popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
Cerebras Inference

Wafer-scale inference service from Cerebras claiming extreme token throughput on popular open LLMs—great for latency-sensitive interactive apps; verify model list and quotas on the site.

Inference / Hosting
SambaNova Cloud

SambaNova Cloud: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
Baseten

Baseten: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
Modal

Modal: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.

Inference / Hosting
LiteLLM

Open-source proxy gateway exposing 100+ LLM vendors through one OpenAI-compatible API—routing, budgets, fallbacks, and logging without reinventing plumbing.

Inference / Hosting
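The core pattern a gateway like LiteLLM automates—try a primary vendor, fall back on failure—can be sketched in a few lines of plain Python. This is illustrative only, not LiteLLM's actual API:

```python
# Sketch of the fallback pattern a proxy gateway automates.
# Not LiteLLM's real interface -- just the core idea in plain Python.
def complete_with_fallback(prompt: str, providers: list) -> str:
    """providers: ordered callables prompt -> str; first success wins."""
    errors = []
    for call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # real gateways distinguish 429s, 5xx, timeouts
            errors.append(exc)
    raise RuntimeError(f"all {len(providers)} providers failed: {errors}")
```

A real gateway layers budgets, per-key logging, and retry policy on top of this loop, which is exactly the plumbing the blurb above says you shouldn't reinvent.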