Category
LLM inference & hosting — aggregators, low-latency hardware, pricing
Token-metered inference services, aggregator gateways, custom-silicon providers, and serverless GPU platforms at a glance.
This is the feeder line for apps that don't want to babysit GPUs. Compare **unit price** (per token or per second), **tail latency** (P95 on the same model and prompt mix), **model catalog**, **data routing**, and **OpenAI-compatible endpoints**. Ultra-low-latency use cases (voice agents, interactive IDEs) gravitate to Groq, Cerebras, and SambaNova. Multi-vendor experimentation leans on OpenRouter or LiteLLM. Custom weights land on Replicate, Modal, Baseten, Together, or Fireworks.
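Because most providers in this category expose an OpenAI-compatible `/chat/completions` endpoint, an A/B test across vendors usually means swapping only the base URL and model id while the request body stays identical. A minimal sketch of that idea (base URLs and model names here are illustrative; confirm them in each vendor's docs):

```python
import json

# Illustrative base URLs -- verify against each vendor's documentation.
PROVIDERS = {
    "groq": "https://api.groq.com/openai/v1",
    "together": "https://api.together.xyz/v1",
    "fireworks": "https://api.fireworks.ai/inference/v1",
}

def chat_request(provider: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for an OpenAI-compatible chat completion.

    The body shape is the same across compatible vendors, so switching
    providers only changes the base URL and the model identifier.
    """
    url = f"{PROVIDERS[provider]}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body
```

Pointing the same harness at two providers then reduces to two dictionary entries, which is exactly what makes gateway-level price and latency comparisons cheap.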
Editorial / GSC add-ons
Aggregator gateways vs direct vendor contracts
Gateways win on speed of switching and easy A/B pricing; they lose on an extra data hop and a longer SLA chain. Critical enterprise paths usually graduate to direct contracts.
Are Groq and Cerebras actually cheaper than GPU clouds?
On latency-sensitive loads the $/token and tail-latency curves are often better, but the model catalog and burst quotas are narrower; load-test with real traffic before cutover.
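"Load test with real traffic" concretely means replaying a representative prompt mix against each candidate and comparing tail latency, not averages. A minimal sketch of the percentile step, using Python's standard library (the sample latencies are made up; the timing loop around your actual requests is omitted):

```python
import statistics

def p95(latencies_ms: list[float]) -> float:
    """Tail latency: the 95th percentile of recorded request latencies."""
    # quantiles(n=20) yields 19 cut points; index 18 is the 95% cut.
    return statistics.quantiles(latencies_ms, n=20)[18]

# Hypothetical samples: similar medians, very different tails.
provider_a = [110, 120, 115, 118, 122, 119, 121, 117, 116, 900]  # one slow outlier
provider_b = [140, 145, 150, 148, 143, 147, 149, 146, 144, 151]
```

A provider that looks cheaper on the median can still lose the comparison once the P95 outliers are priced in, which is why the editorial advice above stresses the tail, not the mean.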
Where do I deploy a fine-tuned model?
Replicate, Modal, Baseten, Together, and Fireworks all offer custom weights with metered billing. Watch cold-start tail latency and how reserved hardware is billed.
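The metered-vs-reserved billing question comes down to sustained throughput: below some tokens/hour, pay-per-token is cheaper; above it, a flat hourly reservation wins. A back-of-envelope sketch with placeholder prices (not quotes from any listed vendor):

```python
def breakeven_tokens_per_hour(metered_usd_per_mtok: float,
                              reserved_usd_per_hour: float) -> float:
    """Tokens/hour above which reserved hardware beats metered billing.

    Below this sustained throughput, pay-per-token is cheaper; above it,
    the flat hourly rate wins. Prices are placeholders for illustration.
    """
    return reserved_usd_per_hour / metered_usd_per_mtok * 1_000_000

# e.g. $0.20 per 1M tokens metered vs a $2.00/hour reserved GPU:
# reserved only wins past ~10M tokens/hour of sustained load.
```

Bursty traffic rarely clears the breakeven, which is why cold-start tail latency on the metered tier, rather than reserved pricing, is usually the first thing to measure.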
Tools in this category
Summaries and official links live on each tool page—browse related picks in-category.
Groq: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
Replicate: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
fal: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
Together AI: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
Fireworks AI: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
OpenRouter: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
SiliconFlow (硅基流动): popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
Cerebras: wafer-scale inference service claiming extreme token throughput on popular open LLMs—great for latency-sensitive interactive apps; verify the model list and quotas on the site.
SambaNova Cloud: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
Baseten: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
Modal: popular AI product—see the official site for features, pricing, supported regions, data handling, and latest model lineup.
LiteLLM: open-source proxy gateway exposing 100+ LLM vendors through one OpenAI-compatible API—routing, budgets, fallbacks, and logging without reinventing plumbing.
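The proxy's vendor fan-out is driven by a YAML model list that maps an app-facing alias to one or more upstream providers. A minimal sketch, assuming LiteLLM's current config schema (model ids and env-var names are illustrative; check the project docs):

```yaml
model_list:
  - model_name: fast-llama            # alias your app calls
    litellm_params:
      model: groq/llama-3.1-8b-instant
      api_key: os.environ/GROQ_API_KEY
  - model_name: fast-llama            # same alias -> proxy can balance or fall back
    litellm_params:
      model: together_ai/meta-llama/Llama-3-8b-chat-hf
      api_key: os.environ/TOGETHER_API_KEY
```

Giving two upstreams the same alias is what enables the fallback and A/B routing mentioned above without any application code changes.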