
Unified LLM Gateway

Requesty is an AI gateway that optimizes cost and performance for developers using multiple large language models (LLMs).

Sub-processors: 19
Headquarters: Not specified
Size: 201-1000
Market: B2B
Public: Private
Products: AI Gateway, Real-time Analytics, Model Library, Prompt Library, Routing Policies
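One of the listed products is Routing Policies. The source does not describe how Requesty's routing works, so the following is only a minimal sketch of what a cost-based routing policy in a multi-model gateway might look like; all model names, quality tiers, and per-token prices are hypothetical placeholders, not Requesty's actual catalog or pricing.

```python
# Hypothetical cost-based routing policy for a multi-model gateway.
# Model names, tiers, and prices are illustrative placeholders only.

PRICE_PER_1K_TOKENS = {        # hypothetical USD prices per 1K output tokens
    "fast-model": 0.002,
    "balanced-model": 0.010,
    "frontier-model": 0.060,
}

MIN_QUALITY_TIER = {           # hypothetical minimum tier required per task
    "summarize": "fast-model",
    "code-review": "balanced-model",
    "legal-analysis": "frontier-model",
}

TIER_ORDER = ["fast-model", "balanced-model", "frontier-model"]

def route(task: str) -> str:
    """Pick the cheapest model whose tier meets the task's minimum tier."""
    minimum = MIN_QUALITY_TIER.get(task, "balanced-model")
    eligible = TIER_ORDER[TIER_ORDER.index(minimum):]
    return min(eligible, key=PRICE_PER_1K_TOKENS.__getitem__)

print(route("summarize"))      # prints "fast-model"
print(route("legal-analysis")) # prints "frontier-model"
```

The idea is simply that a policy constrains which models are eligible for a request and an objective (here, price) breaks ties among them; a real gateway would also account for latency, availability, and failover.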

All Sub-Processors

Name | Category
Alibaba Cloud | AI & Machine Learning
Amazon Web Services Inc. (AWS) | Cloud Infrastructure
Anthropic | AI & Machine Learning
AWS Bedrock | AI & Machine Learning
DeepInfra | AI & Machine Learning
DeepSeek | AI & Machine Learning
Google LLC (Gemini API) | AI & Machine Learning
Google LLC (Vertex AI) | AI & Machine Learning
Groq | AI & Machine Learning
Microsoft Azure AI | AI & Machine Learning
Mistral AI | AI & Machine Learning
Nebius AI | AI & Machine Learning
NetMind.AI | AI & Machine Learning
Novita AI | AI & Machine Learning
OpenAI | AI & Machine Learning
Parasail | AI & Machine Learning
Perplexity AI | AI & Machine Learning
Together AI | AI & Machine Learning
xAI | AI & Machine Learning

Data Processing Locations

US: 9 sub-processors
US / EU: 3 sub-processors
EU: 2 sub-processors
Singapore: 1 sub-processor
Frankfurt, Germany (EU Central 1): 1 sub-processor
China: 1 sub-processor
Global: 1 sub-processor
UK: 1 sub-processor
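As a quick consistency check, the per-location counts above should add up to the 19 disclosed sub-processors; a short tally confirms they do.

```python
# Per-location sub-processor counts as listed on the page.
locations = {
    "US": 9,
    "US / EU": 3,
    "EU": 2,
    "Singapore": 1,
    "Frankfurt, Germany (EU Central 1)": 1,
    "China": 1,
    "Global": 1,
    "UK": 1,
}

total = sum(locations.values())
print(total)  # prints 19, matching the disclosed sub-processor count
```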

Frequently Asked Questions

How many sub-processors does Unified LLM Gateway use?
Unified LLM Gateway uses 19 sub-processors (third-party data processors) as disclosed on their public sub-processor page.
What are Unified LLM Gateway's main sub-processors?
Unified LLM Gateway's sub-processors include Alibaba Cloud (large-language-model inference), Amazon Web Services Inc. (AWS) (primary cloud infrastructure for compute, storage, and networking, hosting the Requesty platform and databases), Anthropic (large-language-model inference), AWS Bedrock (large-language-model inference), DeepInfra (large-language-model inference), and 14 more.
Where does Unified LLM Gateway process data?
Unified LLM Gateway's sub-processors are located in the US, the EU (including Frankfurt, Germany, EU Central 1), the UK, Singapore, and China; some operate across both the US and EU, and one is distributed globally.