
Product Guides · Intermediate · 15 min

Multi-LLM Routing & AI Providers

How BrainstormMSP routes tasks across 5 AI providers — Claude, OpenAI, Gemini, Perplexity, and Grok

BrainstormMSP routes AI tasks across 5 providers, selecting the model best suited to each task type. This multi-LLM approach delivers stronger reasoning, faster responses, and higher reliability than relying on any single model.

1. The 5 AI Providers

Provider Overview

| Provider | Primary Use | Model |
|----------|-------------|-------|
| Claude (Anthropic) | Reasoning, code generation, complex analysis | claude-opus-4 |
| OpenAI | Embeddings, alternative perspectives | text-embedding-3-large, o3 |
| Gemini (Google) | Large context analysis, document processing | gemini-3-pro |
| Perplexity | Real-time web research, current events | sonar-pro |
| Grok (xAI) | Social intelligence, trend analysis | grok-3 |

Why Multiple Providers?

**No single LLM is best at everything**: Each excels in different domains

**Reliability**: If one provider is down, others handle the load

**Cost optimization**: Route simple tasks to cheaper models

**Verification**: Cross-check critical decisions across providers

2. Routing Logic

How Routing Works

The Signal Processor determines which provider to use based on five criteria (sketched in code after this list):

1. **Task type**: Reasoning vs. search vs. embedding vs. analysis

2. **Context size**: Large documents route to Gemini's large context window

3. **Freshness**: Tasks requiring current data route to Perplexity

4. **Domain**: Social/trend analysis routes to Grok

5. **Criticality**: High-risk decisions use Claude for the strongest reasoning
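
To make these criteria concrete, here is a minimal routing sketch. The `Task` fields, the threshold value, and the `route_task` function are illustrative assumptions, not the actual Signal Processor implementation.

```python
from dataclasses import dataclass

# Illustrative threshold; the real Signal Processor values are not published here.
LARGE_CONTEXT_TOKENS = 100_000

@dataclass
class Task:
    kind: str              # e.g. "reasoning", "embedding", "research", "social"
    context_tokens: int    # estimated prompt size
    needs_live_data: bool  # freshness requirement
    is_critical: bool      # high-risk decision

def route_task(task: Task) -> str:
    """Pick a provider using the five routing criteria above."""
    if task.kind == "embedding":
        return "openai"          # task type: embeddings
    if task.needs_live_data:
        return "perplexity"      # freshness: live web data
    if task.kind == "social":
        return "grok"            # domain: social/trend analysis
    if task.context_tokens > LARGE_CONTEXT_TOKENS:
        return "gemini"          # context size: very large documents
    # Default and high-criticality work stays on Claude for reasoning.
    return "claude"
```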

Routing Rules

**Default**: Claude handles all reasoning and decision-making

**Embeddings**: OpenAI text-embedding-3-large (1024 dimensions)

**Web research**: Perplexity for any task requiring live web data

**Large context**: Gemini for documents exceeding 100K tokens

**Social signals**: Grok for X/Twitter analysis and trend detection
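
Under the same assumptions as the sketch above, these rules map directly onto the branch order:

```python
# Hypothetical calls against the route_task sketch above.
route_task(Task("reasoning", 4_000, False, True))     # "claude"      (default / critical reasoning)
route_task(Task("embedding", 500, False, False))      # "openai"      (embeddings)
route_task(Task("research", 2_000, True, False))      # "perplexity"  (live web data)
route_task(Task("analysis", 250_000, False, False))   # "gemini"      (context over 100K tokens)
route_task(Task("social", 1_000, False, False))       # "grok"        (social signals)
```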

3. Model Selection

Automatic Model Selection

The brain selects models based on task requirements (summarized in a mapping sketch after the lists below):

Claude Tasks

Brain decisions and OODA loop reasoning

Agent action planning

Evidence chain analysis

Code generation for remediation scripts

OpenAI Tasks

Text embeddings for the knowledge base (pgvector)

Alternative reasoning for cross-verification
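
As a sketch of the embedding task above: generate a 1024-dimension embedding and store it in a pgvector column. The table name, column names, and connection string are hypothetical; only the OpenAI embedding model and the 1024-dimension setting come from this guide.

```python
import psycopg            # PostgreSQL driver (pgvector extension assumed installed)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def store_chunk(conn: psycopg.Connection, content: str) -> None:
    """Embed one knowledge-base chunk and insert it into a pgvector table."""
    resp = client.embeddings.create(
        model="text-embedding-3-large",
        input=content,
        dimensions=1024,  # matches the 1024-dimension column assumed below
    )
    vec = resp.data[0].embedding
    conn.execute(
        "INSERT INTO kb_chunks (content, embedding) VALUES (%s, %s::vector)",
        (content, str(vec)),  # pgvector accepts the '[x, y, ...]' text form
    )

with psycopg.connect("dbname=brainstorm") as conn:
    store_chunk(conn, "Example compliance note to index.")
    conn.commit()
```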

Gemini Tasks

Processing large compliance documents

Analyzing lengthy log files

Bulk evidence analysis

Perplexity Tasks

CVE and vulnerability research

Vendor status checks

Industry news monitoring

Grok Tasks

Social media threat intelligence

Brand reputation monitoring

MSP industry trend analysis
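
The task lists above boil down to a lookup from task category to provider and model. The mapping below is an illustrative sketch using the models named in this guide, not the brain's actual configuration.

```python
# Illustrative task-category -> (provider, model) mapping; names are assumptions.
TASK_MODELS: dict[str, tuple[str, str]] = {
    "brain_decision":      ("claude", "claude-opus-4"),
    "remediation_script":  ("claude", "claude-opus-4"),
    "kb_embedding":        ("openai", "text-embedding-3-large"),
    "cross_verification":  ("openai", "o3"),
    "compliance_document": ("gemini", "gemini-3-pro"),
    "log_analysis":        ("gemini", "gemini-3-pro"),
    "cve_research":        ("perplexity", "sonar-pro"),
    "vendor_status":       ("perplexity", "sonar-pro"),
    "social_intel":        ("grok", "grok-3"),
    "trend_analysis":      ("grok", "grok-3"),
}

provider, model = TASK_MODELS["cve_research"]  # ("perplexity", "sonar-pro")
```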

4. Provider Strengths

Why Each Provider Was Chosen

Claude (Anthropic)

Strongest reasoning and instruction following

Best at multi-step planning and evidence analysis

Primary brain reasoning engine

OpenAI

Industry-standard embedding model

Strong at structured data analysis

Proven reliability for embeddings workload

Gemini (Google)

Largest context window (1M+ tokens)

Strong at document understanding

Good for bulk analysis tasks

Perplexity

Built-in web search — no separate search API needed

Always returns sourced, current information

Ideal for "what's happening right now" queries

Grok (xAI)

Unique access to X/Twitter data

Real-time social intelligence

Unfiltered analysis of public sentiment

5. Fallback Behavior

Provider Failover

If a provider is unavailable, the routing system fails over (a code sketch follows the fallback mapping below):

1. **Primary provider fails**: Route to secondary provider

2. **Retry with backoff**: 3 retries with exponential backoff

3. **Fallback mapping**:

- Claude fails → OpenAI o3 for reasoning

- OpenAI fails → Claude for embeddings (dimension adjustment)

- Gemini fails → Claude with chunked context

- Perplexity fails → Grok for web research

- Grok fails → Perplexity for social analysis
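
A minimal sketch of this failover pattern, assuming the fallback table above. The `call_provider` stub, `ProviderError`, and backoff timings are placeholders, not the actual implementation.

```python
import time

class ProviderError(Exception):
    """Raised by the (placeholder) provider client on failure."""

def call_provider(provider: str, prompt: str) -> str:
    """Placeholder for the real per-provider client call."""
    return f"[{provider}] response to: {prompt}"

# Fallback mapping from the list above.
FALLBACKS = {
    "claude": "openai",      # OpenAI o3 for reasoning
    "openai": "claude",      # embeddings with dimension adjustment
    "gemini": "claude",      # chunked context
    "perplexity": "grok",    # web research
    "grok": "perplexity",    # social analysis
}

def call_with_failover(provider: str, prompt: str, retries: int = 3) -> str:
    """Retry the primary provider with exponential backoff, then fail over."""
    for attempt in range(retries):
        try:
            return call_provider(provider, prompt)
        except ProviderError:
            if attempt < retries - 1:
                time.sleep(2 ** attempt)  # 1s, then 2s between the three attempts
    # All retries on the primary failed: hand the task to the secondary provider.
    return call_provider(FALLBACKS[provider], prompt)
```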

AI Sentinels

The 5 AI sentinels continuously monitor provider health:

Response time tracking

Error rate monitoring

Token quota tracking

Automatic routing updates based on current health
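
One way a sentinel health check could look in code; the thresholds, field names, and GREEN/YELLOW/RED cutoffs here are assumptions for illustration, not the sentinels' real logic.

```python
from dataclasses import dataclass

@dataclass
class ProviderHealth:
    provider: str
    p95_latency_ms: float   # rolling response-time percentile
    error_rate: float       # fraction of failed calls in the window
    quota_remaining: float  # fraction of token quota left

def status(health: ProviderHealth) -> str:
    """Classify provider health; thresholds are illustrative only."""
    if health.error_rate > 0.20 or health.quota_remaining < 0.05:
        return "RED"     # routing should avoid this provider
    if health.p95_latency_ms > 5_000 or health.error_rate > 0.05 or health.quota_remaining < 0.20:
        return "YELLOW"  # usable, but deprioritized for latency-sensitive tasks
    return "GREEN"

status(ProviderHealth("gemini", 1_200, 0.01, 0.80))  # "GREEN"
```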

Viewing AI Provider Status

Go to **Brain > Observatory** to see:

Current status of all 5 providers (GREEN/YELLOW/RED)

Response time trends

Usage statistics per provider

Routing decisions log

You've completed the Multi-LLM Routing & AI Providers guide.