## provider specs

what you need to know before integrating
| provider | pricing model | est. cost/query | latency | api pattern | citations | special capabilities |
|---|---|---|---|---|---|---|
| OpenAI o3 (o3-deep-research) | per-token | $0.50-2.00/query | 10-30+ min | Responses API, async background | yes | MCP servers, code interpreter, file search, web search |
| OpenAI o4-mini (o4-mini-deep-research) | per-token | $0.05-0.30/query | 3-10 min | Responses API, async background | yes | MCP servers, code interpreter, file search, web search |
| Perplexity Sonar (sonar-deep-research) | per-token + per-search | $0.15-0.50/query | 1-3 min | Chat Completions (OpenAI-compatible), synchronous | yes (inline + source URLs) | fastest deep research, citation-focused, search grounding |
| Gemini Deep Research (deep-research-pro-preview-12-2025) | per-token + per-search | $0.50-3.00/query | 5-60 min | Interactions API, async background | yes | file search (experimental), code execution, 1M token context, Google Search grounding |
| Parallel Pro (pro) | per-request | $0.10/query | 3-9 min | Task API, async polling | yes (with excerpts) | structured JSON output, auto-schema, declarative research |
| Parallel Pro-Fast (pro-fast) | per-request | $0.10/query | 30s-5 min | Task API, async polling | yes (with excerpts) | structured JSON output, auto-schema, lower latency variant |
| Parallel Ultra (ultra) | per-request | $0.30/query | 5-25 min | Task API, async polling | yes (with excerpts + confidence scores) | structured JSON output, auto-schema, deep research mode, highest accuracy |
| Parallel Ultra-Fast (ultra-fast) | per-request | $0.30/query | 1-10 min | Task API, async polling | yes (with excerpts + confidence scores) | structured JSON output, auto-schema, lower latency variant |
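Most providers in the table (OpenAI's background Responses API, Gemini's Interactions API, Parallel's Task API) follow the same submit-then-poll shape: start a long-running research task, then fetch its status until it reaches a terminal state. A minimal polling helper is sketched below; the commented OpenAI wiring is an assumption based on the table's model name and the openai SDK, not a verified call sequence.

```python
import time


def poll_until_done(fetch, is_done, interval_s=10.0, timeout_s=3600.0, sleep=time.sleep):
    """Poll an async research task until it reaches a terminal state.

    fetch   -- callable returning the latest task/response object
    is_done -- callable deciding whether that object is terminal
    """
    deadline = time.monotonic() + timeout_s
    while True:
        result = fetch()
        if is_done(result):
            return result
        if time.monotonic() > deadline:
            raise TimeoutError("research task did not finish within timeout")
        sleep(interval_s)


# Sketch of OpenAI wiring (assumes the openai SDK; exact flag and tool
# names may differ by SDK version):
#
#   client = OpenAI()
#   resp = client.responses.create(
#       model="o3-deep-research",
#       input="your research question",
#       background=True,
#       tools=[{"type": "web_search_preview"}],
#   )
#   final = poll_until_done(
#       fetch=lambda: client.responses.retrieve(resp.id),
#       is_done=lambda r: r.status in ("completed", "failed", "cancelled"),
#   )
```

Given the latency column above (up to 60 min for some models), pick `timeout_s` per model rather than reusing one global value.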
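Perplexity is the one synchronous entry in the table: since its endpoint is OpenAI-compatible, the request body is a standard Chat Completions payload and the answer comes back in the same HTTP call, no polling. A sketch (the base URL in the comment is taken from Perplexity's public API and should be checked against current docs):

```python
def build_sonar_request(prompt: str) -> dict:
    """Chat Completions payload for Perplexity's deep research model.

    Any OpenAI-compatible client pointed at Perplexity's API can send
    this body; the response is returned synchronously.
    """
    return {
        "model": "sonar-deep-research",
        "messages": [{"role": "user", "content": prompt}],
    }


# Usage sketch (assumes the openai SDK as the HTTP client):
#   client = OpenAI(api_key=..., base_url="https://api.perplexity.ai")
#   reply = client.chat.completions.create(**build_sonar_request("..."))
```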
## benchmarks

standardized evaluations across providers. note: most scores are self-reported by providers; our arena generates independent community rankings.
| provider | DeepSearchQA | BrowseComp | arena elo | source |
|---|---|---|---|---|
| Parallel Ultra2x (not in arena) | 72.6% | — | view rankings → | Parallel (self-reported) |
| Parallel Ultra | 68.5% | — | view rankings → | Parallel (self-reported) |
| Gemini Deep Research | 64.3% | 59.2% | view rankings → | Google (self-reported) |
| Parallel Pro | 62% | — | view rankings → | Parallel (self-reported) |
| OpenAI o3 | — | ~51% | view rankings → | OpenAI (self-reported) |
| Perplexity Sonar | 25% | — | view rankings → | Parallel (third-party eval) |
| OpenAI o4-mini | — | — | view rankings → | — |