The Research Agent lets you ask a question and watch an LLM search the web, read pages, and compile a research report — all streamed live to your screen. Every search query, every page fetched, and every reasoning step is visible in real time.
How It Works
- You ask a question — anything you'd normally research across multiple websites.
- The agent searches — it calls the link.sc Search API to find relevant results.
- The agent reads — it fetches and reads promising pages using the link.sc Fetch API.
- The agent reasons — it decides whether to search again, read more pages, or compile its answer.
- You get a report — a structured answer with source attribution, delivered as a stream.
Every step is streamed as a Server-Sent Event (SSE), so you can watch the agent think in real time.
Try It in the Playground
The fastest way to try the Research Agent is the Playground. Select the Research tab, enter a question, and click Run.
You'll see a live timeline showing:
- Search events — what the agent is searching for
- Fetch events — which pages it's reading
- Decision events — what it found and how it's reasoning
- The final report — with total steps, duration, and token usage
API Reference
Endpoint
This endpoint requires session authentication (you must be logged into the dashboard). It is not available via API key.
Request
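The request-body example did not survive here. As a minimal sketch, assuming the endpoint accepts a JSON body with the single query field documented in the table below, a client might build and validate the payload like this (the 500-character check mirrors the documented guardrail; the helper name is an illustration, not part of the API):

```python
import json

MAX_QUERY_LENGTH = 500  # matches the documented guardrail

def build_request_body(query: str) -> str:
    """Build the JSON request body for a research session.

    Raises ValueError if the query is empty or exceeds the documented
    limit, mirroring the server-side guardrail.
    """
    if not query.strip():
        raise ValueError("query must not be empty")
    if len(query) > MAX_QUERY_LENGTH:
        raise ValueError(f"query exceeds {MAX_QUERY_LENGTH} characters")
    return json.dumps({"query": query})

body = build_request_body("What are the trade-offs between SSE and WebSockets?")
```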
| Field | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | The research question. Max 500 characters. |
Response
The response is a text/event-stream (SSE). Each event is a JSON object:
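The event example is missing above. As a sketch, assuming each SSE frame is a data: line carrying a JSON object with at least a type field (the types are listed in the table below; the other fields shown are illustrative assumptions):

```python
import json

# A raw SSE frame as it might appear on the wire; the fields beyond
# "type" are assumptions for illustration.
raw_frame = 'data: {"type": "search", "query": "SSE vs WebSockets"}'

def parse_sse_event(frame: str) -> dict:
    """Parse a single SSE 'data:' line into a JSON event object."""
    if not frame.startswith("data:"):
        raise ValueError("not an SSE data line")
    return json.loads(frame[len("data:"):].strip())

event = parse_sse_event(raw_frame)
```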
Event Types
| Type | Description |
|---|---|
| thinking | The agent is reasoning about what to do next |
| search | The agent is executing a web search |
| fetch | The agent is fetching a web page |
| decision | The agent evaluated results and made a decision |
| result | Final research report with answer and metadata |
| error | Something went wrong during a step |
| limit | A guardrail was hit (step limit, timeout, or token budget) |
Guardrails
The Research Agent has built-in safety limits to prevent abuse and runaway costs:
| Guardrail | Limit |
|---|---|
| Authentication | Dashboard session required |
| Steps per session | 10 tool calls max |
| Sessions per hour | 5 per user |
| Query length | 500 characters max |
| Token budget | 32,000 tokens per session |
| Session timeout | 2 minutes |
| Content per page | 8,000 characters (truncated) |
| Content filtering | Harmful query patterns blocked |
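The per-session limits above can be sketched as simple pre-flight checks. The class and method names here are assumptions for illustration, not the actual implementation; only the numeric limits come from the table:

```python
from dataclasses import dataclass

# Limits from the guardrails table above.
MAX_STEPS = 10
MAX_QUERY_CHARS = 500
TOKEN_BUDGET = 32_000
MAX_PAGE_CHARS = 8_000

@dataclass
class SessionGuardrails:
    steps_used: int = 0
    tokens_used: int = 0

    def check_query(self, query: str) -> None:
        if len(query) > MAX_QUERY_CHARS:
            raise ValueError("query too long")

    def charge_step(self, tokens: int) -> None:
        """Record one tool call and its token cost, enforcing limits."""
        self.steps_used += 1
        self.tokens_used += tokens
        if self.steps_used > MAX_STEPS:
            raise RuntimeError("step limit reached")
        if self.tokens_used > TOKEN_BUDGET:
            raise RuntimeError("token budget exhausted")

    @staticmethod
    def truncate_page(content: str) -> str:
        # Page content is truncated to 8,000 characters per the table.
        return content[:MAX_PAGE_CHARS]

guard = SessionGuardrails()
guard.charge_step(tokens=1_200)
```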
All tool calls (searches and fetches) go through the standard link.sc API pipeline, so they count against your normal usage quota and benefit from the same proxy rotation, anti-bot bypass, and caching infrastructure.
Example: Consuming the SSE Stream
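The example code is missing above. A minimal Python sketch, assuming the stream is newline-delimited data: frames as described in the Response section. The endpoint URL shown in the comment is a placeholder, not the real path; the parser is factored out so it works on any line source:

```python
import json
from typing import Iterable, Iterator

def iter_events(lines: Iterable[str]) -> Iterator[dict]:
    """Yield JSON event objects from an SSE line stream."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

# With a real dashboard session you might stream the endpoint like this
# (URL is hypothetical — check the API Reference for the actual path):
#
#   import requests
#   resp = requests.post("https://link.sc/api/research",  # placeholder URL
#                        json={"query": "..."}, stream=True,
#                        cookies=session_cookies)
#   for event in iter_events(resp.iter_lines(decode_unicode=True)):
#       print(event["type"], event)

# Offline demonstration with synthetic frames:
frames = [
    'data: {"type": "thinking"}',
    'data: {"type": "search", "query": "example"}',
    'data: {"type": "result", "steps": 3}',
]
event_types = [e["type"] for e in iter_events(frames)]
```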
Architecture
Under the hood, the Research Agent uses:
- Bifrost AI Gateway — standardizes LLM calls across OpenAI, Anthropic, and other providers
- link.sc Search API — for real-time web search (web_search tool)
- link.sc Fetch API — for page content extraction (fetch_url tool)
- OpenAI function calling — the LLM decides which tools to use and when to stop
The agent runs a tool-calling loop: it sends your question to the LLM, the LLM requests tool calls (search or fetch), we execute them through link.sc's API, feed results back, and repeat until the LLM has enough information to answer.
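That loop can be sketched as follows. The tool names match the Architecture list above, but the LLM client, message shapes, and tool implementations are stubbed placeholders, not the real ones:

```python
def run_research_loop(question, llm, tools, max_steps=10):
    """Tool-calling loop: ask the LLM, execute any tool it requests,
    feed the result back, and stop when it returns a final answer."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):  # step guardrail, per the table above
        reply = llm(messages)
        if reply.get("tool_call") is None:
            return reply["content"]  # final report
        call = reply["tool_call"]
        result = tools[call["name"]](**call["args"])
        messages.append(
            {"role": "tool", "name": call["name"], "content": result}
        )
    return "step limit reached"

# Stubbed LLM: requests one search, then answers.
def fake_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": "final report", "tool_call": None}
    return {"tool_call": {"name": "web_search", "args": {"query": "q"}}}

tools = {
    "web_search": lambda query: f"results for {query}",
    "fetch_url": lambda url: "page content",
}
report = run_research_loop("example question", fake_llm, tools)
```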