> [!info] Course code
> Use the companion repository for runnable notebooks, figures, and implementation references for this lecture:
> - [apps/opentui_ai_sdk_chat/README.md](https://github.com/Montekkundan/llm/blob/main/apps/opentui_ai_sdk_chat/README.md)
> - [apps/opentui_ai_sdk_chat/src/index.tsx](https://github.com/Montekkundan/llm/blob/main/apps/opentui_ai_sdk_chat/src/index.tsx)
> - [picollm/accelerated/chat/web.py](https://github.com/Montekkundan/llm/blob/main/picollm/accelerated/chat/web.py)
> - [apps/vercel_ai_sdk_chat/README.md](https://github.com/Montekkundan/llm/blob/main/apps/vercel_ai_sdk_chat/README.md)
## What This Concept Is
Once you have a backend that can serve chat responses, the next question is not only "can I use it in a browser?" but also "can I use it in a fast local terminal interface?" This note explains that terminal client path.
It is a good reminder that the same model backend can support very different user interfaces.
## Foundation Terms You Need First
The **terminal UI** is the text-based client the user interacts with locally. The **transport** is the request layer that connects that client to the [[Glossary#Backend|backend]]. The **backend** is the [[Glossary#OpenAI-compatible API|OpenAI-compatible]] service actually running the model. The **client-server split** is the separation between interface code and model-serving code.
So the main idea in this note is not a new model architecture. It is a new surface over the same backend.
```mermaid
flowchart TD
A["Terminal UI"] --> B["OpenTUI client"]
B --> C["AI SDK transport"]
C --> D["OpenAI-compatible picoLLM backend"]
D --> E["picoLLM checkpoint and engine"]
```
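To make the transport layer concrete, here is a minimal sketch that points the AI SDK's OpenAI-compatible provider at a local backend and streams tokens straight to the terminal. The port, provider name, and model id are assumptions for illustration, not the app's exact configuration:

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { streamText } from 'ai';

// Provider pointed at the local picoLLM server. The base URL, port, and
// model id 'sft' are assumptions for illustration, not the app's real config.
const picollm = createOpenAICompatible({
  name: 'picollm',
  baseURL: 'http://localhost:8000/v1',
});

const result = streamText({
  model: picollm('sft'),
  messages: [{ role: 'user', content: 'Hello from the terminal' }],
});

// The same token stream a browser client would receive, written to stdout.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```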
## Course Framing
The terminal app is not a second model path. It is the same picoLLM backend viewed through a different client surface.
That means:
- `picollm` remains the primary implementation path
- OpenTUI is a product client example
- the web app and the terminal app are siblings, not competitors
- `nanochat`, Codex CLI, Claude Code, and similar tools are external orientation points, not the code you need to reproduce for this course
## Why This Matters
Modern terminal AI tools all share the same layered structure:
- a user interface
- a model client
- a model backend
- often a tool-execution layer on top
You should not confuse the terminal surface with the model itself.
## What this demo covers
This course demo intentionally focuses on the UI and model boundary first:
- OpenTUI renders the terminal UI
- AI SDK [[Glossary#Streaming|streams]] model output (see the client sketch after this list)
- `picollm` exposes the OpenAI-compatible backend
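For a sense of how the client surface might wire these pieces together, here is a hedged sketch using the AI SDK's `useChat` hook with a `DefaultChatTransport`. The route URL is an assumption, and the component is hypothetical; the real view lives in `apps/opentui_ai_sdk_chat/src/index.tsx` and will differ in detail:

```tsx
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

// Hypothetical chat view; the real OpenTUI component renders with
// OpenTUI primitives rather than returning null.
export function Chat() {
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      // Assumed route: a thin endpoint that forwards to the picoLLM backend.
      api: 'http://localhost:3000/api/chat',
    }),
  });
  // Render `messages` in the terminal and call `sendMessage({ text })`
  // when the user submits input.
  return null;
}
```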
It is also an easy comparison point for tools like:
- Claude Code
- Codex CLI
- Gemini CLI
- OpenCode
But this demo is still a terminal chat client, not yet a full coding agent.
## How It Connects Back
By the end of this note you should be able to say:
- the model architecture was explained in the concept notes
- the training pipeline was explained in [[Real Chatbot Workflow]]
- the API contract was explained in [[FastAPI Chat App]]
- the terminal app is simply another consumer of that same backend contract
That is the point of the comparison: the client changes, but the backend contract does not.
## Recommended sequence
Use this order:
1. show that the same `picollm` backend can power multiple clients
2. compare the browser app and terminal app
3. explain that the UI layer changed, not the model contract
4. explain that coding agents add tool execution and workflow state on top of this base
That is the abstraction ladder to keep.
## Local run flow
Run the backend first.
This lecture uses the same accelerated OpenAI-compatible backend as the web app. That is intentional: the key point here is the client/server contract, not a second serving stack.
Run the accelerated chat server:
```bash
uv run python -m picollm.accelerated.chat.web \
--source sft \
--device-type cuda
```
That keeps both the browser app and the terminal app attached to the same product backend.
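Before attaching either client, it can help to smoke-test the backend directly. A minimal sketch, assuming the server follows the standard OpenAI chat-completions path; the port and model id here are hypothetical, so match them to what the server actually reports on startup:

```ts
// One-off smoke test: confirm the backend speaks the OpenAI-compatible
// contract before attaching any UI. Port and model id are assumptions.
const res = await fetch('http://localhost:8000/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'sft',
    messages: [{ role: 'user', content: 'ping' }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```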
Then run the OpenTUI app:
```bash
cd apps/opentui_ai_sdk_chat   # from the repository root
bun install
cp .env.example .env
bun run dev
```
## What to learn here
By the end of this lecture, you should be able to explain:
- why terminal AI tools still depend on a backend API contract
- why a terminal UI is only another client surface
- how streaming output works in a terminal just like it does in the browser
- why coding-agent products require more than just chat rendering
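To make the streaming point concrete, here is a hedged sketch that consumes the same server-sent-event stream a browser client would, assuming the standard OpenAI streaming format, a hypothetical local port, and that each SSE event arrives in a single chunk (a production client buffers across chunk boundaries):

```ts
// Read the raw server-sent-event stream: the same bytes a browser UI
// receives, printed to the terminal instead. Port and model id are
// assumptions; the naive line split assumes whole events per chunk.
const res = await fetch('http://localhost:8000/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'sft',
    stream: true,
    messages: [{ role: 'user', content: 'Explain the client-server split.' }],
  }),
});

const decoder = new TextDecoder();
for await (const chunk of res.body!) {
  for (const line of decoder.decode(chunk, { stream: true }).split('\n')) {
    if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
    const delta = JSON.parse(line.slice(6)).choices[0]?.delta?.content;
    if (delta) process.stdout.write(delta);
  }
}
```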
## Relationship to the rest of the course
Teach this after:
- [[Real Chatbot Workflow]]
- [[Vercel AI SDK Chat App]]
That way the model and serving path are already clear before the terminal variation appears.
<div style="display:flex; gap:1rem; margin:1.5rem 0; flex-wrap:wrap;">
<div style="flex:1; min-width:220px; border:1px solid var(--background-modifier-border); border-radius:12px; padding:1rem; background:var(--background-secondary);">
<div style="font-size:0.85em; color:var(--text-muted); margin-bottom:0.35rem;">Previous</div>
<div><a class="internal-link" data-href="Deployment" href="Deployment">Deployment</a></div>
</div>
<div style="flex:1; min-width:220px; border:1px solid var(--background-modifier-border); border-radius:12px; padding:1rem; background:var(--background-secondary);">
<div style="font-size:0.85em; color:var(--text-muted); margin-bottom:0.35rem;">Next</div>
<div><a class="internal-link" data-href="Real Chatbot Workflow" href="Real%20Chatbot%20Workflow">Real Chatbot Workflow</a></div>
</div>
</div>
## Further reading
- SST, "OpenTUI," 2025. https://github.com/sst/opentui
- Bun, "Documentation," 2025. https://bun.com/docs
- Vercel, "AI SDK UI transport," 2025. https://ai-sdk.dev/docs/ai-sdk-ui/transport
- Vercel, "OpenAI-compatible providers," 2025. https://ai-sdk.dev/providers/openai-compatible-providers