Merge branch 'main' into masked_input
darrenburns authored Sep 3, 2024
2 parents a46e2df + 61530d3 commit e300d02
Showing 26 changed files with 2,029 additions and 599 deletions.
18 changes: 17 additions & 1 deletion CHANGELOG.md
@@ -9,12 +9,27 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

### Added

- Added `DOMNode.check_consume_key` https://github.com/Textualize/textual/pull/4940
- Added `MaskedInput` widget https://github.com/Textualize/textual/pull/4783

## [0.79.1] - 2024-08-31

### Fixed

- Fixed broken updates when non active screen changes https://github.com/Textualize/textual/pull/4957

## [0.79.0] - 2024-08-30

### Added

- Added `DOMNode.check_consume_key` https://github.com/Textualize/textual/pull/4940
- Added `App.ESCAPE_TO_MINIMIZE`, `App.screen_to_minimize`, and `Screen.ESCAPE_TO_MINIMIZE` https://github.com/Textualize/textual/pull/4951
- Added `DOMNode.query_exactly_one` https://github.com/Textualize/textual/pull/4950
- Added `SelectorSet.is_simple` https://github.com/Textualize/textual/pull/4950

### Changed

- KeyPanel will show multiple keys if bound to the same action https://github.com/Textualize/textual/pull/4940
- Breaking change: `DOMNode.query_one` will not `raise TooManyMatches` https://github.com/Textualize/textual/pull/4950

## [0.78.0] - 2024-08-27

@@ -2329,6 +2344,7 @@ https://textual.textualize.io/blog/2022/11/08/version-040/#version-040
- New handler system for messages that doesn't require inheritance
- Improved traceback handling

[0.79.0]: https://github.com/Textualize/textual/compare/v0.78.0...v0.79.0
[0.78.0]: https://github.com/Textualize/textual/compare/v0.77.0...v0.78.0
[0.77.0]: https://github.com/Textualize/textual/compare/v0.76.0...v0.77.0
[0.76.0]: https://github.com/Textualize/textual/compare/v0.75.1...v0.76.0
243 changes: 243 additions & 0 deletions docs/blog/posts/anatomy-of-a-textual-user-interface.md
@@ -0,0 +1,243 @@
---
draft: false
date: 2024-09-15
categories:
- DevLog
authors:
- willmcgugan
---

# Anatomy of a Textual User Interface


I recently wrote a [TUI](https://en.wikipedia.org/wiki/Text-based_user_interface) to chat to an AI agent in the terminal.
I'm not the first to do this (shout out to [Elia](https://github.com/darrenburns/elia) and [Paita](https://github.com/villekr/paita)), but I *may* be the first to have it reply as if it were the AI from the Aliens movies?

Here's a video of it in action:



<iframe width="100%" style="aspect-ratio:1512 / 982" src="https://www.youtube.com/embed/hr5JvQS4d_w" title="Mother AI" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Now let's dissect the code like Bishop dissects a facehugger.

<!-- more -->

## All right, sweethearts, what are you waiting for? Breakfast in bed?

At the top of the file we have some boilerplate:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "llm",
#     "textual",
# ]
# ///
from textual import on, work
from textual.app import App, ComposeResult
from textual.widgets import Header, Input, Footer, Markdown
from textual.containers import VerticalScroll
import llm

SYSTEM = """Formulate all responses as if you were the sentient AI named Mother from the Aliens movies."""
```

The text in the comment is a relatively new addition to the Python ecosystem: inline script metadata, standardized in [PEP 723](https://peps.python.org/pep-0723/).
It allows you to specify dependencies inline so that tools can set up an environment automatically.
The only tool I know of that uses it is [uv](https://docs.astral.sh/uv/guides/scripts/#running-scripts).

After this comment we have a bunch of imports: [textual](https://github.com/textualize/textual) for the UI, and [llm](https://llm.datasette.io/en/stable/) to talk to ChatGPT (also supports other LLMs).

Finally, we define `SYSTEM`, which is the *system prompt* for the LLM.

## Look, those two specimens are worth millions to the bio-weapons division.

Next up we have the following:

```python
class Prompt(Markdown):
    pass


class Response(Markdown):
    BORDER_TITLE = "Mother"
```

These two classes define the widgets which will display the text the user enters and the responses from the LLM.
They both extend the builtin [Markdown](https://textual.textualize.io/widgets/markdown/) widget, since LLMs like to talk in that format.

## Well, somebody's gonna have to go out there. Take a portable terminal, go out there and patch in manually.

Following on from the widgets we have the following:

```python
class MotherApp(App):
    AUTO_FOCUS = "Input"

    CSS = """
    Prompt {
        background: $primary 10%;
        color: $text;
        margin: 1;
        margin-right: 8;
        padding: 1 2 0 2;
    }
    Response {
        border: wide $success;
        background: $success 10%;
        color: $text;
        margin: 1;
        margin-left: 8;
        padding: 1 2 0 2;
    }
    """
```

This defines an app, which is the top-level object for any Textual app.

The `AUTO_FOCUS` string is a classvar which causes a particular widget to receive input focus when the app starts. In this case it is the `Input` widget, which we will define later.

The classvar is followed by a string containing CSS.
Technically, TCSS or *Textual Cascading Style Sheets*, a variant of CSS for terminal interfaces.

This isn't a tutorial, so I'm not going to go into details, but we're essentially setting properties on widgets which define how they look.
Here I styled the prompt and response widgets to have a different color, and tried to give the response a retro tech look with a green background and border.

We could express these styles in code.
Something like this:

```python
self.styles.color = "red"
self.styles.margin = 8
```

Which is fine, but CSS shines when the UI gets more complex.

## Look, man. I only need to know one thing: where they are.

After the app constants, we have a method called `compose`:

```python
def compose(self) -> ComposeResult:
    yield Header()
    with VerticalScroll(id="chat-view"):
        yield Response("INTERFACE 2037 READY FOR INQUIRY")
    yield Input(placeholder="How can I help you?")
    yield Footer()
```

This method adds the initial widgets to the UI.

`Header` and `Footer` are builtin widgets.

Sandwiched between them is a `VerticalScroll` *container* widget, which automatically adds a scrollbar (if required). It is pre-populated with a single `Response` widget to show a welcome message (the `with` syntax places a widget within a parent widget). Below that is an `Input` widget where we can enter text for the LLM.

This is all we need to define the *layout* of the TUI.
In Textual the layout is defined with styles (in the same way as color and margin).
Virtually any layout is possible, and you never have to do any math to calculate sizes of widgets&mdash;it is all done declaratively.

We could add a little CSS to tweak the layout, but the defaults work well here.
The header and footer are *docked* to an appropriate edge.
The `VerticalScroll` widget is styled to consume any available space, leaving room for widgets with a defined height (like our `Input`).

If you resize the terminal it will keep those relative proportions.
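
For illustration, here's a hypothetical sketch of such a tweak, written as an alternative `CSS` string (the selector and values are illustrative, not part of the original app):

```python
CSS = """
#chat-view {
    height: 1fr;     /* fill whatever space the other widgets leave */
}
Input {
    dock: bottom;    /* pin the input to the bottom edge */
}
"""
```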

## Look into my eye.

The next method is an *event handler*.


```python
def on_mount(self) -> None:
    self.model = llm.get_model("gpt-4o")
```

This method is called when the app receives a Mount event, which is one of the first events sent and is typically used for any setup operations.

It gets a `Model` object for our LLM of choice, which we will use later.

Note that the [llm](https://llm.datasette.io/en/stable/) library supports a [large number of models](https://llm.datasette.io/en/stable/openai-models.html), so feel free to replace the string with the model of your choice.
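
For example, swapping in a different model is a one-line change. A hypothetical variation (assuming the necessary API key or plugin is configured):

```python
def on_mount(self) -> None:
    # Hypothetical: use a smaller, cheaper model instead of gpt-4o
    self.model = llm.get_model("gpt-4o-mini")
```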

## We're in the pipe, five by five.

The next method is also a message handler:

```python
@on(Input.Submitted)
async def on_input(self, event: Input.Submitted) -> None:
    chat_view = self.query_one("#chat-view")
    event.input.clear()
    await chat_view.mount(Prompt(event.value))
    await chat_view.mount(response := Response())
    response.anchor()
    self.send_prompt(event.value, response)
```

The decorator tells Textual to handle the `Input.Submitted` event, which is sent when the user hits return in the Input.

!!! info "More on event handlers"

    There are two ways to receive events in Textual: a naming convention or the decorator.
    They aren't on the base class because the app and widgets can receive arbitrary events.
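
    As a sketch, the naming-convention equivalent of the decorated handler above would be a method named after the event (illustrative only; this post uses the decorator):

    ```python
    async def on_input_submitted(self, event: Input.Submitted) -> None:
        ...  # same body as the decorated handler
    ```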

When that happens, this method clears the input and adds the prompt text to the `VerticalScroll`.
It also adds a `Response` widget to contain the LLM's response, and *anchors* it.
Anchoring a widget will keep it at the bottom of a scrollable view, which is just what we need for a chat interface.

Finally in that method we call `send_prompt`.

## We're on an express elevator to hell, going down!

Here is `send_prompt`:

```python
@work(thread=True)
def send_prompt(self, prompt: str, response: Response) -> None:
    response_content = ""
    llm_response = self.model.prompt(prompt, system=SYSTEM)
    for chunk in llm_response:
        response_content += chunk
        self.call_from_thread(response.update, response_content)
```

You'll notice that it is decorated with `@work`, which turns this method into a *worker*.
In this case, a *threaded* worker. Workers are a layer over async and threads, which takes some of the pain out of concurrency.

This worker is responsible for sending the prompt, and then reading the response piece by piece.
For every chunk it calls the Markdown widget's `update` method, which replaces its content with new Markdown, to give that funky streaming text effect.
Because the worker runs in a thread, the call goes through `call_from_thread`, which hands the update over to the UI thread safely.


## Game over man, game over!

The last few lines create an app instance and run it:

```python
if __name__ == "__main__":
    app = MotherApp()
    app.run()
```

You may need to have your [API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key) set in an environment variable (typically `OPENAI_API_KEY` for OpenAI models).
Or if you prefer, you could set it in the `on_mount` method with the following:

```python
self.model.key = "... key here ..."
```

## Not bad, for a human.

Here's the [code for the Mother AI](https://gist.github.com/willmcgugan/648a537c9d47dafa59cb8ece281d8c2c).

Run the following in your shell of choice to launch mother.py (assumes you have [uv](https://docs.astral.sh/uv/) installed):

```bash
uv run mother.py
```

## You know, we manufacture those, by the way.

Join our [Discord server](https://discord.gg/Enf6Z3qhVr) to discuss more 80s movies (or possibly TUIs).
39 changes: 39 additions & 0 deletions docs/guide/devtools.md
@@ -59,6 +59,45 @@ For instance, the following will run the `textual colors` command:
textual run -c textual colors
```

## Serve

The devtools can also serve your application in a browser, effectively turning your terminal app into a web application!

The `serve` sub-command is similar to `run`. Here's how you can serve an app launched from a Python file:

```
textual serve my_app.py
```

You can also serve a Textual app launched via a command. Here's an example:

```
textual serve "textual keys"
```

The syntax for launching an app in a module is slightly different from `run`.
You need to specify the full command, including `python`.
Here's how you would run the Textual demo:

```
textual serve "python -m textual"
```

Textual's builtin web server is quite powerful.
You can serve multiple instances of your application at once!

!!! tip

    Textual serve is also useful when developing your app.
    If you make changes to your code, simply refresh the browser to update.

There are some additional switches for serving Textual apps. Run the following for a list:

```
textual serve -h
```

## Live editing

If you combine the `run` command with the `--dev` switch your app will run in *development mode*.
1 change: 0 additions & 1 deletion docs/guide/queries.md
@@ -21,7 +21,6 @@ send_button = self.query_one("#send")

This will retrieve a widget with an ID of `send`, if there is exactly one.
If there are no matching widgets, Textual will raise a [NoMatches][textual.css.query.NoMatches] exception.
If there is more than one match, Textual will raise a [TooManyMatches][textual.css.query.TooManyMatches] exception.

You can also add a second parameter for the expected type, which will ensure that you get the type you are expecting.
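
For example, a minimal sketch (assuming `Button` is imported from `textual.widgets`):

```python
from textual.widgets import Button

# The second argument narrows the return type and verifies the match at runtime
send_button = self.query_one("#send", Button)
```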

6 changes: 6 additions & 0 deletions docs/tutorial.md
@@ -31,6 +31,12 @@ Here's what the finished app will look like:
```{.textual path="docs/examples/tutorial/stopwatch.py" title="stopwatch.py" press="tab,enter,tab,enter,tab,enter,tab,enter"}
```

!!! info

    Did you notice the `^p palette` at the bottom right-hand corner?
    This is the [Command Palette](./guide/command_palette.md).
    You can think of it as a dedicated command prompt for your app.

### Try it out!

The following is *not* a screenshot, but a fully interactive Textual app running in your browser.
2 changes: 1 addition & 1 deletion examples/dictionary.py
@@ -18,7 +18,7 @@ class DictionaryApp(App):
    CSS_PATH = "dictionary.tcss"

    def compose(self) -> ComposeResult:
-        yield Input(placeholder="Search for a word")
+        yield Input(placeholder="Search for a word", id="dictionary-search")
        with VerticalScroll(id="results-container"):
            yield Markdown(id="results")
2 changes: 1 addition & 1 deletion examples/dictionary.tcss
@@ -2,7 +2,7 @@ Screen {
    background: $panel;
}

-Input {
+Input#dictionary-search {
    dock: top;
    margin: 1 0;
}