Write and execute Python scripts with the help of LLM
See *Run a prompt to generate and execute jq programs using llm-jq* for background on this project.
Install this plugin in the same environment as LLM.
```bash
llm install llm-py
```
Pipe JSON directly into `llm py` and describe the result you would like:
```bash
curl -s https://api.github.com/repos/simonw/datasette/issues | \
  llm py 'count by user login, top 3'
```
Output:
```json
[
  {
    "login": "simonw",
    "count": 11
  },
  {
    "login": "king7532",
    "count": 5
  },
  {
    "login": "dependabot[bot]",
    "count": 2
  }
]
```
The generated Python script, printed to standard error, will look something like this (exact output varies by model):
```python
import json
import sys
from collections import Counter

issues = json.load(sys.stdin)
counts = Counter(issue["user"]["login"] for issue in issues)
print(json.dumps(
    [{"login": login, "count": count} for login, count in counts.most_common(3)],
    indent=2,
))
```
The JSON result is printed to standard output; the Python script is printed to standard error.
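That separation means you can pipe the JSON result onward while still seeing (or redirecting) the script. As a minimal sketch of the convention — not the plugin's actual code, and using hypothetical values:

```python
import json
import sys

# Hypothetical result and generated script text, for illustration only
result = [{"login": "simonw", "count": 11}]
script = "# ...generated Python source..."

json.dump(result, sys.stdout, indent=2)  # the JSON result goes to standard output
sys.stdout.write("\n")
print(script, file=sys.stderr)           # the script itself goes to standard error
```

With this layout, `llm py '...' 2>script.py` would keep the JSON on stdout and save the script to a file.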
Options:

- `-s/--silent`: Do not print the Python script to standard error
- `-o/--output`: Output just the Python script, do not run it
- `-v/--verbose`: Show the prompt sent to the model and the response
- `-m/--model X`: Use a model other than the configured LLM default model
- `-l/--length X`: Use a length of the input other than 1024 as the example
By default, the first 1024 bytes of JSON will be sent to the model as an example along with your description. You can use `-l` to send more or less example data.
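The truncation behavior is easy to picture: only a byte prefix of the input is included in the prompt. A sketch, assuming a simple prefix slice (the actual implementation may differ):

```python
import json

# Hypothetical sample data, deliberately larger than the default example size
issues = [{"user": {"login": f"user{i}"}, "title": "x" * 40} for i in range(100)]
raw = json.dumps(issues)

length = 1024  # the default; -l/--length overrides this
example = raw[:length]  # only this prefix is sent to the model with your description

print(len(raw) > length, len(example))  # True 1024
```

A larger `-l` value gives the model more context about your JSON's shape at the cost of a bigger prompt.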
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-py
python -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
python -m pytest
```