diff --git a/README.md b/README.md
index 6d13808d9..17c401d83 100644
--- a/README.md
+++ b/README.md
@@ -60,7 +60,7 @@ Download a llama model to try running the llama C++ integration. You can find a
 Double-click on Nitro to run it. After downloading your model, make sure it's saved to a specific path. Then, make an API call to load your model into Nitro.
-***OPTIONAL: You can run Nitro on a different port like 5000 instead of 3928 by running it manually in terminal
+***OPTIONAL***: You can run Nitro on a different port, such as 5000 instead of 3928, by running it manually in the terminal:
 ```zsh
 ./nitro 1 127.0.0.1 5000 ([thread_num] [host] [port])
 ```