From 0517b5b01dca8149ad4075d7c7cde27006c33dc1 Mon Sep 17 00:00:00 2001
From: Surya Prakash Pathak
Date: Thu, 21 Mar 2024 09:55:38 -0700
Subject: [PATCH] reverted Readme.md

---
 code-generation/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/code-generation/README.md b/code-generation/README.md
index 5d4171289..26f9361a3 100644
--- a/code-generation/README.md
+++ b/code-generation/README.md
@@ -83,4 +83,4 @@ podman run --rm -it -p 8501:8501 -e MODEL_SERVICE_ENDPOINT=http://10.88.0.1:8001
 Everything should now be up an running with the chat application available at [`http://localhost:8501`](http://localhost:8501). By using this recipe and getting this starting point established, users should now have an easier time customizing and building their own LLM enabled code generation applications.
 
-_Note: Future recipes will demonstrate integration between locally hosted LLM's and developer productivity tools like VSCode.
+_Note: Future recipes will demonstrate integration between locally hosted LLM's and developer productivity tools like VSCode._