diff --git a/README.md b/README.md
index fe649647..afc7c3cc 100644
--- a/README.md
+++ b/README.md
@@ -13,9 +13,9 @@ RamaLama uses container engines like Podman or Docker to pull the appropriate OC
 Running in containers eliminates the need for users to configure the host system for AI. After the initialization, RamaLama runs the AI Models within a container based on the OCI image.
 
-RamaLama then pulls AI Models from model registires. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.
+RamaLama then pulls AI Models from model registries. It can then start a chatbot or a REST API service with a single command. Models are treated similarly to how Podman and Docker treat container images.
 
-When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neather are installed RamaLama will attempt to run the model with software on the local system.
+When both Podman and Docker are installed, RamaLama defaults to Podman. The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither is installed, RamaLama will attempt to run the model with software on the local system.
 
-RamaLama supports multiple AI model registries types called transports.
+RamaLama supports multiple AI model registry types called transports.
 Supported transports:
 
@@ -40,7 +40,7 @@ To make it easier for users, RamaLama uses shortname files, which container
 alias names for fully specified AI Models allowing users to specify the shorter
 names when referring to models. RamaLama reads shortnames.conf files if they
-exist . These files contain a list of name value pairs for specification of
-the model. The following table specifies the order which Ramama reads the files
+exist. These files contain a list of name value pairs for specification of
+the model. The following table specifies the order in which RamaLama reads the files
 . Any duplicate names that exist override previously defined shortnames.
 
 | Shortnames type | Path |
diff --git a/docs/ramalama-rm.1.md b/docs/ramalama-rm.1.md
index faadb683..27bd437f 100644
--- a/docs/ramalama-rm.1.md
+++ b/docs/ramalama-rm.1.md
@@ -7,7 +7,7 @@ ramalama\-rm - remove AI Models from local storage
 **ramalama rm** [*options*] *model* [...]
 
 ## DESCRIPTION
-Specity one or more AI Models to be removed from local storage
+Specify one or more AI Models to be removed from local storage
 
 ## OPTIONS
 
diff --git a/docs/ramalama-serve.1.md b/docs/ramalama-serve.1.md
index ea120628..a924f6a8 100644
--- a/docs/ramalama-serve.1.md
+++ b/docs/ramalama-serve.1.md
@@ -29,7 +29,7 @@ Generate specified configuration format for running the AI Model as a service
 | Key | Description |
 | --------- | ---------------------------------------------------------------- |
 | quadlet | Podman supported container definition for running AI Model under systemd |
-| kube | Kubernetes YAML definition for running the AI MOdel as a service |
+| kube | Kubernetes YAML definition for running the AI Model as a service |
 
 #### **--help**, **-h**
 show this help message and exit
diff --git a/docs/ramalama.1.md b/docs/ramalama.1.md
index 0078b1a9..a8be2cf2 100644
--- a/docs/ramalama.1.md
+++ b/docs/ramalama.1.md
@@ -17,9 +17,9 @@ RamaLama uses container engines like Podman or Docker to pull the appropriate OC
 Running in containers eliminates the need for users to configure the host system for AI. After the initialization, RamaLama runs the AI Models within a container based on the OCI image.
 
-RamaLama then pulls AI Models from model registires. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.
+RamaLama then pulls AI Models from model registries. It can then start a chatbot or a REST API service with a single command. Models are treated similarly to how Podman and Docker treat container images.
 
-When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neather are installed RamaLama will attempt to run the model with software on the local system.
+When both Podman and Docker are installed, RamaLama defaults to Podman. The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither is installed, RamaLama will attempt to run the model with software on the local system.
 
-RamaLama supports multiple AI model registries types called transports.
+RamaLama supports multiple AI model registry types called transports.
 Supported transports:
 
@@ -48,7 +48,7 @@ To make it easier for users, RamaLama uses shortname files, which container
 alias names for fully specified AI Models allowing users to specify the shorter
 names when referring to models. RamaLama reads shortnames.conf files if they
-exist . These files contain a list of name value pairs for specification of
-the model. The following table specifies the order which Ramama reads the files
+exist. These files contain a list of name value pairs for specification of
+the model. The following table specifies the order in which RamaLama reads the files
 . Any duplicate names that exist override previously defined shortnames.
 
 | Shortnames type | Path |
@@ -81,7 +81,7 @@ show container runtime command without executing it (default: False)
 #### **--engine**
 run RamaLama using the specified container engine.
-use environment variable RAMALAMA_CONTAINER_ENGINE to modify the default behaviour.
+use environment variable RAMALAMA_CONTAINER_ENGINE to modify the default behavior.
 
 #### **--help**, **-h**
 show this help message and exit
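The engine-selection wording fixed above is easy to sanity-check from the shell. A minimal sketch, assuming both Podman and Docker are installed and that `granite` resolves to a model available locally (the model name is illustrative):

```console
# Podman is the default when both engines are present;
# the environment variable forces Docker for this invocation.
RAMALAMA_CONTAINER_ENGINE=docker ramalama run granite

# The --engine global option documented above does the same per run.
ramalama --engine docker run granite
```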
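For the shortnames paragraph, a sketch of what a user-level shortnames.conf might contain, following the name = value layout the docs describe. The `[shortnames]` section, the file path, and the alias/target here are assumptions; the actual read order comes from the table in the docs:

```console
# Hypothetical user-level shortnames file; duplicate names defined
# here would override entries read from earlier (system) paths.
cat <<'EOF' > "$HOME/.config/ramalama/shortnames.conf"
[shortnames]
"tiny" = "ollama://tinyllama"
EOF

# "tiny" can now stand in for the fully specified model name.
ramalama pull tiny
```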
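And the quadlet/kube table in ramalama-serve.1.md maps to invocations along these lines (again a sketch; the model name is a placeholder):

```console
# Emit a Quadlet container definition for running the model under systemd
ramalama serve --generate=quadlet tiny

# Emit a Kubernetes YAML definition for the same model
ramalama serve --generate=kube tiny
```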