Commit
Merge pull request #348 from rhatdan/codespell
Fix spelling mistakes in markdown
rhatdan authored Oct 21, 2024
2 parents 9dfd781 + cabf518 commit a03bf61
Showing 4 changed files with 9 additions and 9 deletions.
README.md: 3 additions & 3 deletions
@@ -13,9 +13,9 @@ RamaLama uses container engines like Podman or Docker to pull the appropriate OC

Running in containers eliminates the need for users to configure the host system for AI. After the initialization, RamaLama runs the AI Models within a container based on the OCI image.

-RamaLama then pulls AI Models from model registires. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.
+RamaLama then pulls AI Models from model registries. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.

-When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neather are installed RamaLama will attempt to run the model with software on the local system.
+When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither are installed RamaLama will attempt to run the model with software on the local system.

RamaLama supports multiple AI model registries types called transports.
Supported transports:
@@ -40,7 +40,7 @@ To make it easier for users, RamaLama uses shortname files, which container
alias names for fully specified AI Models allowing users to specify the shorter
names when referring to models. RamaLama reads shortnames.conf files if they
exist . These files contain a list of name value pairs for specification of
-the model. The following table specifies the order which Ramama reads the files
+the model. The following table specifies the order which RamaLama reads the files
. Any duplicate names that exist override previously defined shortnames.

| Shortnames type | Path |
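The hunk above ends just as the shortnames table begins, so only its header row is visible here. For orientation: a shortnames file is a set of name/value pairs mapping a short alias to a fully specified model reference. A minimal sketch, assuming a TOML-style layout; the aliases and transport-prefixed references below are illustrative and not confirmed by this diff:

```toml
# Hypothetical shortnames.conf sketch: each entry aliases a short name
# to a fully specified model reference. All names here are examples.
[shortnames]
"tiny" = "ollama://tinyllama"
"granite" = "huggingface://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
```

Since duplicate names override previously defined shortnames, an entry in a later-read file with the same alias would replace one from an earlier-read file.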
docs/ramalama-rm.1.md: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ramalama\-rm - remove AI Models from local storage
**ramalama rm** [*options*] *model* [...]

## DESCRIPTION
-Specity one or more AI Models to be removed from local storage
+Specify one or more AI Models to be removed from local storage

## OPTIONS

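Given the SYNOPSIS above (**ramalama rm** [*options*] *model* [...]), removal takes one or more model names. A usage sketch; the model names are illustrative:

```bash
# Remove a single model from local storage (alias is an example).
ramalama rm tiny

# Remove several models in one call; the transport-prefixed names are examples.
ramalama rm ollama://tinyllama huggingface://example/model.gguf
```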
docs/ramalama-serve.1.md: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Generate specified configuration format for running the AI Model as a service
| Key | Description |
| --------- | ---------------------------------------------------------------- |
| quadlet | Podman supported container definition for running AI Model under systemd |
-| kube | Kubernetes YAML definition for running the AI MOdel as a service |
+| kube | Kubernetes YAML definition for running the AI Model as a service |

#### **--help**, **-h**
show this help message and exit
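The key table above pairs with the generate option this hunk edits. Assuming the flag is spelled `--generate` (the diff shows only its description and keys) and using an illustrative model alias, invocation might look like:

```bash
# Emit a Podman quadlet definition for running the model under systemd.
ramalama serve --generate quadlet tiny

# Emit Kubernetes YAML for serving the same (example) model.
ramalama serve --generate kube tiny
```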
docs/ramalama.1.md: 4 additions & 4 deletions
@@ -17,9 +17,9 @@ RamaLama uses container engines like Podman or Docker to pull the appropriate OC

Running in containers eliminates the need for users to configure the host system for AI. After the initialization, RamaLama runs the AI Models within a container based on the OCI image.

-RamaLama then pulls AI Models from model registires. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.
+RamaLama then pulls AI Models from model registries. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.

-When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neather are installed RamaLama will attempt to run the model with software on the local system.
+When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither are installed RamaLama will attempt to run the model with software on the local system.

RamaLama supports multiple AI model registries types called transports. Supported transports:

@@ -48,7 +48,7 @@ To make it easier for users, RamaLama uses shortname files, which container
alias names for fully specified AI Models allowing users to specify the shorter
names when referring to models. RamaLama reads shortnames.conf files if they
exist . These files contain a list of name value pairs for specification of
-the model. The following table specifies the order which Ramama reads the files
+the model. The following table specifies the order which RamaLama reads the files
. Any duplicate names that exist override previously defined shortnames.

| Shortnames type | Path |
@@ -81,7 +81,7 @@ show container runtime command without executing it (default: False)

#### **--engine**
run RamaLama using the specified container engine.
-use environment variable RAMALAMA_CONTAINER_ENGINE to modify the default behaviour.
+use environment variable RAMALAMA_CONTAINER_ENGINE to modify the default behavior.

#### **--help**, **-h**
show this help message and exit
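Both engine-selection mechanisms touched by this commit, the `RAMALAMA_CONTAINER_ENGINE` environment variable and the `--engine` option, can be exercised from the shell. A sketch, assuming a `run` subcommand and an illustrative model alias (neither is shown in this diff):

```bash
# Per-invocation override via the environment variable named in the docs.
RAMALAMA_CONTAINER_ENGINE=docker ramalama run tiny

# Equivalent override via the --engine option documented above.
ramalama --engine podman run tiny
```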
