diff --git a/README.md b/README.md
new file mode 100644
index 0000000..707de52
--- /dev/null
+++ b/README.md
@@ -0,0 +1,48 @@
+# The SHALB Helm Library
+
+This repository collects useful Helm charts designed at [SHALB](https://shalb.com). They are also used in our [cluster.dev](https://cluster.dev) product for building cloud installers and creating infrastructure templates with Terraform modules and Helm charts.
+
+## Before you begin
+
+### Prerequisites
+
+- Kubernetes 1.23+
+- Helm 3.8.0+
+
+## Set up a Kubernetes Cluster
+
+The quickest way to set up a Kubernetes cluster in different clouds for installing SHALB charts is by using [cluster.dev](https://docs.cluster.dev).
+
+## Bootstrapping Kubernetes in Different Clouds
+
+Create fully featured Kubernetes clusters with required addons:
+
+| Cloud Provider | Kubernetes Type | Sample Link | Technologies |
+|----------------|-----------------|-------------------------|------------------|
+| AWS | EKS | [**AWS-EKS**](https://docs.cluster.dev/examples-aws-eks/) | |
+| AWS | K3s | [**AWS-K3s**](https://docs.cluster.dev/examples-aws-k3s/) | |
+| GCP | GKE | [**GCP-GKE**](https://docs.cluster.dev/examples-gcp-gke/) | |
+| AWS | K3s + Prometheus| [**AWS-K3s Prometheus**](https://docs.cluster.dev/examples-aws-k3s-prometheus/) | |
+| DO | K8s | [**DO-K8s**](https://docs.cluster.dev/examples-do-k8s/) | |
+
+## Using Helm
+
+To install Helm, refer to the [Helm install guide](https://github.com/helm/helm#install) and ensure that the `helm` binary is in your shell's PATH.
+Once you have installed the Helm client, you can deploy a SHALB Helm Chart into a Kubernetes cluster.
+Please refer to the [Quick Start guide](https://helm.sh/docs/intro/quickstart/).
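+
+For example, with this repository cloned locally, a chart can be installed from its directory. The release name below is illustrative; the model values are the sample ones from the `huggingface-model` chart:
+
+```shell
+# Install the huggingface-model chart from a local checkout of this repository
+helm install my-model ./huggingface-model \
+  --set model.organization="HuggingFaceH4" \
+  --set model.name="zephyr-7b-beta"
+```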
+
+## Using Helm with cluster.dev
+
+Example of how to deploy an application to Kubernetes with Helm and Terraform:
+
+| Description | Sample Link | Technologies |
+|-----------------------------|---------------------------------------|------------------|
+| Kubernetes Terraform Helm | [**Quick Start with Kubernetes**](https://docs.cluster.dev/get-started-cdev-helm/) | |
+
+## License
+
+Copyright © 2023 SHALB.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
\ No newline at end of file
diff --git a/huggingface-model/Chart.yaml b/huggingface-model/Chart.yaml
index c6c740a..ceda344 100644
--- a/huggingface-model/Chart.yaml
+++ b/huggingface-model/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v2
name: huggingface-model
-description: Helm chart for deploy Hugging Face to kubernetes cluster. See [Hugging Face models](https://huggingface.co/models)
+description: Helm chart for deploying Hugging Face models and chat-ui to a Kubernetes cluster. See [Hugging Face models](https://huggingface.co/models)
type: application
diff --git a/huggingface-model/README.md b/huggingface-model/README.md
index 85aae8a..20d4f73 100644
--- a/huggingface-model/README.md
+++ b/huggingface-model/README.md
@@ -1,6 +1,11 @@
-# Helm chart for deploy Hugging Face to kubernetes cluster
+# Helm chart for deploying Hugging Face models to a Kubernetes cluster
-See [Hugging Face models](https://huggingface.co/models)
+The chart installs the [Text Generation Inference](https://github.com/huggingface/text-generation-inference) container and serves [text-generation LLM models](https://huggingface.co/models?pipeline_tag=text-generation).
+It is also possible to inject a different image to serve inference with another approach.
+
+An init container downloads the model to PVC storage, either directly from Hugging Face or from S3-compatible (or other) storage.
+
+The chart also deploys the [Hugging Face chat-ui](https://github.com/huggingface/chat-ui) image and configures it to work with the deployed model, so you can chat with it in a browser.
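+
+For example, a minimal values override for the zephyr-7b-beta model (the bucket URL is the sample one from `values.yaml`) could look like:
+
+```yaml
+model:
+  organization: "HuggingFaceH4"
+  name: "zephyr-7b-beta"
+init:
+  s3:
+    enabled: true
+    bucketURL: s3://k8s-model-zephyr/llm/deployment/HuggingFaceH4/zephyr-7b-beta
+```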
## Parameters
diff --git a/huggingface-model/values.yaml b/huggingface-model/values.yaml
index 6efd659..907c6f2 100644
--- a/huggingface-model/values.yaml
+++ b/huggingface-model/values.yaml
@@ -3,15 +3,15 @@
## ref: https://huggingface.co/models
## @param model.organization Models' company name on huggingface, required!
## @param model.name Models' name on huggingface, required!
-## e.g. to deploy model https://huggingface.co/segmind/SSD-1B use configuration below:
-## organization: segmind
-## name: SSD-1B
+## e.g. to deploy model https://huggingface.co/HuggingFaceH4/zephyr-7b-beta use configuration below:
+## organization: HuggingFaceH4
+## name: zephyr-7b-beta
##
model:
organization: ""
name: ""
-## Init configuration. By default, init clone model from huggingface git.
+## Init configuration. By default, init clones the model from the Hugging Face git repository using git-lfs.
## The another way is to upload model to s3 bucket to reduce init delay and external traffic.
## @param init.s3.enabled Turn on/off s3 data source Default: disabled
## @param init.s3.bucketURL Full s3 URL included path to model's folder
@@ -19,9 +19,9 @@ model:
init:
s3:
enabled: false
- bucketURL: s3://k8s-model-zephyr/llm/deployment/segmind/SSD-1B
+ bucketURL: s3://k8s-model-zephyr/llm/deployment/HuggingFaceH4/zephyr-7b-beta
-## huggingface block configure running text-generation-launcher internal port and additional arguments
+## The huggingface block configures the text-generation-launcher internal port and additional arguments
## @param huggingface.containerPort Deployment/StatefulSet ContainerPort, optional
##
huggingface: