diff --git a/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst b/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst
index 54c1d44c865c21..1f0e63c071d4bc 100644
--- a/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst
+++ b/docs/articles_en/openvino-workflow/model-optimization-guide/weight-compression.rst
@@ -25,7 +25,7 @@ from about 25GB to 4GB using 4-bit weight compression.
 compression may result in more accuracy reduction than with larger models.
 Therefore, weight compression is recommended for use with LLMs only.
 
-LLMs and other models that require
+LLMs and other GenAI models that require
 extensive memory to store the weights during inference can benefit from weight
 compression as it:
 
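For reference, the "4-bit weight compression" this doc page describes maps to NNCF's compress_weights API. Below is a minimal sketch of how it is typically applied to an OpenVINO IR; the model path and the ratio/group_size values are illustrative assumptions, not taken from this change:

    import nncf
    import openvino as ov

    core = ov.Core()
    # Hypothetical path to an exported OpenVINO IR of an LLM.
    model = core.read_model("llama-2-7b/openvino_model.xml")

    # Compress weights to 4-bit integers. `ratio` controls what fraction of
    # eligible layers get 4-bit weights (the remainder fall back to 8-bit),
    # and `group_size` sets the per-group quantization granularity.
    compressed_model = nncf.compress_weights(
        model,
        mode=nncf.CompressWeightsMode.INT4_SYM,
        ratio=0.8,
        group_size=128,
    )

    ov.save_model(compressed_model, "llama-2-7b/openvino_model_int4.xml")

The memory reduction cited in the hunk context (roughly 25GB down to 4GB for a 7B-parameter model) follows from storing most weights in 4 bits instead of 32-bit floats.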