
Commit

Merge branch 'main' into main
xenova authored Dec 7, 2024
2 parents 5a61999 + 7dffb9a commit 2ad01ad
Showing 341 changed files with 43,262 additions and 16,157 deletions.
3 changes: 0 additions & 3 deletions .github/FUNDING.yml

This file was deleted.

1 change: 0 additions & 1 deletion .github/workflows/documentation.yml
@@ -10,7 +10,6 @@ jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
with:
repo_owner: xenova
commit_sha: ${{ github.sha }}
package: transformers.js
path_to_docs: transformers.js/docs/source
1 change: 0 additions & 1 deletion .github/workflows/pr-documentation.yml
@@ -11,7 +11,6 @@ jobs:
build:
uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
with:
repo_owner: xenova
commit_sha: ${{ github.sha }}
pr_number: ${{ github.event.number }}
package: transformers.js
15 changes: 8 additions & 7 deletions .github/workflows/tests.yml
@@ -7,17 +7,20 @@ on:
pull_request:
branches:
- main

env:
TESTING_REMOTELY: true
types:
- opened
- reopened
- synchronize
- ready_for_review

jobs:
build:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest

strategy:
matrix:
node-version: [18.x, latest, node]
node-version: [18, 20, 22]

steps:
- uses: actions/checkout@v4
@@ -27,11 +30,9 @@ jobs:
node-version: ${{ matrix.node-version }}
- run: npm ci
- run: npm run build
- run: pip install -r tests/requirements.txt

# Setup the testing environment
- run: npm run generate-tests
- run: git lfs install && GIT_CLONE_PROTECTION_ACTIVE=false git clone https://huggingface.co/Xenova/t5-small ./models/t5-small
- run: git lfs install && GIT_CLONE_PROTECTION_ACTIVE=false git clone https://huggingface.co/hf-internal-testing/tiny-random-T5ForConditionalGeneration ./models/hf-internal-testing/tiny-random-T5ForConditionalGeneration

# Actually run tests
- run: npm run test
1 change: 1 addition & 0 deletions .gitignore
@@ -2,6 +2,7 @@ __pycache__
.vscode
node_modules
.cache
.DS_STORE

# Do not track build artifacts/generated files
/dist
8 changes: 8 additions & 0 deletions .prettierignore
@@ -0,0 +1,8 @@
# Ignore artifacts:
.github
dist
docs
examples
scripts
types
*.md
10 changes: 10 additions & 0 deletions .prettierrc
@@ -0,0 +1,10 @@
{
"overrides": [
{
"files": ["tests/**/*.js"],
"options": {
"printWidth": 10000000
}
}
]
}
177 changes: 119 additions & 58 deletions README.md

Large diffs are not rendered by default.

38 changes: 15 additions & 23 deletions docs/scripts/build_readme.py
@@ -5,41 +5,31 @@
<p align="center">
<br/>
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/raw/main/transformersjs-dark.svg" width="500" style="max-width: 100%;">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/raw/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
<img alt="transformers.js javascript library logo" src="https://huggingface.co/datasets/Xenova/transformers.js-docs/raw/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-dark.svg" width="500" style="max-width: 100%;">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
<img alt="transformers.js javascript library logo" src="https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/transformersjs-light.svg" width="500" style="max-width: 100%;">
</picture>
<br/>
</p>
<p align="center">
<a href="https://www.npmjs.com/package/@xenova/transformers">
<img alt="NPM" src="https://img.shields.io/npm/v/@xenova/transformers">
</a>
<a href="https://www.npmjs.com/package/@xenova/transformers">
<img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@xenova/transformers">
</a>
<a href="https://www.jsdelivr.com/package/npm/@xenova/transformers">
<img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@xenova/transformers">
</a>
<a href="https://github.com/xenova/transformers.js/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/xenova/transformers.js?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers.js/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers.js/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://www.npmjs.com/package/@huggingface/transformers"><img alt="NPM" src="https://img.shields.io/npm/v/@huggingface/transformers"></a>
<a href="https://www.npmjs.com/package/@huggingface/transformers"><img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@huggingface/transformers"></a>
<a href="https://www.jsdelivr.com/package/npm/@huggingface/transformers"><img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@huggingface/transformers"></a>
<a href="https://github.com/huggingface/transformers.js/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/huggingface/transformers.js?color=blue"></a>
<a href="https://huggingface.co/docs/transformers.js/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers.js/index.svg?down_color=red&down_message=offline&up_message=online"></a>
</p>
{intro}
## Quick tour
{quick_tour}
## Installation
{installation}
## Quick tour
{quick_tour}
## Examples
{examples}
@@ -52,7 +42,7 @@
Here is the list of all tasks and architectures currently supported by Transformers.js.
If you don't see your task/model listed here or it is not yet supported, feel free
to open up a feature request [here](https://github.com/xenova/transformers.js/issues/new/choose).
to open up a feature request [here](https://github.com/huggingface/transformers.js/issues/new/choose).
To find compatible models on the Hub, select the "transformers.js" library tag in the filter menu (or visit [this link](https://huggingface.co/models?library=transformers.js)).
You can refine your search by selecting the task you're interested in (e.g., [text-classification](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers.js)).
Expand All @@ -79,6 +69,8 @@
CUSTOM_LINK_MAP = {
'/custom_usage#convert-your-models-to-onnx': '#convert-your-models-to-onnx',
'./api/env': DOCS_BASE_URL + '/api/env',
'./guides/webgpu': DOCS_BASE_URL + '/guides/webgpu',
'./guides/dtypes': DOCS_BASE_URL + '/guides/dtypes',
}


12 changes: 8 additions & 4 deletions docs/snippets/0_introduction.snippet
@@ -1,11 +1,15 @@

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
<h3 align="center">
<p>State-of-the-art Machine Learning for the Web</p>
</h3>

Run 🤗 Transformers directly in your browser, with no need for a server!

Transformers.js is designed to be functionally equivalent to Hugging Face's [transformers](https://github.com/huggingface/transformers) python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:
- 📝 **Natural Language Processing**: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
- 🖼️ **Computer Vision**: image classification, object detection, and segmentation.
- 🗣️ **Audio**: automatic speech recognition and audio classification.
- 🐙 **Multimodal**: zero-shot image classification.
- 🖼️ **Computer Vision**: image classification, object detection, segmentation, and depth estimation.
- 🗣️ **Audio**: automatic speech recognition, audio classification, and text-to-speech.
- 🐙 **Multimodal**: embeddings, zero-shot audio classification, zero-shot image classification, and zero-shot object detection.

Transformers.js uses [ONNX Runtime](https://onnxruntime.ai/) to run models in the browser. The best part about it, is that you can easily [convert](#convert-your-models-to-onnx) your pretrained PyTorch, TensorFlow, or JAX models to ONNX using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).
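For instance, the same `pipeline` API shown in the quick tour below also covers the audio and vision tasks listed above — a minimal sketch, assuming an ASR checkpoint such as `Xenova/whisper-tiny.en` and a sample audio URL (both illustrative):

```javascript
import { pipeline } from '@huggingface/transformers';

// Automatic speech recognition via the same pipeline API
// (model id and audio URL are illustrative placeholders)
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
const output = await transcriber('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav');
// e.g. { text: ' And so my fellow Americans, ...' }
```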

35 changes: 31 additions & 4 deletions docs/snippets/1_quick-tour.snippet
@@ -23,12 +23,12 @@ out = pipe('I love transformers!')
<td>

```javascript
import { pipeline } from '@xenova/transformers';
import { pipeline } from '@huggingface/transformers';
// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
const pipe = await pipeline('sentiment-analysis');
let out = await pipe('I love transformers!');
const out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]
```

@@ -40,5 +40,32 @@ let out = await pipe('I love transformers!');
You can also use a different model by specifying the model id or path as the second argument to the `pipeline` function. For example:
```javascript
// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
const pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
```

By default, when running in the browser, the model will be run on your CPU (via WASM). If you would like
to run the model on your GPU (via WebGPU), you can do this by setting `device: 'webgpu'`, for example:
```javascript
// Run the model on WebGPU
const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
device: 'webgpu',
});
```

For more information, check out the [WebGPU guide](./guides/webgpu).

> [!WARNING]
> The WebGPU API is still experimental in many browsers, so if you run into any issues,
> please file a [bug report](https://github.com/huggingface/transformers.js/issues/new?title=%5BWebGPU%5D%20Error%20running%20MODEL_ID_GOES_HERE&assignees=&labels=bug,webgpu&projects=&template=1_bug-report.yml).

In resource-constrained environments, such as web browsers, it is advisable to use a quantized version of
the model to lower bandwidth and optimize performance. This can be achieved by adjusting the `dtype` option,
which allows you to select the appropriate data type for your model. While the available options may vary
depending on the specific model, typical choices include `"fp32"` (default for WebGPU), `"fp16"`, `"q8"`
(default for WASM), and `"q4"`. For more information, check out the [quantization guide](./guides/dtypes).
```javascript
// Run the model at 4-bit quantization
const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
dtype: 'q4',
});
```
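
The two options above can also be combined — a minimal sketch, assuming the chosen checkpoint ships q4 weights that run on WebGPU:

```javascript
import { pipeline } from '@huggingface/transformers';

// Run on WebGPU with 4-bit quantized weights
// (assumes the model provides q4 weights compatible with WebGPU)
const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
  device: 'webgpu',
  dtype: 'q4',
});
const out = await pipe('I love transformers!');
// [{ label: 'POSITIVE', score: ... }]
```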
6 changes: 3 additions & 3 deletions docs/snippets/2_installation.snippet
@@ -1,12 +1,12 @@

To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
To install via [NPM](https://www.npmjs.com/package/@huggingface/transformers), run:
```bash
npm i @xenova/transformers
npm i @huggingface/transformers
```

Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.1.1';
</script>
```
26 changes: 13 additions & 13 deletions docs/snippets/3_examples.snippet
@@ -1,20 +1,20 @@
Want to jump straight in? Get started with one of our sample applications/templates:
Want to jump straight in? Get started with one of our sample applications/templates, which can be found [here](https://github.com/huggingface/transformers.js-examples).

| Name | Description | Links |
|-------------------|----------------------------------|-------------------------------|
| Whisper Web | Speech recognition w/ Whisper | [code](https://github.com/xenova/whisper-web), [demo](https://huggingface.co/spaces/Xenova/whisper-web) |
| Doodle Dash | Real-time sketch-recognition game | [blog](https://huggingface.co/blog/ml-web-games), [code](https://github.com/xenova/doodle-dash), [demo](https://huggingface.co/spaces/Xenova/doodle-dash) |
| Code Playground | In-browser code completion website | [code](https://github.com/xenova/transformers.js/tree/main/examples/code-completion/), [demo](https://huggingface.co/spaces/Xenova/ai-code-playground) |
| Semantic Image Search (client-side) | Search for images with text | [code](https://github.com/xenova/transformers.js/tree/main/examples/semantic-image-search-client/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search-client) |
| Semantic Image Search (server-side) | Search for images with text (Supabase) | [code](https://github.com/xenova/transformers.js/tree/main/examples/semantic-image-search/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search) |
| Vanilla JavaScript | In-browser object detection | [video](https://scrimba.com/scrim/cKm9bDAg), [code](https://github.com/xenova/transformers.js/tree/main/examples/vanilla-js/), [demo](https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector) |
| React | Multilingual translation website | [code](https://github.com/xenova/transformers.js/tree/main/examples/react-translator/), [demo](https://huggingface.co/spaces/Xenova/react-translator) |
| Text to speech (client-side) | In-browser speech synthesis | [code](https://github.com/xenova/transformers.js/tree/main/examples/text-to-speech-client/), [demo](https://huggingface.co/spaces/Xenova/text-to-speech-client) |
| Browser extension | Text classification extension | [code](https://github.com/xenova/transformers.js/tree/main/examples/extension/) |
| Electron | Text classification application | [code](https://github.com/xenova/transformers.js/tree/main/examples/electron/) |
| Next.js (client-side) | Sentiment analysis (in-browser inference) | [code](https://github.com/xenova/transformers.js/tree/main/examples/next-client/), [demo](https://huggingface.co/spaces/Xenova/next-example-app) |
| Next.js (server-side) | Sentiment analysis (Node.js inference) | [code](https://github.com/xenova/transformers.js/tree/main/examples/next-server/), [demo](https://huggingface.co/spaces/Xenova/next-server-example-app) |
| Node.js | Sentiment analysis API | [code](https://github.com/xenova/transformers.js/tree/main/examples/node/) |
| Demo site | A collection of demos | [code](https://github.com/xenova/transformers.js/tree/main/examples/demo-site/), [demo](https://xenova.github.io/transformers.js/) |
| Code Playground | In-browser code completion website | [code](https://github.com/huggingface/transformers.js/tree/main/examples/code-completion/), [demo](https://huggingface.co/spaces/Xenova/ai-code-playground) |
| Semantic Image Search (client-side) | Search for images with text | [code](https://github.com/huggingface/transformers.js/tree/main/examples/semantic-image-search-client/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search-client) |
| Semantic Image Search (server-side) | Search for images with text (Supabase) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/semantic-image-search/), [demo](https://huggingface.co/spaces/Xenova/semantic-image-search) |
| Vanilla JavaScript | In-browser object detection | [video](https://scrimba.com/scrim/cKm9bDAg), [code](https://github.com/huggingface/transformers.js/tree/main/examples/vanilla-js/), [demo](https://huggingface.co/spaces/Scrimba/vanilla-js-object-detector) |
| React | Multilingual translation website | [code](https://github.com/huggingface/transformers.js/tree/main/examples/react-translator/), [demo](https://huggingface.co/spaces/Xenova/react-translator) |
| Text to speech (client-side) | In-browser speech synthesis | [code](https://github.com/huggingface/transformers.js/tree/main/examples/text-to-speech-client/), [demo](https://huggingface.co/spaces/Xenova/text-to-speech-client) |
| Browser extension | Text classification extension | [code](https://github.com/huggingface/transformers.js/tree/main/examples/extension/) |
| Electron | Text classification application | [code](https://github.com/huggingface/transformers.js/tree/main/examples/electron/) |
| Next.js (client-side) | Sentiment analysis (in-browser inference) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/next-client/), [demo](https://huggingface.co/spaces/Xenova/next-example-app) |
| Next.js (server-side) | Sentiment analysis (Node.js inference) | [code](https://github.com/huggingface/transformers.js/tree/main/examples/next-server/), [demo](https://huggingface.co/spaces/Xenova/next-server-example-app) |
| Node.js | Sentiment analysis API | [code](https://github.com/huggingface/transformers.js/tree/main/examples/node/) |
| Demo site | A collection of demos | [code](https://github.com/huggingface/transformers.js/tree/main/examples/demo-site/), [demo](https://xenova.github.io/transformers.js/) |

Check out the Transformers.js [template](https://huggingface.co/new-space?template=static-templates%2Ftransformers.js) on Hugging Face to get started in one click!
7 changes: 3 additions & 4 deletions docs/snippets/4_custom-usage.snippet
@@ -1,12 +1,11 @@


By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2/dist/), which should work out-of-the-box. You can customize this as follows:

By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.1.1/dist/), which should work out-of-the-box. You can customize this as follows:

### Settings

```javascript
import { env } from '@xenova/transformers';
import { env } from '@huggingface/transformers';
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
@@ -22,7 +21,7 @@ For a full list of available settings, check out the [API Reference](./api/env).

### Convert your models to ONNX

We recommend using our [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.
We recommend using our [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.

```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```
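
Once converted, the resulting files can be served locally and loaded through the settings shown above — a minimal sketch, assuming the converted model was written to `./models/my-model/` (paths and model name are illustrative):

```javascript
import { pipeline, env } from '@huggingface/transformers';

// Load the converted model from disk instead of the Hugging Face Hub
env.localModelPath = './models/';  // directory containing converted models
env.allowRemoteModels = false;     // don't fall back to the Hub

const pipe = await pipeline('sentiment-analysis', 'my-model');
const out = await pipe('I love transformers!');
```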