diff --git a/README.md b/README.md
index 4a7bea998..587b4945f 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
-![Lightly Logo](docs/logos/lightly_logo_crop.png)
+![Lightly SSL self-supervised learning Logo](docs/logos/lightly_SSL_logo_crop.png)
![GitHub](https://img.shields.io/github/license/lightly-ai/lightly)
![Unit Tests](https://github.com/lightly-ai/lightly/workflows/Unit%20Tests/badge.svg)
@@ -7,19 +7,21 @@
[![Downloads](https://static.pepy.tech/badge/lightly)](https://pepy.tech/project/lightly)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
-Lightly is a computer vision framework for self-supervised learning.
+Lightly SSL is a computer vision framework for self-supervised learning.
- [Documentation](https://docs.lightly.ai/self-supervised-learning/)
- [Github](https://github.com/lightly-ai/lightly)
- [Discord](https://discord.gg/xvNJW94) (We have weekly paper sessions!)
-We also built a whole platform on top, with additional features for active learning
-and data curation. If you're interested in the platform, check out [lightly.ai](https://www.lightly.ai).
+We've also built a whole platform on top, with additional features for active learning
+and [data curation](https://docs.lightly.ai/docs/what-is-lightly). If you're interested in the
+Lightly Worker Solution to easily process millions of samples and run [powerful algorithms](https://docs.lightly.ai/docs/selection)
+on your data, check out [lightly.ai](https://www.lightly.ai). It's free to get started!
## Features
-This framework offers the following features:
+This self-supervised learning framework offers the following features:
- Modular framework, which exposes low-level building blocks such as loss functions and
model heads.
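+
+As a quick, hedged illustration of these building blocks (a minimal sketch, not
+the full API surface; the projection head and loss shown here are just two of
+the exposed components):
+
+```python
+import torch
+
+from lightly.loss import NTXentLoss
+from lightly.models.modules import SimCLRProjectionHead
+
+# Project backbone features of two augmented views and compare them with a
+# contrastive loss. The random tensors stand in for real backbone outputs.
+projection_head = SimCLRProjectionHead(input_dim=512, hidden_dim=512, output_dim=128)
+criterion = NTXentLoss()
+
+z0 = projection_head(torch.rand(8, 512))  # features of view 0
+z1 = projection_head(torch.rand(8, 512))  # features of view 1
+loss = criterion(z0, z1)
+```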
diff --git a/docs/logos/lightly_SSL_logo_crop.png b/docs/logos/lightly_SSL_logo_crop.png
new file mode 100644
index 000000000..62028eaf2
Binary files /dev/null and b/docs/logos/lightly_SSL_logo_crop.png differ
diff --git a/docs/logos/lightly_SSL_logo_crop_white_text.png b/docs/logos/lightly_SSL_logo_crop_white_text.png
new file mode 100644
index 000000000..90733d7d2
Binary files /dev/null and b/docs/logos/lightly_SSL_logo_crop_white_text.png differ
diff --git a/docs/source/_templates/footer.html b/docs/source/_templates/footer.html
index 86060f5ef..217e11623 100644
--- a/docs/source/_templates/footer.html
+++ b/docs/source/_templates/footer.html
@@ -26,7 +26,13 @@
{%- else %}
{% set copyright = copyright|e %}
- © {% trans %}Copyright{% endtrans %} {{ copyright_year }}, {{ copyright }}
+ © {% trans %}Copyright{% endtrans %} {{ copyright_year }}
+ | {{ copyright }}
+ |
+
+ Source Code
+
+ | Lightly Worker Solution documentation
{%- endif %}
{%- endif %}
diff --git a/docs/source/_templates/layout.html b/docs/source/_templates/layout.html
index bcf32e9f5..e39f8042f 100644
--- a/docs/source/_templates/layout.html
+++ b/docs/source/_templates/layout.html
@@ -8,6 +8,36 @@
We need this to override the footer
-->
{%- block content %}
+
+
{% else %}
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 0d649486c..eb2721c57 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -22,10 +22,10 @@
# -- Project information -----------------------------------------------------
project = "lightly"
-copyright_year = "2020"
+copyright_year = "2020-"
copyright = "Lightly AG"
website_url = "https://www.lightly.ai/"
-author = "Philipp Wirth, Igor Susmelj"
+author = "Lightly Team"
# The full version, including alpha/beta/rc tags
release = lightly.__version__
@@ -98,7 +98,7 @@
html_favicon = "favicon.png"
-html_logo = "../logos/lightly_logo_crop_white_text.png"
+html_logo = "../logos/lightly_SSL_logo_crop_white_text.png"
# Exposes variables so that they can be used by django
html_context = {
diff --git a/docs/source/docker/advanced/datapool.rst b/docs/source/docker/advanced/datapool.rst
index 6a1b6badb..389af3336 100644
--- a/docs/source/docker/advanced/datapool.rst
+++ b/docs/source/docker/advanced/datapool.rst
@@ -3,7 +3,7 @@
Datapool
=================
-Lightly has been designed in a way that you can incrementally build up a
+The Lightly Worker has been designed so that you can incrementally build up a
dataset for your project. The software automatically keeps track of the
representations of previously selected samples and uses this information
to pick new samples in order to maximize the quality of the final dataset.
@@ -43,7 +43,7 @@ has the following advantages:
If you want to search all data in your bucket for new samples
instead of only newly added data,
then set :code:`'datasource.process_all': True` in your worker config. This has the
-same effect as creating a new Lightly dataset and running the Lightly Worker from scratch
+same effect as creating a new dataset and running the Lightly Worker from scratch
on the full dataset. We process all data instead of only the newly added ones.
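+
+A hedged sketch of scheduling such a run from Python (the client setup and the
+nested `datasource` key are assumptions based on the worker configuration page):
+
+.. code-block:: python
+
+    # Assumes an ApiWorkflowClient already configured with your token and dataset_id.
+    scheduled_run_id = client.schedule_compute_worker_run(
+        worker_config={
+            "datasource": {
+                "process_all": True,  # search all data, not only newly added samples
+            },
+        },
+    )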
@@ -67,7 +67,7 @@ first time.
|-- passageway1-c1.avi
`-- terrace1-c0.avi
-Let's create a Lightly dataset which uses that bucket (choose your tab - S3, GCS or Azure):
+Let's create a dataset which uses that bucket (choose your tab - S3, GCS or Azure):
.. tabs::
.. tab:: AWS S3 Datasource
diff --git a/docs/source/docker/advanced/datasource_metadata.rst b/docs/source/docker/advanced/datasource_metadata.rst
index d16b4356d..da7288393 100644
--- a/docs/source/docker/advanced/datasource_metadata.rst
+++ b/docs/source/docker/advanced/datasource_metadata.rst
@@ -3,7 +3,7 @@
Add Metadata to a Datasource
===============================
-Lightly can make use of metadata collected alongside your images or videos. Provided,
+Lightly Worker can make use of metadata collected alongside your images or videos. Provided
metadata can be used to steer the selection process and to analyze the selected dataset
in the Lightly Platform.
@@ -45,7 +45,7 @@ Metadata Schema
The schema defines the format of the metadata and helps the Lightly Platform to correctly identify
and display different types of metadata.
-You can provide this information to Lightly by adding a `schema.json` to the
+You can provide this information to the Lightly Worker by adding a `schema.json` to the
`.lightly/metadata` directory. The `schema.json` file must contain a list of
configuration entries. Each of the entries is a dictionary with the following keys:
@@ -105,9 +105,9 @@ of the images we have collected. A possible schema could look like this:
Metadata Files
--------------
-Lightly requires a single metadata file per image or video. If an image or video has no corresponding metadata file,
-Lightly assumes the default value from the `schema.json`. If a metadata file is provided for a full video,
-Lightly assumes that the metadata is valid for all frames in that video.
+The Lightly Worker requires a single metadata file per image or video. If an image or video has no corresponding metadata file,
+the Lightly Worker assumes the default value from the `schema.json`. If a metadata file is provided for a full video,
+the Lightly Worker assumes that the metadata is valid for all frames in that video.
To provide metadata for an image or a video, place a metadata file with the same name
as the image or video in the `.lightly/metadata` directory but change the file extension to
@@ -130,8 +130,8 @@ as the image or video in the `.lightly/metadata` directory but change the file e
When working with videos it's also possible to provide metadata on a per-frame basis.
-Then, Lightly requires a metadata file per frame. If a frame has no corresponding metadata file,
-Lightly assumes the default value from the `schema.json`. Lightly uses a naming convention to
+Then, the Lightly Worker requires a metadata file per frame. If a frame has no corresponding metadata file,
+the Lightly Worker assumes the default value from the `schema.json`. The Lightly Worker uses a naming convention to
identify frames: The filename of a frame consists of the video filename, the frame number
(padded to the length of the number of frames in the video), the video format separated
by hyphens. For example, for a video with 200 frames, the frame number will be padded
diff --git a/docs/source/docker/advanced/datasource_predictions.rst b/docs/source/docker/advanced/datasource_predictions.rst
index ae2160525..584bc4d93 100644
--- a/docs/source/docker/advanced/datasource_predictions.rst
+++ b/docs/source/docker/advanced/datasource_predictions.rst
@@ -3,9 +3,9 @@
Add Predictions to a Datasource
===============================
-Lightly can not only use images you provided in a datasource, but also predictions of a ML model on your images.
+Lightly Worker can not only use images you provided in a datasource, but also predictions of an ML model on your images.
They are used for active learning for selecting images based on the objects in them.
-Furthermore, object detection predictions can be used running Lightly on object level.
+Furthermore, object detection predictions can be used to run the Lightly Worker on object level.
By providing the predictions in the datasource,
you have full control over them and they scale well to millions of samples.
Furthermore, if you add new samples to your datasource, you can simultaneously
@@ -62,8 +62,8 @@ and an object detection task). All of the files are explained in the next sectio
Prediction Tasks
----------------
-To let Lightly know what kind of prediction tasks you want to work with, Lightly
-needs to know their names. It's very easy to let Lightly know which tasks exist:
+The Lightly Worker needs to know the names of the prediction tasks you want to
+work with. Registering them is easy:
simply add a `tasks.json` in your lightly bucket stored at the subdirectory `.lightly/predictions/`.
The `tasks.json` file must include a list of your task names which must match name
@@ -116,7 +116,7 @@ we can specify which subfolders contain relevant predictions in the `tasks.json`
Prediction Schema
-----------------
-For Lightly it's required to store a prediction schema. The schema defines the
+Storing a prediction schema is required. The schema defines the
format of the predictions and helps the Lightly Platform to correctly identify
and display classes. It also helps to prevent errors as all predictions which
are loaded are validated against this schema.
@@ -127,7 +127,7 @@ all the categories and their corresponding ids. For other tasks, such as keypoin
detection, it can be useful to store additional information like which keypoints
are connected with each other by an edge.
-You can provide all this information to Lightly by adding a `schema.json` to the
+You can provide all this information to the Lightly Worker by adding a `schema.json` to the
directory of the respective task. The schema.json file must have a key `categories`
with a corresponding list of categories following the COCO annotation format.
It must also have a key `task_type` indicating the type of the predictions.
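+
+A minimal sketch of such a `schema.json` for the weather classification example
+below (the `task_type` value is an assumption; the category format follows the
+COCO annotation format as stated above):
+
+.. code-block:: python
+
+    import json
+
+    schema = {
+        "task_type": "classification",
+        "categories": [
+            {"id": 0, "name": "sunny"},
+            {"id": 1, "name": "clouded"},
+            {"id": 2, "name": "rainy"},
+        ],
+    }
+    # Hypothetical task directory; adjust to your own task name.
+    with open(".lightly/predictions/my_weather_classification_task/schema.json", "w") as f:
+        json.dump(schema, f, indent=4)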
@@ -167,10 +167,10 @@ The three classes are sunny, clouded, and rainy.
Prediction Files
----------------
-Lightly requires a **single prediction file per image**. The file should be a .json
+The Lightly Worker requires a **single prediction file per image**. The file should be a .json
following the format defined under :ref:`prediction-format` and stored in the subdirectory
`.lightly/predictions/${TASK_NAME}` in the storage bucket the dataset was configured with.
-In order to make sure Lightly can match the predictions to the correct source image,
+In order to make sure the Lightly Worker can match the predictions to the correct source image,
it's necessary to follow the naming convention:
.. code-block:: bash
@@ -189,7 +189,7 @@ it's necessary to follow the naming convention:
Prediction Files for Videos
---------------------------
-When working with videos, Lightly requires a prediction file per frame. Lightly
+When working with videos, the Lightly Worker requires a prediction file per frame. It
uses a naming convention to identify frames: The filename of a frame consists of
the video filename, the video format, and the frame number (padded to the length
of the number of frames in the video) separated by hyphens. For example, for a
@@ -363,7 +363,7 @@ belonging to that category. Optionally, a list of probabilities can be provided
containing a probability for each category, indicating the likeliness that the
segment belongs to that category.
-To kickstart using Lightly with semantic segmentation predictions we created an
+To kickstart using the Lightly Worker with semantic segmentation predictions we created an
example script that takes model predictions and converts them to the correct
format :download:`semantic_segmentation_inference.py
`
@@ -403,13 +403,13 @@ following function:
Segmentation models oftentimes output a probability for each pixel and category.
Storing such probabilities can quickly result in large file sizes if the input
-images have a high resolution. To reduce storage requirements, Lightly expects
+images have a high resolution. To reduce storage requirements, the Lightly Worker expects
only a single score or probability per segmentation. If you have scores or
probabilities for each pixel in the image, you have to first aggregate them
into a single score/probability. We recommend to take either the median or mean
score/probability over all pixels within the segmentation mask. The example
below shows how pixelwise segmentation predictions can be converted to the
-format required by Lightly.
+format required by the Lightly Worker.
.. code-block:: python
@@ -522,7 +522,7 @@ Don't forget to change these 2 parameters at the top of the script.
Creating Prediction Files for Videos
-------------------------------------
-Lightly expects one prediction file per frame in a video. Predictions can be
+The Lightly Worker expects one prediction file per frame in a video. Predictions can be
created following the Python example code below. Make sure that `PyAV `_
is installed on your system for it to work correctly.
diff --git a/docs/source/docker/advanced/load_model_from_checkpoint.rst b/docs/source/docker/advanced/load_model_from_checkpoint.rst
index 09e6f4ed7..58c28c5e4 100644
--- a/docs/source/docker/advanced/load_model_from_checkpoint.rst
+++ b/docs/source/docker/advanced/load_model_from_checkpoint.rst
@@ -3,8 +3,8 @@
Load Model from Checkpoint
==========================
-The Lightly worker can be used to :ref:`train a self-supervised model on your data. `
-Lightly saves the weights of the model after training to a checkpoint file in
+The Lightly Worker can be used to :ref:`train a self-supervised model on your data. `
+The Lightly Worker saves the weights of the model after training to a checkpoint file in
:code:`output_dir/lightly_epoch_X.ckpt`. This checkpoint can then be further
used to, for example, train a classifier model on your dataset. The code below
demonstrates how the checkpoint can be loaded:
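+
+A rough sketch of the idea, assuming the checkpoint follows the standard
+PyTorch Lightning layout with a `state_dict` key and that a SimCLR-style model
+with a `backbone` attribute was trained:
+
+.. code-block:: python
+
+    import torch
+
+    checkpoint = torch.load("output_dir/lightly_epoch_12.ckpt")  # hypothetical epoch
+    state_dict = checkpoint["state_dict"]
+
+    # Keep only the backbone weights and strip the "backbone." prefix so they
+    # can be loaded into a plain backbone for downstream fine-tuning.
+    backbone_state_dict = {
+        k.replace("backbone.", ""): v
+        for k, v in state_dict.items()
+        if k.startswith("backbone.")
+    }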
diff --git a/docs/source/docker/advanced/object_level.rst b/docs/source/docker/advanced/object_level.rst
index d211ed85a..5b96289ec 100644
--- a/docs/source/docker/advanced/object_level.rst
+++ b/docs/source/docker/advanced/object_level.rst
@@ -2,7 +2,7 @@
Object Level
============
-Lightly does not only work on full images but also on an object level. This
+The Lightly Worker works not only on full images but also on an object level. This
workflow is especially useful for datasets containing small objects or multiple
objects in each image and provides the following benefits over the full image
workflow:
@@ -21,7 +21,7 @@ workflow:
Prerequisites
-------------
-In order to use the object level workflow with Lightly, you will need the
+In order to use the object level workflow with the Lightly Worker, you will need the
following things:
- The installed Lightly Worker (see :ref:`docker-setup`)
@@ -31,13 +31,13 @@ following things:
.. note::
- If you don't have any predictions available, you can use the Lightly pretagging
+ If you don't have any predictions available, you can use the Lightly Worker pretagging
model. See :ref:`Pretagging ` for more information.
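+
+As a hedged sketch, pretagging can be enabled through the worker config (the
+`pretagging` key mirrors the configuration page; the client setup is assumed):
+
+.. code-block:: python
+
+    scheduled_run_id = client.schedule_compute_worker_run(
+        worker_config={
+            "pretagging": True,  # assumption: enables the built-in pretagging model
+        },
+    )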
Predictions
-----------
-Lightly needs to know which objects to process. This information is provided
+The Lightly Worker needs to know which objects to process. This information is provided
by uploading a set of object predictions to the datasource (see :ref:`docker-datasource-predictions`).
Let's say we are working with a dataset containing different types of vehicles
and used an object detection model to find possible vehicle objects in the
@@ -170,7 +170,7 @@ code to spin up a Lightly Worker
Padding
-------
-Lightly makes it possible to add a padding around your bounding boxes. This allows
+The Lightly Worker makes it possible to add padding around your bounding boxes. This allows
for better visualization of the cropped images in the web-app and can improve the
embeddings of the objects as the embedding model sees the objects in context. To add
padding, simply specify `object_level.padding=X` where `X` is the padding relative
@@ -239,9 +239,9 @@ properties of your dataset and reveal things like:
These hidden biases are hard to find in a dataset if you only rely on full
images or the coarse vehicle type predicted by the object detection model.
-Lightly helps you to identify them quickly and assists you in monitoring and
+The Lightly Worker helps you to identify them quickly and assists you in monitoring and
improving the quality of your dataset. After an initial exploration you can now
-take further steps to enhance the dataset using one of the workflows Lightly
+take further steps to enhance the dataset using one of the workflows the Lightly Worker
provides:
- Select a subset of your data using our :ref:`Sampling Algorithms `
@@ -252,7 +252,7 @@ provides:
Multiple Object Level Runs
--------------------------
You can run multiple object level workflows using the same dataset. To start a
-new run, please select your original full image dataset in the Lightly Web App
+new run, please select your original full image dataset in the Lightly Platform
and schedule a new run from there. If you are running the Lightly Worker from Python or
over the API, you have to set the `dataset_id` configuration option to the id of
the original full image dataset. In both cases make sure that the run is *not*
@@ -261,7 +261,7 @@ started from the crops dataset as this is not supported!
You can control to which crops dataset the newly selected object crops are
uploaded by setting the `object_level.crop_dataset_name` configuration option.
By default this option is not set and if you did not specify it in the first run,
-you can also omit it in future runs. In this case Lightly will automatically
+you can also omit it in future runs. In this case the Lightly Worker will automatically
find the existing crops dataset and add the new crops to it. If you want to
upload the crops to a new dataset or have set a custom crop dataset name in a
previous run, then set the `object_level.crop_dataset_name` option to a new
diff --git a/docs/source/docker/advanced/overview.rst b/docs/source/docker/advanced/overview.rst
index 14e592240..3057c5448 100644
--- a/docs/source/docker/advanced/overview.rst
+++ b/docs/source/docker/advanced/overview.rst
@@ -1,6 +1,6 @@
Advanced
===================================
-Here you learn more advanced usage patterns of Lightly Docker.
+Here you learn more advanced usage patterns of the Lightly Worker.
.. toctree::
diff --git a/docs/source/docker/configuration/configuration.rst b/docs/source/docker/configuration/configuration.rst
index a8908dfb6..241e4766f 100644
--- a/docs/source/docker/configuration/configuration.rst
+++ b/docs/source/docker/configuration/configuration.rst
@@ -38,7 +38,7 @@ The following are parameters which can be passed to the container:
token: ''
worker:
- # If specified, the docker is started as a worker on the Lightly platform.
+ # If specified, the docker is started as a worker on the Lightly Platform.
worker_id: ''
# If True, the worker notifies that it is online even though another worker
# with the same worker_id is already online.
@@ -89,12 +89,12 @@ The following are parameters which can be passed to the container:
# shortest edge to x or to resize the image to (height, width), use =-1 for no
# resizing (default). This only affects the output size of the images dumped to
# the output folder with dump_dataset=True. To change the size of images
- # uploaded to the lightly platform or your cloud bucket please use the
+ # uploaded to the Lightly Platform or your cloud bucket please use the
# lightly.resize option instead.
output_image_size: -1
output_image_format: 'png'
- # Upload the dataset to the Lightly platform.
+ # Upload the dataset to the Lightly Platform.
upload_dataset: False
# pretagging
@@ -134,14 +134,14 @@ The following are parameters which can be passed to the container:
name:
# If True keeps backup of all previous data pool states.
keep_history: True
- # Dataset id from Lightly platform where the datapool should be hosted.
+ # Dataset id from Lightly Platform where the datapool should be hosted.
dataset_id:
# datasource
# By default only new samples in the datasource are processed. Set process_all
# to True to reprocess all samples in the datasource.
datasource:
- # Dataset id from the Lightly platform.
+ # Dataset id from the Lightly Platform.
dataset_id:
# Set to True to reprocess all samples in the datasource.
process_all: False
@@ -192,7 +192,7 @@ The following are parameters which can be passed to the container:
# optional deterministic unique output subdirectory for run, in place of timestamp
run_directory:
-To get an overview of all possible configuration parameters of Lightly,
+To get an overview of all possible configuration parameters of the Lightly Worker,
please check out :ref:`ref-cli-config-default`
Choosing the Right Parameters
diff --git a/docs/source/docker/examples/datasets_in_the_wild.rst b/docs/source/docker/examples/datasets_in_the_wild.rst
index cfea4327f..90d2b1c3a 100644
--- a/docs/source/docker/examples/datasets_in_the_wild.rst
+++ b/docs/source/docker/examples/datasets_in_the_wild.rst
@@ -213,7 +213,7 @@ can process the video directly so we require only 6.4 MBytes of storage. This me
* - Metric
- ffmpeg extracted frames
- - Lightly using video
+ - Lightly Worker using video
- Reduction
* - Storage Consumption
- 447 MBytes + 6.4 MBytes
diff --git a/docs/source/docker/getting_started/first_steps.rst b/docs/source/docker/getting_started/first_steps.rst
index 715b55a40..e471bb2a4 100644
--- a/docs/source/docker/getting_started/first_steps.rst
+++ b/docs/source/docker/getting_started/first_steps.rst
@@ -26,8 +26,8 @@ The Lightly Worker follows a train, embed, select workflow:
The Lightly Worker can be easily triggered from your Python code. There are various parameters you can
-configure and we also expose the full configuration of the lightly self-supervised learning framework.
-You can use the Lightly Worker to train a self-supervised model instead of using the Lightly Python framework.
+configure and we also expose the full configuration of the Lightly self-supervised learning framework.
+You can use the Lightly Worker to train a self-supervised model instead of using the Lightly SSL framework.
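+
+As a minimal sketch (token and dataset id are placeholders), the client used
+for triggering runs is created like this:
+
+.. code-block:: python
+
+    from lightly.api import ApiWorkflowClient
+
+    # The token comes from the Lightly Platform; the dataset_id identifies the
+    # dataset the worker should process.
+    client = ApiWorkflowClient(
+        token="MY_LIGHTLY_TOKEN",
+        dataset_id="MY_DATASET_ID",
+    )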
Using Docker
-------------
@@ -56,8 +56,8 @@ Here, we quickly explain the most important parts of the typical **docker run**
Start the Lightly Worker Docker
--------------------------------
-Before we jump into the details of how to submit jobs, we need to start the Lightly image in
-worker mode (as outlined in :ref:`docker-setup`).
+Before we jump into the details of how to submit jobs, we need to start the
+Lightly Worker docker container in worker mode (as outlined in :ref:`docker-setup`).
**This is how you start your Lightly Worker:**
@@ -115,7 +115,7 @@ make sure to specify the `dataset_id` in the constructor.
INPUT bucket
^^^^^^^^^^^^
-The `INPUT` bucket is where Lightly reads your input data from. You must specify it and you must provide Lightly `LIST` and `READ` access to it.
+The `INPUT` bucket is where the Lightly Worker reads your input data from. You must specify it and you must provide Lightly `LIST` and `READ` access to it.
LIGHTLY bucket
^^^^^^^^^^^^^^
@@ -129,7 +129,7 @@ The `LIGHTLY` bucket is used for many purposes:
- Saving thumbnails of images for a more responsive Lightly Platform.
- Saving images of cropped out objects, if you use the object-level workflow. See also :ref:`docker-object-level`.
- Saving frames of videos, if your input consists of videos.
-- Providing the relevant filenames file if you want to to run the lightly worker only on a subset of input files: See also :ref:`specifying_relevant_files`.
+- Providing the relevant filenames file if you want to run the Lightly Worker only on a subset of input files: See also :ref:`specifying_relevant_files`.
- Providing predictions for running the object level workflow or as additional information for the selection process. See also :ref:`docker-datasource-predictions`.
- Providing metadata as additional information for the selection process. See also :ref:`docker-datasource-metadata`.
@@ -351,8 +351,9 @@ epochs on the input images before embedding the images and selecting from them.
)
You may not always want to train for exactly 100 epochs with the default settings.
-The Lightly worker is a wrapper around the lightly Python package.
-Hence, for training and embedding the user can access all the settings from the lightly command-line tool.
+The Lightly Worker is a wrapper around the Lightly SSL Python package.
+Hence, for training and embedding the user can access and set all the settings
+known from the Lightly SSL Python package.
Here are some of the most common parameters for the **lightly_config**
you might want to change:
@@ -364,7 +365,7 @@ you might want to change:
.. code-block:: python
:emphasize-lines: 24, 35
- :caption: Accessing the lightly parameters from Python
+ :caption: Setting the Lightly SSL parameters from Python
scheduled_run_id = client.schedule_compute_worker_run(
worker_config={
diff --git a/docs/source/docker/getting_started/hardware_recommendations.rst b/docs/source/docker/getting_started/hardware_recommendations.rst
index 67edbda1e..a5d58b05b 100644
--- a/docs/source/docker/getting_started/hardware_recommendations.rst
+++ b/docs/source/docker/getting_started/hardware_recommendations.rst
@@ -3,7 +3,7 @@
Hardware recommendations
========================
-Lightly worker is usually run on dedicated hardware
+The Lightly Worker is usually run on dedicated hardware
or in the cloud on a compute instance
which is specifically spun up to run Lightly Worker standalone.
Our recommendations on the hardware requirements of this compute instance are
@@ -42,7 +42,7 @@ Finding the compute speed bottleneck
------------------------------------
Usually, the compute speed is limited by one of three potential bottlenecks.
-Different steps of the Lightly worker use these resources to a different extent.
+Different steps of the Lightly Worker use these resources to a different extent.
Thus the bottleneck changes throughout the run. The bottlenecks are:
- data read speed: I/O
diff --git a/docs/source/docker/getting_started/selection.rst b/docs/source/docker/getting_started/selection.rst
index 8df3c7186..12bc742ff 100644
--- a/docs/source/docker/getting_started/selection.rst
+++ b/docs/source/docker/getting_started/selection.rst
@@ -3,7 +3,7 @@
Selection
=========
-Lightly allows you to specify the subset to be selected based on several objectives.
+The Lightly Worker allows you to specify the subset to be selected based on several objectives.
E.g. you can specify that the images in the subset should be visually diverse, be images the model struggles with (active learning),
should only be sharp images, or have a certain distribution of classes, e.g. be 50% from sunny, 30% from cloudy and 20% from rainy weather.
@@ -13,12 +13,12 @@ Each of these objectives is defined by a `strategy`. A strategy consists of two
- The :code:`input` defines which data the objective is defined on. This data is either a scalar number or a vector for each sample in the dataset.
- The :code:`strategy` itself defines the objective to apply on the input data.
-Lightly allows you to specify several objectives at the same time. The algorithms try to fulfil all objectives simultaneously.
+The Lightly Worker allows you to specify several objectives at the same time. The algorithms try to fulfil all objectives simultaneously.
-Lightly's data selection algorithms support four types of input:
+The Lightly Worker's data selection algorithms support four types of input (a minimal sketch follows this list):
- **Embeddings** computed using `our open source framework for self-supervised learning `_
-- **Lightly metadata** are metadata of images like the sharpness and computed out of the images themselves by Lightly.
+- **Lightly metadata** is image metadata, such as sharpness, computed from the images themselves by the Lightly Worker.
- (Optional) :ref:`Model predictions ` such as classifications, object detections or segmentations
- (Optional) :ref:`Custom metadata ` can be anything you can encode in a json file (from numbers to categorical strings)
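+
+As a minimal sketch of how an objective is specified on such an input (the
+exact `type` names here are assumptions; the tabs below give the full
+reference):
+
+.. code-block:: python
+
+    selection_config = {
+        "n_samples": 500,  # hypothetical target subset size
+        "strategies": [
+            {
+                # Select visually diverse samples based on embeddings.
+                "input": {"type": "EMBEDDINGS"},
+                "strategy": {"type": "DIVERSITY"},
+            },
+        ],
+    }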
@@ -96,7 +96,7 @@ The input can be one of the following:
.. tab:: EMBEDDINGS
- The `lightly OSS framework for self supervised learning `_ is used to compute the embeddings.
+ The `Lightly OSS framework for self-supervised learning `_ is used to compute the embeddings.
They are a vector of numbers for each sample.
You can define embeddings as input using:
@@ -213,7 +213,7 @@ The input can be one of the following:
- **Numerical** vs. **Categorical** values
- Not all metadata types can be used in all selection strategies. Lightly differentiates between numerical and categorical metadata.
+ Not all metadata types can be used in all selection strategies. The Lightly Worker differentiates between numerical and categorical metadata.
**Numerical** metadata are numbers (int, float), e.g. `lightly.sharpness` or `weather.temperature`. It is usually real-valued.
@@ -539,7 +539,7 @@ In the next step, all other strategies are applied in parallel.
from "my_weather_classification_task" for one strategy combined with predictions from
"my_object_detection_task" from another strategy.
-The Lightly optimizer tries to fulfil all strategies as good as possible.
+The Lightly Worker optimizer tries to fulfil all strategies as well as possible.
**Potential reasons why your objectives were not satisfied:**
- **Tradeoff between different objectives.**
@@ -558,12 +558,12 @@ The Lightly optimizer tries to fulfil all strategies as good as possible.
Selection on object level
-------------------------
-Lightly supports doing selection on :ref:`docker-object-level`.
+The Lightly Worker supports selection on :ref:`docker-object-level`.
While embeddings are fully available, there are some limitations regarding the usage of METADATA and predictions for SCORES and PREDICTIONS as input:
- When using the object level workflow, the object detections used to create the object crops out of the images are available and can be used for both the SCORES and PREDICTIONS input. However, predictions from other tasks are NOT available at the moment.
-- Lightly metadata is generated on the fly for the object crops and can thus be used for selection. However, other metadata is on image level and thus NOT available at the moment.
+- The Lightly Worker generates Lightly metadata on the fly for the object crops, so this metadata can be used for selection. However, other metadata is on image level and thus NOT available at the moment.
If your use case would profit from using image-level data for object-level selection, please reach out to us.
diff --git a/docs/source/docker/getting_started/setup.rst b/docs/source/docker/getting_started/setup.rst
index 387a046eb..c91d1f0fd 100644
--- a/docs/source/docker/getting_started/setup.rst
+++ b/docs/source/docker/getting_started/setup.rst
@@ -7,7 +7,7 @@ Setup
Analytics
^^^^^^^^^
-The Lightly worker currently reports usage metrics to our analytics software
+The Lightly Worker currently reports usage metrics to our analytics software
(we use mixpanel) which uses https encrypted GET and POST requests to https://api.mixpanel.com.
The transmitted data includes information about crashes and the number of samples
that have been filtered. However, **the data does not include input / output samples**,
@@ -22,7 +22,7 @@ The licensing and account management is done through the :ref:`ref-authenticatio
obtained from the Lightly Platform (https://app.lightly.ai).
The token will be used to authenticate your account.
-The authentication happens at every run of the worker. Make sure the Lightly worker
+The authentication happens at every run of the worker. Make sure the Lightly Worker
has a working internet connection and has access to https://api.lightly.ai.
@@ -78,9 +78,9 @@ In short, installing the Docker image consists of the following steps:
a :code:`container-credentials.json` file from your account manager.
2. Authenticate your docker account
- To be able to download docker images of Lightly you need to log in with these credentials.
+ To be able to download docker images of the Lightly Worker you need to log in with these credentials.
- The following command will authenticate yourself to gain access to the Lightly docker images.
+ The following command authenticates you and grants access to the Lightly Worker docker images.
We assume :code:`container-credentials.json` is in your current directory.
.. code-block:: console
@@ -123,7 +123,7 @@ In short, installing the Docker image consists of the following steps:
Update the Lightly Worker
^^^^^^^^^^^^^^^^^^^^^^^^^
-To update the Lightly worker we simply need to pull the latest docker image.
+To update the Lightly Worker we simply need to pull the latest docker image.
.. code-block:: console
@@ -140,7 +140,7 @@ Don't forget to tag the image again after pulling it.
instead of `latest`. We follow semantic versioning standards.
-Furthermore, we always recommend using the latest version of the lightly pip package
+Furthermore, we always recommend using the latest version of the Lightly SSL Python package
alongside the latest version of the Lightly Worker. You can update the
pip package using the following command.
@@ -153,7 +153,7 @@ pip package using the following command.
Sanity Check
^^^^^^^^^^^^
-**Next**, verify that the Lightly worker is installed correctly by running the following command:
+**Next**, verify that the Lightly Worker is installed correctly by running the following command:
.. code-block:: console
@@ -172,7 +172,7 @@ You should see an output similar to this one:
Register the Lightly Worker
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-**Finally**, start the Lightly worker in waiting mode. In this mode, the worker will long-poll
+**Finally**, start the Lightly Worker in waiting mode. In this mode, the worker will long-poll
the Lightly API for new jobs to process. To do so, a worker first needs to be registered.
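+
+A hedged sketch of the registration step (`register_compute_worker` and its
+`name` argument are assumptions based on the Lightly SSL package; the command
+shown in this guide is the authoritative flow):
+
+.. code-block:: python
+
+    from lightly.api import ApiWorkflowClient
+
+    client = ApiWorkflowClient(token="MY_LIGHTLY_TOKEN")
+    worker_id = client.register_compute_worker(name="my-worker")
+    # Pass this id to the container as worker.worker_id when starting it.
+    print(worker_id)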
diff --git a/docs/source/docker/integration/overview.rst b/docs/source/docker/integration/overview.rst
index ef403acca..81e509a84 100644
--- a/docs/source/docker/integration/overview.rst
+++ b/docs/source/docker/integration/overview.rst
@@ -1,6 +1,6 @@
Integration
===================================
-Here you learn how to integrate the Lightly worker into data pre-processing pipelines.
+Here you learn how to integrate the Lightly Worker into data pre-processing pipelines.
.. toctree::
diff --git a/docs/source/docker/known_issues_faq.rst b/docs/source/docker/known_issues_faq.rst
index 3fbc8ef9c..175f7b311 100644
--- a/docs/source/docker/known_issues_faq.rst
+++ b/docs/source/docker/known_issues_faq.rst
@@ -143,7 +143,7 @@ workers for data fetching :code:`lightly.loader.num_workers` there might be not
To solve this problem we need to reduce the number of workers or
increase the shared memory for the docker runtime.
-Lightly determines the number of CPU cores available and sets the number
+The Lightly Worker determines the number of CPU cores available and sets the number
of workers to the same number. If you have a machine with many cores but not so much
memory (e.g. less than 2 GB of memory per core) it can happen that you run out
of memory and you rather want to reduce
@@ -298,7 +298,7 @@ a section about the `credHelpers` they might overrule the authentication.
The `credHelpers` can overrule the key for certain URLs. This can lead to
permission errors pulling the docker image.
-The Lightly docker images are hosted in the European location. Therefore,
+The Lightly Worker docker images are hosted in the European location. Therefore,
it's important that pulling from the `eu.gcr.io` domain is using
the provided credentials.
@@ -314,7 +314,7 @@ There are two ways to solve the problem:
cat container-credentials.json | docker login -u _json_key --password-stdin https://eu.gcr.io
- You can work with two configs. We recommend creating a dedicated folder
- for the Lightly docker config.
+ for the Lightly Worker docker config.
.. code-block:: console
@@ -324,5 +324,5 @@ There are two ways to solve the problem:
docker --config ~/.docker_lightly/ pull eu.gcr.io/boris-250909/lightly/worker:latest
-Whenever you're pulling a new image (e.g. updating Lightly) you would need to
+Whenever you're pulling a new image (e.g. updating the Lightly Worker) you would need to
pass it the corresponding config using the `--config` parameter.
\ No newline at end of file
diff --git a/docs/source/docker/overview.rst b/docs/source/docker/overview.rst
index 0bf10e992..60f523cc9 100644
--- a/docs/source/docker/overview.rst
+++ b/docs/source/docker/overview.rst
@@ -39,7 +39,7 @@ We worked hard to make this happen and are very proud to present you with the fo
* Check for exact duplicates and report them
- * We expose the full lightly OSS framework config
+ * We expose the full Lightly SSL OSS framework config
* Automated reporting of the datasets for each run
diff --git a/docs/source/docker_archive/configuration/configuration.rst b/docs/source/docker_archive/configuration/configuration.rst
index 71d26c1a9..ba0409af9 100644
--- a/docs/source/docker_archive/configuration/configuration.rst
+++ b/docs/source/docker_archive/configuration/configuration.rst
@@ -9,7 +9,7 @@ Configuration
The old workflow described in these docs will not be supported with new Lightly Worker versions above 2.6.
Please switch to our `new documentation page `_ instead.
-As the lightly framework the docker solution can be configured using Hydra.
+Like the Lightly SSL framework, the docker solution can be configured using Hydra.
The example below shows how the `token` parameter can be set when running the docker container.
diff --git a/docs/source/docker_archive/getting_started/first_steps.rst b/docs/source/docker_archive/getting_started/first_steps.rst
index ec6283449..1f5da7b40 100644
--- a/docs/source/docker_archive/getting_started/first_steps.rst
+++ b/docs/source/docker_archive/getting_started/first_steps.rst
@@ -37,7 +37,7 @@ them.
The docker solution can be used as a command-line interface. You run the container, tell it where to find data, and where to store the result. That's it.
-There are various parameters you can pass to the container. We put a lot of effort to also expose the full lightly framework configuration.
+There are various parameters you can pass to the container. We put a lot of effort into also exposing the full Lightly SSL framework configuration.
You could use the docker solution to train a self-supervised model instead of using the Python framework.
Before jumping into the detail let's have a look at some basics.
diff --git a/docs/source/docker_archive/known_issues_faq.rst b/docs/source/docker_archive/known_issues_faq.rst
index 8dcb4939b..350fcfef3 100644
--- a/docs/source/docker_archive/known_issues_faq.rst
+++ b/docs/source/docker_archive/known_issues_faq.rst
@@ -43,7 +43,7 @@ Try to install `nvidia-docker` following the guide
`here `_.
-Shared Memory Error when running Lightly Docker
+Shared Memory Error when running Lightly Worker
-----------------------------------------------
The following error message appears when the docker runtime has not enough
diff --git a/docs/source/docker_archive/overview.rst b/docs/source/docker_archive/overview.rst
index 2d6872898..968332729 100644
--- a/docs/source/docker_archive/overview.rst
+++ b/docs/source/docker_archive/overview.rst
@@ -8,7 +8,7 @@ Docker Archive
Please switch to our `new documentation page `_ instead.
We all know that sometimes when working with ML data we deal with really BIG datasets. The cloud solution is great for exploration, prototyping
-and an easy way to work with lightly. But there is more!
+and an easy way to work with Lightly. But there is more!
.. figure:: images/lightly_docker_overview.png
:align: center
@@ -50,7 +50,7 @@ We worked hard to make this happen and are very proud to present you with the fo
* Check for exact duplicates and report them
- * We expose the full lightly framework config
+ * We expose the full Lightly SSL framework config
* Automated reporting of the datasets for each run
diff --git a/docs/source/getting_started/advanced.rst b/docs/source/getting_started/advanced.rst
index 8a10f6f13..5be80439c 100644
--- a/docs/source/getting_started/advanced.rst
+++ b/docs/source/getting_started/advanced.rst
@@ -3,7 +3,7 @@
Advanced Concepts in Self-Supervised Learning
=============================================
-In this section, we will have a look at some more advanced topics around Lightly.
+In this section, we will have a look at some more advanced topics around Lightly SSL.
Augmentations
-------------
@@ -76,8 +76,8 @@ Some interesting papers regarding invariances in self-supervised learning:
Transforms
^^^^^^^^^^
-Lightly uses `Torchvision transforms `_
-to apply augmentations to images. The Lightly :py:mod:`~lightly.transforms` module
+Lightly SSL uses `Torchvision transforms `_
+to apply augmentations to images. The Lightly SSL :py:mod:`~lightly.transforms` module
exposes transforms for common self-supervised learning methods.
The most important difference compared to transforms for other tasks, such as
@@ -95,9 +95,9 @@ while :ref:`dino` uses two global and multiple, smaller local views per image.
Custom Transforms
^^^^^^^^^^^^^^^^^
-There are three ways how you can customize augmentations in Lightly:
+There are three ways to customize augmentations in Lightly SSL:
-1. Modify the parameters of the :py:mod:`~lightly.transforms` provided by Lightly:
+1. Modify the parameters of the :py:mod:`~lightly.transforms` provided by Lightly SSL:
.. code-block:: python
@@ -171,7 +171,7 @@ Previewing Augmentations
It often can be very useful to understand how the image augmentations we pick affect
the input dataset. We provide a few helper methods that make it very easy to
-preview augmentations using Lightly.
+preview augmentations using Lightly SSL.
.. literalinclude:: code_examples/plot_image_augmentations.py
@@ -212,7 +212,7 @@ our DINO model would see during training.
Models
------
-See the :ref:`models` section for a list of models that are available in Lightly.
+See the :ref:`models` section for a list of models that are available in Lightly SSL.
Do you know a model that should be on this list? Please add an `issue `_
on GitHub :)
@@ -222,14 +222,14 @@ other vision model. When creating a self-supervised learning model you pass it a
backbone. You need to make sure the backbone output dimension matches the input
dimension of the head component for the respective self-supervised model.
-Lightly has a built-in generator for ResNets. However, the model architecture slightly
+Lightly SSL has a built-in generator for ResNets. However, the model architecture slightly
differs from the official ResNet implementation. The difference is in the first few
-layers. Whereas the official ResNet starts with a 7x7 convolution the one from Lightly
+layers. Whereas the official ResNet starts with a 7x7 convolution, the one from Lightly SSL
has a 3x3 convolution.
* The 3x3 convolution variant is more efficient (fewer parameters and faster
processing) and is better suited for small input images (32x32 pixels or 64x64 pixels).
- We recommend using the Lightly variant for cifar10 or running the model on a microcontroller
+ We recommend using the Lightly SSL variant for cifar10 or running the model on a microcontroller
(see https://github.com/ARM-software/EndpointAI/tree/master/ProofOfConcepts/Vision/OpenMvMaskDefaults)
* However, the 7x7 convolution variant is better suited for larger images
since the number of features is smaller due to the stride and additional
@@ -241,7 +241,7 @@ has a 3x3 convolution.
from torch import nn
- # Create a Lightly ResNet.
+ # Create a Lightly SSL ResNet.
from lightly.models import ResNetGenerator
resnet = ResNetGenerator('resnet-18')
# Ignore the classification layer as we want the features as output.
@@ -267,7 +267,7 @@ has a 3x3 convolution.
resnet_simclr = SimCLR(backbone, hidden_dim=512, out_dim=128)
-You can also use **custom backbones** with Lightly. We provide a
+You can also use **custom backbones** with Lightly SSL. We provide a
`colab notebook to show how you can use torchvision or timm models
`_.
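+
+A minimal sketch of the idea, using a torchvision ResNet (the 512 matches
+resnet18's feature dimension; a timm model can be plugged in the same way):
+
+.. code-block:: python
+
+    import torch
+    import torchvision
+
+    from lightly.models.modules import SimCLRProjectionHead
+
+    # Strip the classification layer so the backbone outputs features.
+    resnet = torchvision.models.resnet18()
+    backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
+    projection_head = SimCLRProjectionHead(512, 512, 128)
+
+    images = torch.rand(4, 3, 224, 224)
+    features = backbone(images).flatten(start_dim=1)
+    embeddings = projection_head(features)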
diff --git a/docs/source/getting_started/benchmarks.rst b/docs/source/getting_started/benchmarks.rst
index a9463b643..5fdeca01a 100644
--- a/docs/source/getting_started/benchmarks.rst
+++ b/docs/source/getting_started/benchmarks.rst
@@ -204,7 +204,7 @@ You can reproduce the benchmarks using the following script:
Next Steps
----------
-Now that you understand the performance of the different lightly methods how about
+Now that you understand the performance of the different Lightly SSL methods, how about
looking into a tutorial to implement your favorite model?
- :ref:`input-structure-label`
diff --git a/docs/source/getting_started/command_line_tool.rst b/docs/source/getting_started/command_line_tool.rst
index bf336ff35..93c6d4fcc 100644
--- a/docs/source/getting_started/command_line_tool.rst
+++ b/docs/source/getting_started/command_line_tool.rst
@@ -3,7 +3,7 @@
Command-line tool
=================
-The Lightly framework provides you with a command-line interface (CLI) to train
+The Lightly SSL framework provides you with a command-line interface (CLI) to train
self-supervised models and create embeddings without having to write a single
line of code.
@@ -24,16 +24,16 @@ the CLI.
-Check the installation of lightly
------------------------------------
-To see if the lightly command-line tool was installed correctly, you can run the
-following command which will print the installed lightly version:
+Check the installation of Lightly SSL
+-------------------------------------
+To see if the Lightly SSL command-line tool was installed correctly, you can run the
+following command, which will print the version of the installed Lightly SSL package:
.. code-block:: bash
lightly-version
-If lightly was installed correctly, you should see something like this:
+If Lightly SSL was installed correctly, you should see something like this:
.. code-block:: bash
diff --git a/docs/source/getting_started/distributed_training.rst b/docs/source/getting_started/distributed_training.rst
index e1b140c28..30daefd7b 100644
--- a/docs/source/getting_started/distributed_training.rst
+++ b/docs/source/getting_started/distributed_training.rst
@@ -3,7 +3,7 @@
Distributed Training
====================
-Lightly supports training your model on multiple GPUs using Pytorch Lightning
+Lightly SSL supports training your model on multiple GPUs using PyTorch Lightning
and Distributed Data Parallel (DDP) training. You can find reference
implementations for all our models in the :ref:`models` section.
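+
+A minimal sketch of the PyTorch Lightning side (`model` stands for any
+LightningModule wrapping one of the reference implementations):
+
+.. code-block:: python
+
+    import pytorch_lightning as pl
+
+    trainer = pl.Trainer(
+        max_epochs=100,
+        devices=2,           # number of GPUs
+        accelerator="gpu",
+        strategy="ddp",      # Distributed Data Parallel
+    )
+    # trainer.fit(model, dataloader)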
@@ -12,7 +12,7 @@ Training with multiple gpus is also available from the command line: :ref:`cli-t
For details on distributed training we recommend the following pages:
- `Pytorch Distributed Overview