Commit

Jeremy lig 3948 make ssl docs more distinct (#1394)
closes lig-3948
- replace logos to highlight that we are talking about `Lightly SSL` and not about `"Lightly"`
- replace all self-references to `lightly` with `Lightly SSL`, or clarify where needed that it's the `Lightly Worker`
- add a banner referring to the `Lightly Worker` docs
- polish the footer and add links to GitHub and the worker docs
japrescott authored Sep 14, 2023
1 parent 34390b5 commit 100afe6
Showing 37 changed files with 212 additions and 172 deletions.
12 changes: 7 additions & 5 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,25 +1,27 @@

![Lightly Logo](docs/logos/lightly_logo_crop.png)
![Lightly SSL self-supervised learning Logo](docs/logos/lightly_SSL_logo_crop.png)

![GitHub](https://img.shields.io/github/license/lightly-ai/lightly)
![Unit Tests](https://github.com/lightly-ai/lightly/workflows/Unit%20Tests/badge.svg)
[![PyPI](https://img.shields.io/pypi/v/lightly)](https://pypi.org/project/lightly/)
[![Downloads](https://static.pepy.tech/badge/lightly)](https://pepy.tech/project/lightly)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

Lightly is a computer vision framework for self-supervised learning.
Lightly SSL is a computer vision framework for self-supervised learning.

- [Documentation](https://docs.lightly.ai/self-supervised-learning/)
- [Github](https://github.com/lightly-ai/lightly)
- [Discord](https://discord.gg/xvNJW94) (We have weekly paper sessions!)

We also built a whole platform on top, with additional features for active learning
and data curation. If you're interested in the platform, check out [lightly.ai](https://www.lightly.ai).
We've also built a whole platform on top, with additional features for active learning
and [data curation](https://docs.lightly.ai/docs/what-is-lightly). If you're interested in the
Lightly Worker Solution to easily process millions of samples and run [powerful algorithms](https://docs.lightly.ai/docs/selection)
on your data, check out [lightly.ai](https://www.lightly.ai). It's free to get started!


## Features

This framework offers the following features:
This self-supervised learning framework offers the following features:

- Modular framework, which exposes low-level building blocks such as loss functions and
model heads.
Binary file added docs/logos/lightly_SSL_logo_crop.png
Binary file added docs/logos/lightly_SSL_logo_crop_white_text.png
8 changes: 7 additions & 1 deletion docs/source/_templates/footer.html
@@ -26,7 +26,13 @@
{%- else %}
{% set copyright = copyright|e %}
<!-- Adapted to include link to website -->
&copy; {% trans %}Copyright{% endtrans %} {{ copyright_year }}, <a href="{{ website_url }}">{{ copyright }}</a>
&copy; {% trans %}Copyright{% endtrans %} {{ copyright_year }}
&nbsp;|&nbsp;<a href="{{ website_url }}" target="_blank">{{ copyright }}</a>
&nbsp;|&nbsp;<a href="https://github.com/lightly-ai/lightly" target="_blank">
<img width="24px" alt="Lightly SSL source code" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iOTgiIGhlaWdodD0iOTYiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PHBhdGggZmlsbC1ydWxlPSJldmVub2RkIiBjbGlwLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik00OC44NTQgMEMyMS44MzkgMCAwIDIyIDAgNDkuMjE3YzAgMjEuNzU2IDEzLjk5MyA0MC4xNzIgMzMuNDA1IDQ2LjY5IDIuNDI3LjQ5IDMuMzE2LTEuMDU5IDMuMzE2LTIuMzYyIDAtMS4xNDEtLjA4LTUuMDUyLS4wOC05LjEyNy0xMy41OSAyLjkzNC0xNi40Mi01Ljg2Ny0xNi40Mi01Ljg2Ny0yLjE4NC01LjcwNC01LjQyLTcuMTctNS40Mi03LjE3LTQuNDQ4LTMuMDE1LjMyNC0zLjAxNS4zMjQtMy4wMTUgNC45MzQuMzI2IDcuNTIzIDUuMDUyIDcuNTIzIDUuMDUyIDQuMzY3IDcuNDk2IDExLjQwNCA1LjM3OCAxNC4yMzUgNC4wNzQuNDA0LTMuMTc4IDEuNjk5LTUuMzc4IDMuMDc0LTYuNi0xMC44MzktMS4xNDEtMjIuMjQzLTUuMzc4LTIyLjI0My0yNC4yODMgMC01LjM3OCAxLjk0LTkuNzc4IDUuMDE0LTEzLjItLjQ4NS0xLjIyMi0yLjE4NC02LjI3NS40ODYtMTMuMDM4IDAgMCA0LjEyNS0xLjMwNCAxMy40MjYgNS4wNTJhNDYuOTcgNDYuOTcgMCAwIDEgMTIuMjE0LTEuNjNjNC4xMjUgMCA4LjMzLjU3MSAxMi4yMTMgMS42MyA5LjMwMi02LjM1NiAxMy40MjctNS4wNTIgMTMuNDI3LTUuMDUyIDIuNjcgNi43NjMuOTcgMTEuODE2LjQ4NSAxMy4wMzggMy4xNTUgMy40MjIgNS4wMTUgNy44MjIgNS4wMTUgMTMuMiAwIDE4LjkwNS0xMS40MDQgMjMuMDYtMjIuMzI0IDI0LjI4MyAxLjc4IDEuNTQ4IDMuMzE2IDQuNDgxIDMuMzE2IDkuMTI2IDAgNi42LS4wOCAxMS44OTctLjA4IDEzLjUyNiAwIDEuMzA0Ljg5IDIuODUzIDMuMzE2IDIuMzY0IDE5LjQxMi02LjUyIDMzLjQwNS0yNC45MzUgMzMuNDA1LTQ2LjY5MUM5Ny43MDcgMjIgNzUuNzg4IDAgNDguODU0IDB6IiBmaWxsPSIjMjQyOTJmIi8+PC9zdmc+" />
Source Code
</a>
&nbsp;|&nbsp;<a href="https://docs.lightly.ai" target="_blank">Lightly Worker Solution documentation</a>
{%- endif %}
{%- endif %}

30 changes: 30 additions & 0 deletions docs/source/_templates/layout.html
@@ -8,6 +8,36 @@
We need this to override the footer
-->
{%- block content %}
<style>
.wy-nav-content-wrap {
background: #f0f0f0;
}
.wy-nav-content{
position: relative;
}
.lightly-worker-banner{
background: #092643;
height: 40px;
width: 100%;
position: absolute;
padding: 0 3.236em;
color: #fcfcfc; /* would be the actual text on dark color -> #d9d9d9 */
top: 0;
left: 0;
display: flex;
align-items: center;
z-index: 9999;
}
.lightly-worker-banner a{
color: #2CC2BD;
}
.rst-content{
margin-top: 40px;
}
</style>
<div class="lightly-worker-banner">
<span>Looking to easily do active learning on millions of samples? See our <a href="https://docs.lightly.ai" target="_blank">Lightly Worker</a> docs.</span>
</div>
{% if theme_style_external_links|tobool %}
<div class="rst-content style-external-links">
{% else %}
6 changes: 3 additions & 3 deletions docs/source/conf.py
@@ -22,10 +22,10 @@
# -- Project information -----------------------------------------------------

project = "lightly"
copyright_year = "2020"
copyright_year = "2020-<script>document.write((new Date()).getFullYear())</script>"
copyright = "Lightly AG"
website_url = "https://www.lightly.ai/"
author = "Philipp Wirth, Igor Susmelj"
author = "Lightly Team"

# The full version, including alpha/beta/rc tags
release = lightly.__version__
@@ -98,7 +98,7 @@

html_favicon = "favicon.png"

html_logo = "../logos/lightly_logo_crop_white_text.png"
html_logo = "../logos/lightly_SSL_logo_crop_white_text.png"

# Exposes variables so that they can be used by django
html_context = {
6 changes: 3 additions & 3 deletions docs/source/docker/advanced/datapool.rst
@@ -3,7 +3,7 @@
Datapool
=================

Lightly has been designed in a way that you can incrementally build up a
Lightly Worker has been designed so that you can incrementally build up a
dataset for your project. The software automatically keeps track of the
representations of previously selected samples and uses this information
to pick new samples in order to maximize the quality of the final dataset.
@@ -43,7 +43,7 @@ has the following advantages:
If you want to search all data in your bucket for new samples
instead of only newly added data,
then set :code:`'datasource.process_all': True` in your worker config. This has the
same effect as creating a new Lightly dataset and running the Lightly Worker from scratch
same effect as creating a new dataset and running the Lightly Worker from scratch
on the full dataset. We process all data instead of only the newly added ones.


@@ -67,7 +67,7 @@ first time.
|-- passageway1-c1.avi
`-- terrace1-c0.avi
Let's create a Lightly dataset which uses that bucket (choose your tab - S3, GCS or Azure):
Let's create a dataset which uses that bucket (choose your tab - S3, GCS or Azure):

.. tabs::
.. tab:: AWS S3 Datasource
14 changes: 7 additions & 7 deletions docs/source/docker/advanced/datasource_metadata.rst
@@ -3,7 +3,7 @@
Add Metadata to a Datasource
===============================

Lightly can make use of metadata collected alongside your images or videos. Provided,
Lightly Worker can make use of metadata collected alongside your images or videos. Provided
metadata can be used to steer the selection process and to analyze the selected dataset
in the Lightly Platform.

@@ -45,7 +45,7 @@ Metadata Schema
The schema defines the format of the metadata and helps the Lightly Platform to correctly identify
and display different types of metadata.

You can provide this information to Lightly by adding a `schema.json` to the
You can provide this information to Lightly Worker by adding a `schema.json` to the
`.lightly/metadata` directory. The `schema.json` file must contain a list of
configuration entries. Each of the entries is a dictionary with the following keys:

@@ -105,9 +105,9 @@ of the images we have collected. A possible schema could look like this:
Metadata Files
--------------
Lightly requires a single metadata file per image or video. If an image or video has no corresponding metadata file,
Lightly assumes the default value from the `schema.json`. If a metadata file is provided for a full video,
Lightly assumes that the metadata is valid for all frames in that video.
Lightly Worker requires a single metadata file per image or video. If an image or video has no corresponding metadata file,
Lightly Worker assumes the default value from the `schema.json`. If a metadata file is provided for a full video,
Lightly Worker assumes that the metadata is valid for all frames in that video.

To provide metadata for an image or a video, place a metadata file with the same name
as the image or video in the `.lightly/metadata` directory but change the file extension to
@@ -130,8 +130,8 @@ as the image or video in the `.lightly/metadata` directory but change the file e
When working with videos it's also possible to provide metadata on a per-frame basis.
Then, Lightly requires a metadata file per frame. If a frame has no corresponding metadata file,
Lightly assumes the default value from the `schema.json`. Lightly uses a naming convention to
Then, Lightly Worker requires a metadata file per frame. If a frame has no corresponding metadata file,
Lightly Worker assumes the default value from the `schema.json`. Lightly Worker uses a naming convention to
identify frames: The filename of a frame consists of the video filename, the frame number
(padded to the length of the number of frames in the video), the video format separated
by hyphens. For example, for a video with 200 frames, the frame number will be padded
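The frame naming convention described above can be sketched in plain Python. This is a sketch only: `frame_metadata_filename` is a hypothetical helper (not part of the Lightly SSL or Lightly Worker API), and the `.json` extension for metadata files is an assumption based on the `schema.json` format used elsewhere in this section.

```python
def frame_metadata_filename(video_filename, frame_index, total_frames):
    # Split e.g. "passageway1-c1.avi" into the stem "passageway1-c1"
    # and the video format "avi".
    stem, _, fmt = video_filename.rpartition(".")
    # Pad the frame number to the width of the total frame count
    # (a video with 200 frames -> 3 digits).
    padded = str(frame_index).zfill(len(str(total_frames)))
    # Hyphen-separated: <video filename>-<frame number>-<video format>,
    # with a .json extension for the metadata file.
    return f"{stem}-{padded}-{fmt}.json"

# Frame 99 of a 200-frame video from the bucket listing above.
print(frame_metadata_filename("passageway1-c1.avi", 99, 200))
```

The same padding logic applies regardless of video length: a 1000-frame video would pad frame numbers to four digits.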
26 changes: 13 additions & 13 deletions docs/source/docker/advanced/datasource_predictions.rst
@@ -3,9 +3,9 @@
Add Predictions to a Datasource
===============================

Lightly can not only use images you provided in a datasource, but also predictions of a ML model on your images.
Lightly Worker can not only use images you provided in a datasource, but also predictions of an ML model on your images.
They are used for active learning for selecting images based on the objects in them.
Furthermore, object detection predictions can be used running Lightly on object level.
Furthermore, object detection predictions can be used to run the Lightly Worker on object level.
By providing the predictions in the datasource,
you have full control over them and they scale well to millions of samples.
Furthermore, if you add new samples to your datasource, you can simultaneously
@@ -62,8 +62,8 @@ and an object detection task). All of the files are explained in the next sectio

Prediction Tasks
----------------
To let Lightly know what kind of prediction tasks you want to work with, Lightly
needs to know their names. It's very easy to let Lightly know which tasks exist:
To let the Lightly Worker know what kind of prediction tasks you want to work with, it
needs to know their names. It's very easy to register them:
simply add a `tasks.json` to your Lightly bucket, stored in the subdirectory `.lightly/predictions/`.

The `tasks.json` file must include a list of your task names which must match name
@@ -116,7 +116,7 @@ we can specify which subfolders contain relevant predictions in the `tasks.json`
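Producing such a `tasks.json` can be sketched in a few lines of Python. The task names below are hypothetical placeholders; each must match the name of the corresponding prediction subfolder under `.lightly/predictions/` in your bucket.

```python
import json

# Hypothetical task names -- replace with the names of your own
# prediction subfolders under .lightly/predictions/.
task_names = ["weather-classification", "vehicles-object-detection"]

# tasks.json is simply a JSON list of task names.
tasks_json = json.dumps(task_names, indent=4)
print(tasks_json)
# Upload the result to your bucket as .lightly/predictions/tasks.json
```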

Prediction Schema
-----------------
For Lightly it's required to store a prediction schema. The schema defines the
Storing a prediction schema is required. The schema defines the
format of the predictions and helps the Lightly Platform to correctly identify
and display classes. It also helps to prevent errors as all predictions which
are loaded are validated against this schema.
@@ -127,7 +127,7 @@ all the categories and their corresponding ids. For other tasks, such as keypoin
detection, it can be useful to store additional information like which keypoints
are connected with each other by an edge.

You can provide all this information to Lightly by adding a `schema.json` to the
You can provide all this information to the Lightly Worker by adding a `schema.json` to the
directory of the respective task. The schema.json file must have a key `categories`
with a corresponding list of categories following the COCO annotation format.
It must also have a key `task_type` indicating the type of the predictions.
@@ -167,10 +167,10 @@ The three classes are sunny, clouded, and rainy.
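For illustration, a `schema.json` for the weather-classification example with the classes sunny, clouded, and rainy could be built as follows. This is a sketch under assumptions: the exact `task_type` string accepted by the Lightly Worker for classification tasks is assumed here, so check the accepted values for your task.

```python
import json

# Sketch of .lightly/predictions/weather-classification/schema.json:
# categories follow the COCO annotation format (id + name), and
# task_type indicates the type of the predictions (value assumed).
schema = {
    "task_type": "classification",
    "categories": [
        {"id": 0, "name": "sunny"},
        {"id": 1, "name": "clouded"},
        {"id": 2, "name": "rainy"},
    ],
}
print(json.dumps(schema, indent=4))
```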
Prediction Files
----------------
Lightly requires a **single prediction file per image**. The file should be a .json
The Lightly Worker requires a **single prediction file per image**. The file should be a .json
following the format defined under :ref:`prediction-format` and stored in the subdirectory
`.lightly/predictions/${TASK_NAME}` in the storage bucket the dataset was configured with.
In order to make sure Lightly can match the predictions to the correct source image,
In order to make sure the Lightly Worker can match the predictions to the correct source image,
it's necessary to follow the naming convention:

.. code-block:: bash
@@ -189,7 +189,7 @@ it's necessary to follow the naming convention:

Prediction Files for Videos
---------------------------
When working with videos, Lightly requires a prediction file per frame. Lightly
When working with videos, the Lightly Worker requires a prediction file per frame. It
uses a naming convention to identify frames: The filename of a frame consists of
the video filename, the video format, and the frame number (padded to the length
of the number of frames in the video) separated by hyphens. For example, for a
@@ -363,7 +363,7 @@ belonging to that category. Optionally, a list of probabilities can be provided
containing a probability for each category, indicating the likeliness that the
segment belongs to that category.

To kickstart using Lightly with semantic segmentation predictions we created an
To kickstart using the Lightly Worker with semantic segmentation predictions, we created an
example script that takes model predictions and converts them to the correct
format :download:`semantic_segmentation_inference.py <code_examples/semantic_segmentation_inference.py>`

@@ -403,13 +403,13 @@ following function:
Segmentation models oftentimes output a probability for each pixel and category.
Storing such probabilities can quickly result in large file sizes if the input
images have a high resolution. To reduce storage requirements, Lightly expects
images have a high resolution. To reduce storage requirements, Lightly Worker expects
only a single score or probability per segmentation. If you have scores or
probabilities for each pixel in the image, you have to first aggregate them
into a single score/probability. We recommend taking either the median or mean
score/probability over all pixels within the segmentation mask. The example
below shows how pixelwise segmentation predictions can be converted to the
format required by Lightly.
format required by the Lightly Worker.

.. code-block:: python
@@ -522,7 +522,7 @@ Don't forget to change these 2 parameters at the top of the script.
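The median/mean aggregation recommended above can be sketched in plain Python. `aggregate_mask_probability` is a hypothetical helper, not part of the Lightly SSL API; per-pixel probabilities and the segmentation mask are represented here as nested lists for simplicity.

```python
from statistics import mean, median

def aggregate_mask_probability(pixel_probs, mask, reduce=mean):
    # pixel_probs: 2D nested list of per-pixel probabilities for one category.
    # mask: 2D nested list of 0/1 flags marking pixels inside the segment.
    # Collect the probabilities of all pixels within the segmentation mask.
    inside = [
        p
        for prob_row, mask_row in zip(pixel_probs, mask)
        for p, in_segment in zip(prob_row, mask_row)
        if in_segment
    ]
    # Reduce them to a single score; mean by default, median also works.
    return reduce(inside)

probs = [[0.9, 0.8], [0.1, 0.6]]
mask = [[1, 1], [0, 1]]
print(aggregate_mask_probability(probs, mask))          # mean over masked pixels
print(aggregate_mask_probability(probs, mask, median))  # median over masked pixels
```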
Creating Prediction Files for Videos
-------------------------------------

Lightly expects one prediction file per frame in a video. Predictions can be
The Lightly Worker expects one prediction file per frame in a video. Predictions can be
created following the Python example code below. Make sure that `PyAV <https://pyav.org/>`_
is installed on your system for it to work correctly.

4 changes: 2 additions & 2 deletions docs/source/docker/advanced/load_model_from_checkpoint.rst
@@ -3,8 +3,8 @@
Load Model from Checkpoint
==========================

The Lightly worker can be used to :ref:`train a self-supervised model on your data. <training-a-self-supervised-model>`
Lightly saves the weights of the model after training to a checkpoint file in
The Lightly Worker can be used to :ref:`train a self-supervised model on your data. <training-a-self-supervised-model>`
Lightly Worker saves the weights of the model after training to a checkpoint file in
:code:`output_dir/lightly_epoch_X.ckpt`. This checkpoint can then be further
used to, for example, train a classifier model on your dataset. The code below
demonstrates how the checkpoint can be loaded:
18 changes: 9 additions & 9 deletions docs/source/docker/advanced/object_level.rst
@@ -2,7 +2,7 @@

Object Level
============
Lightly does not only work on full images but also on an object level. This
The Lightly Worker works not only on full images but also on an object level. This
workflow is especially useful for datasets containing small objects or multiple
objects in each image and provides the following benefits over the full image
workflow:
@@ -21,7 +21,7 @@ workflow:

Prerequisites
-------------
In order to use the object level workflow with Lightly, you will need the
In order to use the object level workflow with the Lightly Worker, you will need the
following things:

- The installed Lightly Worker (see :ref:`docker-setup`)
@@ -31,13 +31,13 @@ following things:

.. note::

If you don't have any predictions available, you can use the Lightly pretagging
If you don't have any predictions available, you can use the Lightly Worker pretagging
model. See :ref:`Pretagging <object-level-pretagging>` for more information.


Predictions
-----------
Lightly needs to know which objects to process. This information is provided
The Lightly Worker needs to know which objects to process. This information is provided
by uploading a set of object predictions to the datasource (see :ref:`docker-datasource-predictions`).
Let's say we are working with a dataset containing different types of vehicles
and used an object detection model to find possible vehicle objects in the
@@ -170,7 +170,7 @@ code to spin up a Lightly Worker
Padding
-------
Lightly makes it possible to add a padding around your bounding boxes. This allows
The Lightly Worker makes it possible to add padding around your bounding boxes. This allows
for better visualization of the cropped images in the web-app and can improve the
embeddings of the objects as the embedding model sees the objects in context. To add
padding, simply specify `object_level.padding=X` where `X` is the padding relative
@@ -239,9 +239,9 @@ properties of your dataset and reveal things like:

These hidden biases are hard to find in a dataset if you only rely on full
images or the coarse vehicle type predicted by the object detection model.
Lightly helps you to identify them quickly and assists you in monitoring and
The Lightly Worker helps you to identify them quickly and assists you in monitoring and
improving the quality of your dataset. After an initial exploration you can now
take further steps to enhance the dataset using one of the workflows Lightly
take further steps to enhance the dataset using one of the workflows the Lightly Worker
provides:

- Select a subset of your data using our :ref:`Sampling Algorithms <plaform-sampling>`
@@ -252,7 +252,7 @@ provides:
Multiple Object Level Runs
--------------------------
You can run multiple object level workflows using the same dataset. To start a
new run, please select your original full image dataset in the Lightly Web App
new run, please select your original full image dataset in the Lightly Platform
and schedule a new run from there. If you are running the Lightly Worker from Python or
over the API, you have to set the `dataset_id` configuration option to the id of
the original full image dataset. In both cases make sure that the run is *not*
@@ -261,7 +261,7 @@ started from the crops dataset as this is not supported!
You can control to which crops dataset the newly selected object crops are
uploaded by setting the `object_level.crop_dataset_name` configuration option.
By default this option is not set and if you did not specify it in the first run,
you can also omit it in future runs. In this case Lightly will automatically
you can also omit it in future runs. In this case the Lightly Worker will automatically
find the existing crops dataset and add the new crops to it. If you want to
upload the crops to a new dataset or have set a custom crop dataset name in a
previous run, then set the `object_level.crop_dataset_name` option to a new
2 changes: 1 addition & 1 deletion docs/source/docker/advanced/overview.rst
@@ -1,6 +1,6 @@
Advanced
===================================
Here you learn more advanced usage patterns of Lightly Docker.
Here you learn more advanced usage patterns of the Lightly Worker.


.. toctree::
