Jellyfin is the Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, providing media from a dedicated server to end-user devices via multiple apps.

This repository contains operating system and Docker packaging for Jellyfin, for use by manual builders and our release CI system with GitHub workflows. All packaging has been removed from the main code repositories for the Jellyfin Server and Primary WebUI and moved here.
To build Jellyfin packages for yourself, follow this quickstart guide. You will need to be running on a Linux system, preferably Debian- or Ubuntu-based, with Docker, Python 3, and the Python packages PyYAML and GitPython (`python3-yaml` and `python3-git` in Debian). Other systems, including WSL, are untested.
- Install Docker on your system. The build scripts leverage Docker containers to perform clean builds and avoid contaminating the host system with dependencies.
- Clone this repository somewhere on your system and enter it.
- Run `git submodule update --init` to check out the submodules (`jellyfin-server`, `jellyfin-web`).
- Run `./checkout.py` to update the submodules to the correct `HEAD`s. This command takes one argument, the tag or branch (e.g. `master`) of the repositories to check out; if nothing is specified, `master` is assumed. For example, `./checkout.py master` checks out the current `master` branch of both `jellyfin-server` and `jellyfin-web`, `./checkout.py v10.8.13` checks out the `v10.8.13` tag of both, etc. If a tag is used and one (or more) of the repositories is missing the tag, this command will error out.
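The steps above can be collected into one copy-paste sketch. The clone URL below is an assumption based on the project name, and the `DRYRUN` guard is purely illustrative:

```shell
# The quickstart steps above, collected into one sketch. DRYRUN=1 (the default
# here) only prints each command; set DRYRUN= (empty) to execute them for real.
# The clone URL is an assumption based on the project name.
DRYRUN="${DRYRUN-1}"
run() { ${DRYRUN:+echo} "$@"; }

run git clone https://github.com/jellyfin/jellyfin-packaging.git
run cd jellyfin-packaging
run git submodule update --init
run ./checkout.py v10.8.13   # or "master", or any tag/branch both submodules share
```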
If you want a non-Docker output (a `.deb` package, `tar`/`zip` archive, etc.), follow this process:
- Run `./build.py`. This command takes up to 4 arguments, depending on what you're trying to build:
  - The first argument is the version you want to tag your build as. For our official releases, we use either dates for unstable builds (`YYYYMMDDHH` numerical format, or `auto` for autogeneration) or the tag without `v` for stable release builds (`10.8.13`, `10.9.0`, etc.), but you can use any version tag you wish here.
  - The second argument is the "platform" you want to build for. The available options are listed as top-level keys in the `build.yaml` configuration file or in the `-h` help output.
  - The third argument is, for all platforms except `portable` (.NET portable), the architecture you want to build for. For each platform, the available architectures can be found as the keys under `archmaps` in the `build.yaml` configuration file.
  - The fourth argument is exclusive to `debian` and `ubuntu` `.deb` packages, and is the release codename of Debian or Ubuntu to build for. For each platform, the available releases can be found as the keys under `releases` in the `build.yaml` configuration file.

  NOTE: Your running user must have Docker privileges, or you should run `build.py` as root/with `sudo`.

- The output binaries will be in the `out/` directory, ready for use. The exact format varies depending on the build type, and can be found, for each archive-based platform, as the values of the `archivetypes` key in the `build.yaml` configuration file.
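The two documented version formats can be sketched in shell; the exact `auto` autogeneration logic lives in `build.py`, and the date form below is assumed to match it:

```shell
# Unstable builds use a YYYYMMDDHH date; stable builds use the git tag without "v".
unstable_version="$(date +%Y%m%d%H)"   # e.g. 2024050100
tag="v10.8.13"                         # an example stable release tag
stable_version="${tag#v}"              # strip the leading "v"
echo "$stable_version"                 # → 10.8.13
```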
Build `.deb` packages for Debian 12 "Bookworm" `amd64`:

```shell
./build.py auto debian amd64 bookworm
```

Build Linux `.tar.gz` archives for `arm64-musl`:

```shell
./build.py auto linux arm64-musl
```

Build Windows `.zip` for `amd64`:

```shell
./build.py auto windows amd64
```

Build a .NET portable `.zip`:

```shell
./build.py auto portable
```
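A batch of related builds can be scripted around the same invocation. The codenames below are illustrative; the valid set is whatever appears under `releases` in `build.yaml`:

```shell
# Hypothetical batch: run the .deb build for several release codenames.
# The echo indirection only prints each command; drop it to actually build.
for release in bookworm trixie noble; do
  cmd="./build.py auto debian amd64 $release"
  echo "$cmd"
done
```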
If you want a Docker image output, follow this process:
- Run `./build.py`. This command takes up to 4 arguments specific to Docker builds:
  - The first argument is the version you want to tag your build as. For our official releases, we use either dates for unstable builds (`YYYYMMDDHH` numerical format, or `auto` for autogeneration) or the tag without `v` for stable release builds (`10.8.13`, `10.9.0`, etc.), but you can use any version tag you wish here.
  - The second argument is the "platform" you want to build for. For Docker images, this should be `docker`.
  - The third argument is the architecture you wish to build for. This argument is optional; not providing it will build images for all supported architectures (sequentially).
  - The fourth argument is `--local`, which should be provided to prevent the script from trying to generate image manifests and push the resulting images to our repositories.
- The output container image(s) will be present in your `docker image ls` as `jellyfin/jellyfin` with the tag(s) `<jellyfin_version>-<build_arch>`.
Build an `amd64` Docker image:

```shell
./build.py auto docker amd64 --local
```
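The resulting image reference follows the documented `<jellyfin_version>-<build_arch>` tag scheme; the version below is an example date build, and the `docker run` paths are hypothetical:

```shell
# Compose the image reference for a locally built image.
version="2024050100"   # example unstable (date) version
arch="amd64"
image="jellyfin/jellyfin:${version}-${arch}"
echo "$image"   # → jellyfin/jellyfin:2024050100-amd64

# A locally built image could then be started along these lines (hypothetical paths):
# docker run -d -p 8096:8096 -v /srv/media:/media "$image"
```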
Inside this repository are 7 major components:
- Submodules for the `jellyfin` (as `jellyfin-server`) and `jellyfin-web` repositories. These are dynamic submodules; the `checkout.py` script will check them out to the required `HEAD` on each build, and thus their actual committed value is irrelevant. Nonetheless, they should be bumped occasionally just to avoid excessive checkout times later.
- Debian/Ubuntu packaging configurations (under `debian`). These build the 3 Jellyfin packages (the `jellyfin` metapackage, the `jellyfin-server` core server, and the `jellyfin-web` web client) from a single Dockerfile and helper script (`build.sh`) under `debian/docker/`. Future packages (e.g. Vue) may be added here as well if and when they are promoted to a production build alongside the others, following one consistent versioning scheme.
- Docker image builder (under `docker`). Like the above, this builds the combined Docker images with a single Dockerfile, as well as preparing the various manifests needed to push the images to the container repos.
- Portable image builder (under `portable`), which covers all the "archive" builds (.NET portable, Linux, Windows, and macOS), again from a single Dockerfile.
- NuGet package builder, to prepare NuGet packages for consumption by plugins and 3rd-party clients.
- Script infrastructure to coordinate builds (`build.py`). This script takes basic arguments and, using its internal logic, fires the correct Dockerized builds for the given build type.
- The GitHub Actions CI to build all of the above for every supported version and architecture.
- Unified packaging: all packaging is in this repository (vs. within the `jellyfin-server` and `jellyfin-web` repositories). This helps ensure two things:
  - There is a single source of truth for packaging. Previously, there were at least 3 sources of truth, and this became very confusing to update.
  - Packaging can be run and managed independently of actual code, simplifying the release and build process.
- GitHub Actions for CI: all builds use the GitHub Actions system instead of Azure DevOps. This helps ensure that CI is visible in a "single pane of glass" (GitHub) and is easier to manage long-term.
- Python script-based builds: building actually happens via the `build.py` script. This helps reduce the complexity of the builds by ensuring that the logic to generate specific builds is handled in one place, in a consistent, well-known language, instead of in the CI definitions.
- Git submodules to handle code (vs. cross-repo builds). This ensures that the code checked out is consistent between the two repositories, and allows for the unified builds described below without extra steps to combine them.
- Remote manual-only triggers: CI workers are triggered by a remote bot. This reduces the complexity of triggering builds; while it can be done manually in this repo, using an external bot allows for more release wrapper actions to occur before triggering builds.
- Unified package build: this entire repo is the "source" and the source package is named "jellyfin". This was chosen to simplify the source package system and simplify building. Now, there is only a single "jellyfin" source package rather than 2. There may be more in the future as other repos are included (e.g. "jellyfin-ffmpeg", "jellyfin-vue", etc.).
- Dockerized build (`debian/docker/`): the build is run inside a Docker container that matches the target OS release. This was chosen to ensure a clean slate for every build, as well as to enable release-specific builds due to the complexities of our shared dependencies (e.g. `libssl`).
- Per-release/version builds: package versions contain the specific OS version (e.g. `-deb11`, `-ubu2204`). This enables support for different builds and packages for each OS release, simplifying shared dependency handling as mentioned above.
- Ubuntu LTS-only support: non-LTS Ubuntu versions have been dropped. This simplifies our builds, as we do not need to track many 9-month-only releases of Ubuntu, and it also reduces the build burden. Users of non-LTS Ubuntu releases can use either the closest Ubuntu LTS version or Docker containers instead.
- Signing of Debian packages with `debsigs`. This was suggested in jellyfin#14 and was not something we had ever done, but it has become trivial with this CI. It allows for end-user verification of the ownership and integrity of manually downloaded binary `.deb` files obtained from the repository, using the `debsigs-verify` command and the policy detailed in that issue. Note that since Debian as a whole (i.e. `dpkg`, `apt`, etc.) does not enforce package signing at this time, enabling this for the repository is not possible; conventional repository signatures (using the same signing key) are considered sufficient.
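A verification pass might look like the following sketch. The package filename is a made-up example, and the command assumes the signing policy from jellyfin#14 has been installed locally:

```shell
# Hypothetical spot-check of a manually downloaded package.
pkg="jellyfin-server_10.8.13+deb12_amd64.deb"   # example filename only
if command -v debsigs-verify >/dev/null 2>&1; then
  debsigs-verify "$pkg"
else
  echo "debsigs-verify not available; install the debsigs tooling first"
fi
```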
- Single unified Docker build: the entirety of our Docker images is built as one container from one Dockerfile. This was chosen to keep our Docker builds as simple as possible, without requiring 2 intervening images (as was the case with our previous CI).
- Push to both DockerHub and GHCR (GitHub Packages). This ensures flexibility for container users to fetch the containers from whichever repository they choose.
- Seamless rebuilds: the root images are appended with the build date to keep them unique. This ensures we can trigger rebuilds of the Docker containers arbitrarily, in response to things like base OS updates or packaging changes (e.g. a new version of the Intel compute engine).
- Based on Debian 12 ("Bookworm"), the latest base Debian release. While possibly not as up-to-date as Ubuntu, this release is quite current and should cover all the major compatibility issues we had with the old images based on Debian 11.
- Single unified build: the entirety of the output package is built in one container from one Dockerfile. This was chosen to keep the portable builds as simple as possible, without requiring complex archive combining (as was the case with our previous CI).
- Multiple archive type support (`.tar.gz` vs. `.zip`). The output archive type is chosen based on the build target, with Portable providing both for maximum compatibility, Windows providing `.zip`, and Linux and macOS providing `.tar.gz`. This can be changed later, for example to add more formats (e.g. `.tar.xz`) or to change which target produces which, without major complications.
- Full architecture support. The portable builds now support all major architectures, specifically adding `arm64` Windows builds (someone out there certainly uses them), and making it quite trivial to add new architectures in the future if needed.
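The target-to-archive-type mapping described above can be sketched as a small helper; the target names are assumed to mirror the `build.py` platform names, and the authoritative mapping lives under `archivetypes` in `build.yaml`:

```shell
# Sketch of the documented mapping from build target to output archive type(s).
archive_types() {
  case "$1" in
    portable)     echo ".tar.gz .zip" ;;   # both, for maximum compatibility
    windows)      echo ".zip" ;;
    linux|macos)  echo ".tar.gz" ;;
    *)            return 1 ;;
  esac
}

archive_types windows   # → .zip
```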