diff --git a/docs/source/index.rst b/docs/source/index.rst
index a8ce66c..a41136b 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -34,6 +34,7 @@ Welcome to the Viking Documentation!
using_viking/terminal_multiplexing
using_viking/virtual_desktops
using_viking/virtual_environments
+ using_viking/x11_forwarding
using_viking/project_folders
.. using_viking/apptainer
diff --git a/docs/source/using_viking/x11_forwarding.rst b/docs/source/using_viking/x11_forwarding.rst
new file mode 100644
index 0000000..924b84a
--- /dev/null
+++ b/docs/source/using_viking/x11_forwarding.rst
@@ -0,0 +1,88 @@
+X11 Forwarding
+==============
+
+X11 Forwarding is used to *forward* an *X11* window from Viking to your local computer. This is useful for running graphical programs, or GUIs, on Viking whilst having them display on your local machine. For most Linux distros this will work out of the box; for Mac OS X 10.8 and later you'll need to install `XQuartz <https://www.xquartz.org/>`_, and for Windows you'll need to install `Xming <http://www.straightrunning.com/XmingNotes/>`_.
+
+By default when you log into Viking you'll be logged into one of the two login nodes. As mentioned on the :doc:`/getting_started/code_of_conduct` page, we don't want to run big jobs on the login nodes, as they are shared and heavy use may affect other users.
+
+Instead, we'll access a compute node and run the program there. This page explains how to use X11 Forwarding to display the window on your local machine whilst the program runs on a compute node on Viking, behind the login node.
+
+There are other ways to accomplish similar results so please treat this as a basic guide you can build upon.
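+
+As one example of an alternative, recent versions of ``Slurm`` can forward X11 natively with the ``--x11`` option to ``srun``, which avoids the second ``ssh`` hop described below. Whether this works depends on how ``Slurm`` is configured on the cluster, so treat it as an optional shortcut to try:
+
+.. code-block:: console
+ :caption: an interactive shell with native X11 forwarding, if enabled on the cluster
+
+ $ srun --ntasks=1 --cpus-per-task=8 --time=02:00:00 --x11 --pty bash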
+
+
+Log into Viking
+---------------
+
+Use the ``-X`` option when logging into Viking over ``ssh``; this enables ``X11 Forwarding`` from the login node to your local computer:
+
+.. code-block:: console
+
+ $ ssh -X viking.york.ac.uk
+
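+You can check the forwarding is working before going any further. The ``$DISPLAY`` environment variable should be set by ``ssh`` (the exact value will vary), and a small test program such as ``xeyes``, assuming it's installed on the login node, should pop up a window on your local machine:
+
+.. code-block:: console
+
+ $ echo $DISPLAY
+ localhost:10.0
+ $ xeyes
+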
+
+Request resources
+-----------------
+
+Once we're logged into Viking, we will request some resources on a compute node. Below we use the `salloc <https://slurm.schedmd.com/salloc.html>`_ command with the same options as you would pass to the `srun <https://slurm.schedmd.com/srun.html>`_ command, or set in a jobscript for the `sbatch <https://slurm.schedmd.com/sbatch.html>`_ command. Once the resources are allocated, we ``ssh`` into the node using the ``$SLURM_NODELIST`` environment variable, which ``Slurm`` sets to the node where our resources are allocated. Ensure you use the ``-X`` option again here; this enables ``X11 Forwarding`` from the compute node to the login node:
+
+.. code-block:: console
+ :caption: this requests one node, one task and eight CPU cores for two hours
+
+ $ salloc --nodes=1 --ntasks=1 --cpus-per-task=8 --time=02:00:00
+ $ ssh -X $SLURM_NODELIST
+
+This means we now have ``X11 Forwarding`` from the compute node to the login node and also from the login node to our local computer.
+
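+To confirm the whole chain is working, check that ``$DISPLAY`` is also set on the compute node; if it's empty, the ``-X`` option was probably missed on one of the ``ssh`` connections:
+
+.. code-block:: console
+
+ $ echo $DISPLAY
+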
+
+Run your program
+----------------
+
+Once you're logged into the compute node, you can load the required modules and run your graphical program. In this example we run MATLAB:
+
+.. code-block:: console
+
+ $ module load MATLAB/2023b
+ $ matlab
+
+After a few moments, the window for MATLAB should appear on your local machine.
+
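+If the full MATLAB desktop is sluggish over X11, MATLAB's ``-nosplash`` and ``-nodesktop`` options give a lighter text interface in your terminal, and any figure windows you open are still forwarded to your local machine:
+
+.. code-block:: console
+
+ $ matlab -nosplash -nodesktop
+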
+
+Tidy up
+-------
+
+Press ``Ctrl + d`` **twice** to exit: once to close the ``ssh`` connection to the compute node, and a second time to relinquish the resources from the earlier ``salloc`` command.
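+
+Typing ``exit`` twice does the same thing, and you can confirm the allocation has been released by checking your queue with ``squeue``:
+
+.. code-block:: console
+
+ $ exit
+ $ exit
+ $ squeue -u $USER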