Juju Dashboard is a JavaScript application built with React, Redux Toolkit and TypeScript. Being familiar with those tools will help when developing the dashboard.
If you haven't already read the contribution guide then that is a good place to start as it will give you an overview of how to contribute and what kinds of contributions are welcome.
In this document:
- Setting up the dashboard for development
- Codebase and development guidelines
- Juju controllers in Multipass
- Building the Docker image
- Deployment configuration guides
To get started working on the dashboard you will need to set up a local development environment, and you will also need access to a Juju controller (JAAS may be sufficient to get started).
Once set up you might like to take a look at our codebase overview and development guidelines.
If you want to, you can set up the dashboard inside a Multipass container. This will provide a clean development environment. If you choose to use a Multipass container then launch your container and shell into it and continue from there.
multipass launch --cpus 2 --disk 15G --memory 8G --name dev
multipass shell dev
First, install Node.js (>= v18) and Yarn (>= v2) if they're not installed already.
On Ubuntu you can install Node.js with:
sudo snap install node --classic
On macOS, Node.js can be installed via the instructions.
Next, follow the Yarn install instructions.
Now you will need to get a copy of the dashboard. Go to the dashboard repo, log in and fork it.
Now clone your fork:
git clone https://github.com/<your-username>/juju-dashboard.git
cd juju-dashboard
Install the dependencies:
yarn install
Then start the dashboard with:
yarn start
Next you can move on to configuring a Juju controller to use with the dashboard.
To configure the controller used by Juju Dashboard, create a local config file:
cp public/config.js public/config.local.js
Update controllerAPIEndpoint to the address of your controller. When using a controller inside a Multipass container you can get the IP address using multipass list, then set the endpoint to:
controllerAPIEndpoint: "wss://[controller.ip]:17070/api",
To use a local/non-JAAS controller you will need to set:
isJuju: true,
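For reference, the relevant part of a local config for a Multipass-hosted controller might end up looking something like the sketch below; the IP address is a placeholder, and any other keys in public/config.js should be left as they are:

```js
// Inside the config object in public/config.local.js (illustrative values only).
controllerAPIEndpoint: "wss://10.76.251.2:17070/api", // IP from `multipass list`
isJuju: true, // local/non-JAAS controller
```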
Don't forget to accept the self-signed certificate for the controller.
To use the dashboard with Dotrun just replace yarn ... commands with dotrun ...
Both the React dev tools and Redux dev tools are useful when developing Juju Dashboard.
Juju Dashboard uses React for its component-based UI.
Use function components and hooks over class based components.
It is recommended to have one component per file, and one component per directory. A typical component directory will have the structure:
- _component.scss (any SCSS specific to this component)
- Component.tsx (the component itself)
- Component.test.tsx (tests specific to this component)
- index.tsx (export the component from this file)
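To make this layout (and the function-component-plus-hooks preference above) concrete, a hypothetical Greeting component directory might contain something like the following; the component and file names are invented for illustration:

```tsx
// Greeting/Greeting.tsx -- a hypothetical example component.
import { useState } from "react";

import "./_greeting.scss";

type Props = {
  name: string;
};

const Greeting = ({ name }: Props) => {
  // Local state via a hook rather than a class component.
  const [expanded, setExpanded] = useState(false);
  return (
    <button className="greeting" onClick={() => setExpanded(!expanded)}>
      {expanded ? `Hello, ${name}!` : "Hello!"}
    </button>
  );
};

export default Greeting;
```

The matching Greeting/index.tsx would simply re-export the component (export { default } from "./Greeting";), and Greeting/Greeting.test.tsx would hold its tests.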
Where possible write reusable code which should live in the top level directories e.g. src/components, src/hooks.
Distinct views of the app live in the src/pages directory. These will usually equate to the top level routes.
Shared SCSS should live in the src/scss directory, but SCSS specific to a page or component should live in the component's directory and be imported inside the component.
Juju Dashboard uses Redux and Redux Toolkit.
Redux code lives in src/store. The code is structured by "slice", equivalent to a top level key of Redux state. Each slice contains the slice creator, selectors, TypeScript types and tests for that slice of state.
Fetching data from the Redux store inside a component is done via Reselect.
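As a rough sketch of how those pieces fit together, a hypothetical slice and selector might look like this (this is not the dashboard's real state shape, just an illustration of the pattern):

```ts
// A hypothetical slice and Reselect selector, for illustration only.
import { createSlice } from "@reduxjs/toolkit";
import { createSelector } from "reselect";

type UIState = { sideNavCollapsed: boolean };

const slice = createSlice({
  name: "ui",
  initialState: { sideNavCollapsed: false } as UIState,
  reducers: {
    toggleSideNav: (state) => {
      state.sideNavCollapsed = !state.sideNavCollapsed;
    },
  },
});

// Selectors are memoised with Reselect and consumed in components via useSelector.
const getUIState = (state: { ui: UIState }) => state.ui;
export const getSideNavCollapsed = createSelector(
  [getUIState],
  (ui) => ui.sideNavCollapsed,
);

export const { toggleSideNav } = slice.actions;
export default slice.reducer;
```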
There are two pieces of middleware:
- src/store/middleware/check-auth.ts is used to gate authentication for requests to the Juju APIs.
- src/store/middleware/model-poller.ts is used to make WebSocket connections to the Juju APIs.
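Both follow the standard Redux middleware shape, intercepting actions on their way to the store. The sketch below is a simplified, hypothetical illustration of that pattern (it is not the real check-auth implementation):

```ts
// A simplified sketch of auth-gating middleware (not the actual check-auth.ts).
import type { Middleware } from "redux";

// Hypothetical helper: decide whether an action needs an authenticated connection.
const requiresAuth = (action: { type: string }) => action.type.startsWith("juju/");

export const checkAuthSketch =
  (isLoggedIn: () => boolean): Middleware =>
  () =>
  (next) =>
  (action) => {
    // Drop Juju API actions until the user has authenticated.
    if (requiresAuth(action as { type: string }) && !isLoggedIn()) {
      return;
    }
    return next(action);
  };
```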
Juju Dashboard is written in TypeScript. Wherever possible strict TypeScript should be used.
The dashboard is unit tested and interaction tested using Vitest and React Testing Library.
Codecov is used to monitor test coverage on PRs and the target is currently set to 90% coverage across the codebase, so new code should have strong test coverage to maintain this level.
The dashboard uses test factories instead of data dumps to allow each test to declare the data required for it to pass.
Test factories are written using Fishery and live in src/testing/factories.
The factories are set up in files that equate to the Juju facade that the data is returned from.
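To give a flavour of how Vitest, React Testing Library and Fishery combine, here is a hypothetical factory and test; the component, factory and fields are invented for the example, and it assumes the jest-dom matchers are registered in the Vitest setup:

```tsx
// A hypothetical Fishery factory used in a Vitest + React Testing Library test.
import { render, screen } from "@testing-library/react";
import { Factory } from "fishery";
import { describe, expect, it } from "vitest";

type Unit = { name: string; status: string };

// Factories provide sensible defaults so each test only declares what it needs.
const unitFactory = Factory.define<Unit>(({ sequence }) => ({
  name: `postgresql/${sequence}`,
  status: "active",
}));

// A hypothetical component, defined inline for the example.
const UnitStatus = ({ unit }: { unit: Unit }) => (
  <span>
    {unit.name}: {unit.status}
  </span>
);

describe("UnitStatus", () => {
  it("displays the unit status", () => {
    const unit = unitFactory.build({ status: "blocked" });
    render(<UnitStatus unit={unit} />);
    expect(screen.getByText("postgresql/1: blocked")).toBeInTheDocument();
  });
});
```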
Juju Dashboard makes use of a few external libraries that are built and maintained by Canonical.
Jujulib is a core library for Juju Dashboard. This library provides a JavaScript client for interacting with the Juju WebSocket APIs and also provides TypeScript types for the API and underlying models.
Bakeryjs implements a macaroon interface in JavaScript. This library is used to authenticate with Juju when using a third-party identity provider.
Vanilla Framework is a CSS framework used to provide consistency across Canonical's codebases.
Vanilla React Components is a React implementation of Vanilla Framework and is the preferred method of consuming Vanilla Framework elements.
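For example, rather than writing Vanilla markup by hand, a component would typically import ready-made elements from the package; a minimal, hypothetical snippet:

```tsx
// Illustrative use of Vanilla React Components instead of raw Vanilla markup.
import { Button } from "@canonical/react-components";

const RefreshModels = ({ onRefresh }: { onRefresh: () => void }) => (
  <Button appearance="positive" onClick={onRefresh}>
    Refresh models
  </Button>
);

export default RefreshModels;
```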
The easiest way to set up a Juju controller is inside a Multipass container. This allows you to cleanly add and remove controllers as necessary and provides a way to have multiple controllers running at once (with different Juju versions if needed).
There are three main types of deployment:
- a local Juju controller
- a controller that uses Candid for identity
- a JIMM environment
If this controller is being created on an M1 Mac then you will need to set the arch when running some of the commands.
First, create a new Multipass container. You may need to adjust the resources depending on your host machine.
multipass launch --cpus 2 --disk 15G --memory 8G --name juju jammy
Enter the container:
multipass shell juju
Install Juju:
sudo snap install juju --channel=latest/stable
Generate SSH keys (the defaults should be fine for testing):
ssh-keygen
Create a local share directory (issue):
mkdir -p ~/.local/share
Bootstrap Juju (the defaults should work fine for most cases):
juju bootstrap
Get the controller machine's instance id ("Inst id"):
juju switch controller
juju status
So that the Juju API can be accessed outside the Multipass container the API port will need to be forwarded to the controller machine. Using the instance id from above run:
lxc config device add [inst-id] portforward17070 proxy listen=tcp:0.0.0.0:17070 connect=tcp:127.0.0.1:17070
To be able to authenticate as the admin you will need to set a password:
juju change-user-password admin
At this point you can deploy the dashboard, or skip to the next section:
juju deploy juju-dashboard dashboard
Then integrate the controller and the dashboard:
juju integrate dashboard controller
Expose the dashboard:
juju expose dashboard
Get the dashboard machine's instance id:
juju status
Then port forward to the dashboard instance so that the dashboard can be accessed from outside the Multipass container:
lxc config device add [inst-id] portforward8080 proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:8080
If you wish you can add additional models and deploy applications:
juju add-model test
juju deploy postgresql
Now exit the Multipass container and then run the following to get the IP address of the container:
multipass info juju
If you deployed the dashboard inside the container you will be able to access it at:
http://[container.ip]:8080/
If you want to use the controller with a local dashboard you can configure the dashboard by setting the endpoint in config.local.js:
controllerAPIEndpoint: "wss://[container.ip]:17070/api",
Once you're finished with the controller you can stop the Multipass container:
multipass stop juju
And if you no longer require the container you can remove it:
multipass delete juju
multipass purge
First, create a new Multipass container. You may need to adjust the resources depending on your host machine.
multipass launch --cpus 2 --disk 15G --memory 8G --name juju-candid
Enter the container:
multipass shell juju-candid
Install the prerequisites:
sudo snap install juju lxd candid
Initialise LXD with the default configuration:
lxd init --auto
Bootstrap Juju with the Ubuntu SSO provider:
juju bootstrap --config identity-url=https://api.jujucharms.com/identity --config allow-model-access=true
Get the controller container's instance id ("Inst id"):
juju switch controller
juju status
So that the Juju API can be accessed outside the Multipass container the API port will need to be forwarded to the controller machine. Using the instance id from above, run:
lxc config device add [inst-id] portforward17070 proxy listen=tcp:0.0.0.0:17070 connect=tcp:127.0.0.1:17070
To be able to access the controller you will need to allow access to your SSO user (check https://login.ubuntu.com/ if you're not sure what your username is):
juju grant [your-sso-username]@external superuser
If you wish you can add additional models and deploy applications:
juju add-model test
juju deploy postgresql
Now exit the Multipass container and then run the following to get the IP address of the container:
multipass info juju-candid
You can now configure your local dashboard by setting the endpoint in config.local.js:
controllerAPIEndpoint: "wss://[container.ip]:17070/api",
When deployed by a charm the controller relation will provide the value for identityProviderURL. The actual value isn't used by the dashboard at this time, but rather the existence of the value informs the dashboard that Candid is available, so in config.local.js you just need to set the URL to any truthy value:
identityProviderURL: "/candid",
You also need to configure your dashboard to work with a local controller:
isJuju: true,
You can now access your local dashboard and log in using your Ubuntu SSO credentials.
First, create a new Multipass container. You may need to adjust the resources depending on your host machine, but you will need to allocate at least 20GB of disk space.
multipass launch --cpus 2 --disk 20G --memory 8G --name jimm
Copy your ssh key into the container. You can do this manually or use this one-liner:
cat ~/.ssh/id_[key-name].pub | multipass exec jimm -- tee -a .ssh/authorized_keys
SSH into the container:
ssh -A ubuntu@[multipass.ip]
Check out the JIMM repository:
git clone [email protected]:canonical/jimm.git
cd jimm
Start by making a small configuration change so that the login process redirects to the development dashboard URL:
nano compose-common.yaml
Find JIMM_DASHBOARD_FINAL_REDIRECT_URL and set it to "http://jimm.localhost:8036".
Next follow the steps in the Starting the environment section of the JIMM docs.
Once the environment is running it will be steadily outputting openfga health checks like the following, at which point you can move on to the next steps:
openfga | 2024-06-03T02:25:47.168Z INFO grpc_req_complete {"grpc_service": "grpc.health.v1.Health", "grpc_method": "Check", "grpc_type": "unary", "request_id": "db6a8859-f296-44b3-b150-a1dff5be93fe", "raw_request": {"service":""}, "raw_response": {"status":"SERVING"}, "peer.address": "127.0.0.1:49320", "grpc_code": 0}
In a new terminal, enter the container, this time following the port forward instructions, and then go to the JIMM directory:
cd jimm
Now follow the Q/A Using jimmctl steps.
To expose the various JIMM APIs so that they can be accessed from outside of the Multipass container, you can use an SSH port forward. This will also enable access to the Multipass using the hostnames set up inside the Multipass, e.g. jimm.localhost.
On Linux, ports below 1024 are privileged. The easiest way to get around this is to port forward as root.
Start by generating SSH keys for root:
sudo ssh-keygen
Copy your root ssh key into the container. You can do this manually or use this one-liner:
sudo cat /root/.ssh/id_[key-name].pub | multipass exec jimm -- tee -a .ssh/authorized_keys
Now from your host machine run the following:
export JIMM_CONTAINER=[multipass.ip]
sudo ssh -A -L :8082:$JIMM_CONTAINER:8082 -L :17070:$JIMM_CONTAINER:17070 -L :443:$JIMM_CONTAINER:443 -L :8036:$JIMM_CONTAINER:8036 ubuntu@$JIMM_CONTAINER
Inside the JIMM Multipass, get your fork of the dashboard:
cd ~
git clone [email protected]:[your-username]/juju-dashboard.git
cd juju-dashboard
Install Node.js and Yarn:
sudo snap install node --classic
Install the dependencies:
yarn install
Copy the configuration file:
cp public/config.js public/config.local.js
Edit the file:
nano public/config.local.js
Change the configuration as follows:
controllerAPIEndpoint: "ws://jimm.localhost:17070/api",
Now you can start the dashboard with:
yarn start
To access the dashboard you can visit:
http://jimm.localhost:8036/
To log in you need to use the username and password listed in: Controller set up.
Each time you start the Multipass container you need to do the following:
- Follow steps 4 and 5 of the Starting the environment instructions (doing the cleanup first and then starting the env).
- Forward ports.
- Follow the steps in Controller set up (you can skip the setup-controller.sh step and may need to run sudo iptables -F FORWARD && sudo iptables -P FORWARD ACCEPT before these steps).
- Now you can start the dashboard as normal.
The Juju controller uses a self-signed certificate for the API. To allow your local dashboard to connect to Juju you will need to first accept this certificate. If the dashboard is displaying a warning about not being able to connect to the controller, this might be the reason.
To accept the certificate, first find the address of the controller. This will be set as controllerAPIEndpoint in your config.local.js (replace wss:// with https://). Open the address in the browser window you're using to load the dashboard. You'll need to use https (otherwise you'll get an error Client sent an HTTP request to an HTTPS server.) and include the port (usually 17070).
The browser should now display a warning about the self-signed certificate (unless it has already been accepted, in which case it will display "Bad Request"). Accept the certificate (this might be hidden under an advanced toggle).
Once accepted the page will display "Bad Request". This is good! You should now be able to log in from the dashboard.
When bootstrapping Juju or deploying apps on an M1 Mac (or other arm-based computers) you need to specify the arch.
This can either be set on a per-model basis:
juju set-model-constraints arch=arm64
Or passed to the bootstrap or deploy commands:
juju bootstrap ... --constraints="arch=arm64"
juju deploy ... --constraints="arch=arm64"
Note: not all charms are built for arm64 so it may be prudent to have access to an amd64 machine for testing.
The Docker image is used by the Juju Dashboard Kubernetes charm and is uploaded as a resource in Charmhub. There is a full guide for building the Docker image and Kubernetes charm in the juju-dashboard-charm repo.
The Dockerfile is also used by the PR demo service which builds a Docker image and deploys it to display a running version of a branch.
If this image needs to be tested in a Kubernetes environment then you can follow the Multipass and Kubernetes instructions.
To build the image you first need to install Docker Engine.
Then, inside your juju-dashboard checkout run:
DOCKER_BUILDKIT=1 docker build -t juju-dashboard .
That's it! The Docker image has been built. To see details about the image run:
docker image inspect juju-dashboard | less
Note: here "local apps" means apps from your local filesystem, which are listed as being from the Local store in the dashboard. "Local apps" can also refer to apps that are deployed in a model, as opposed to apps that are displayed via a cross-model relation. These two types of local apps are not mutually exclusive.
Charms that are on your local filesystem can be built and deployed to a model. In this example we will use the Postgresql charm, but this process can also be used for the dashboard charms.
First, get a copy of the code:
git clone https://github.com/canonical/postgresql-operator.git
Enter the charm directory:
cd postgresql-operator/
Install charmcraft so you can build the charm:
sudo snap install charmcraft --classic
Build the charm with:
charmcraft pack
Finally, you can deploy the charm to the current model (you may wish to juju switch ... to a different model or juju add-model ... to create a new one):
juju deploy ./postgresql_ubuntu-*.charm
Now if you navigate to the model in your dashboard you should see the app in the "Local apps" table.
Integrations can be created between applications in separate models. For further information see the docs.
First, create a model to contain the application that will offer an integration:
juju add-model cmi-provider
Deploy an application:
juju deploy mysql
Offer the integration:
juju offer mysql:mysql mysql-cmi
Now, add a model that will be used to consume the integration:
juju add-model cmi-consumer
Deploy an application that can consume the mysql interface:
juju deploy slurmdbd
Get the full name of the offer:
juju find-offers
Finally, create the integration:
juju integrate slurmdbd:db admin/cmi-provider.mysql-cmi
Using the dashboard you should now see the offer listed in the cmi-provider model and you should see the remote application in the cmi-consumer model.
To get a model into a broken state you need an application to have an error.
First, deploy an application:
juju deploy nginx
Now set the status of the application:
juju exec --app nginx "status-set --application=True blocked 'this app is broken'"
If you view the model list it should now be listed in the blocked models table.
Note: the application's units will still have a "Running" status as this is determined by the unit's agent and workload status.
To get the model out of the broken state run:
juju exec --app nginx "status-set --application=True active"