
Merge pull request #114 from DUNE-DAQ/jhancock/elisa-documentation
TiagoTAlves authored Jun 24, 2024
2 parents d6a0e22 + 346e76c commit 4e04060
Showing 2 changed files with 6 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/README.md
@@ -6,7 +6,7 @@ docker run --rm -e MICROSERVICE=<name of microservice> ghcr.io/dune-daq/microser
```

There are a couple of points to note:
-* The value of MICROSERVICE should be the name of a given microservice's subdirectory in this repo. As of Oct-6-2023, the available subdirectories are: `config-service`, `ers-dbwriter`, `ers-protobuf-dbwriter`, `logbook`, `opmon-dbwriter`, `runnumber-rest` and `runregistry-rest`.
+* The value of MICROSERVICE should be the name of a given microservice's subdirectory in this repo. As of Oct-6-2023, the available subdirectories are: `config-service`, `elisa-logbook`, `ers-dbwriter`, `ers-protobuf-dbwriter`, `opmon-dbwriter`, `runnumber-rest` and `runregistry-rest`.
* Most microservices require additional environment variables to be set, which can be passed using the usual docker syntax: `-e VARIABLE_NAME=<variable value>`
* If you don't know what these additional environment variables are, you can just run the `docker` command as above without setting them; the container will exit out almost immediately but only after telling you what variables are missing
* The `9685` tag for the image in the example above just refers to the first four characters of the git commit of the microservices repo whose `dockerfiles/Dockerfile.microservices` Docker file was used to create the image. Currently [Dec-08-2023] this is the head of a branch soon to be merged in develop with a PR.
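The points above can be put together into a concrete invocation. This is only a sketch: the choice of the `elisa-logbook` microservice, the credential values, and the reuse of the `9685` tag are illustrative, and the extra variables a given microservice needs may differ.

```shell
# Illustrative invocation; MICROSERVICE value, credentials and tag are placeholders.
MICROSERVICE=elisa-logbook
docker run --rm \
  -e MICROSERVICE="$MICROSERVICE" \
  -e USERNAME=myuser \
  -e PASSWORD=mypassword \
  -e HARDWARE=npvd \
  ghcr.io/dune-daq/microservices:9685 || true  # '|| true': don't abort the sketch if docker is unavailable
```

If any required variable is missing, the container exits after listing what it needs, which makes this a cheap way to discover the full variable set.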
13 changes: 5 additions & 8 deletions docs/README_logbook.md
@@ -1,18 +1,15 @@
# logbook-test
<h2>Instructions for use</h2>
-To run the API outside of a container, it is recommended to git clone into your user area on lxplus, in order to skip the installation of kerberos, the ELisA client and some perl modules. No arguments are given: the API gets the data needed for initialisation from three environment variables (USERNAME, PASSWORD and HARDWARE). Remember to set these before you run: the username and password are your CERN SSO credentials, and the hardware variable should correspond to one of the top-level keys in elisaconf.json. To run in a container, use docker build to create an image from the dockerfile, and docker run to make the container. Environment variables should be set using the -e flag. The final method (arguably the best) is to just use the instance already running on the np04-srv-015 kubernetes cluster. In this case, localhost must be replaced with the cluster IP in all URLs.

+To run the service locally, clone the repository and do `python3 logbook.py`. Assuming this was done on one of the np04 servers, all of the relevant dependencies should already be present. Otherwise, ensure that Kerberos, Flask, and auth-get-sso-cookie are installed. Before running, the USERNAME, PASSWORD and HARDWARE environment variables must be set. HARDWARE is either "npvd" or "pdsp", and determines which logbook website the messages will be sent to. Once the Flask server is running, requests can be sent to it on port 5005. For production use, there are four instances running in the kubernetes cluster: two for each website, with production and development versions. `kubectl get all -n elisa-logbook` will show all relevant information, such as the port numbers for each service.
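A minimal local session following the paragraph above might look like this; the credential values are placeholders, and `logbook.py` is assumed to be in the current directory:

```shell
# Placeholders for the three required environment variables.
export USERNAME=myuser      # CERN SSO username
export PASSWORD=mypassword  # CERN SSO password
export HARDWARE=npvd        # "npvd" or "pdsp": selects the target logbook website
python3 logbook.py || true  # starts the Flask server on port 5005 ('|| true': keep the sketch from aborting if the file is absent)
```
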
<h2>URL Documentation</h2>
-There are 6 different URLs that can be used, three for each kind of logbook. Examples of how to use each one with curl can be found as comments inside of logbook.py, above their respective parts of the code.

+There are 5 routes available in total. The first three save logs locally as text files: this is legacy code from nanorc and will probably not be very useful for most users. The remaining two send the logs to an ELisA logbook for long-term storage. The routes are defined in `logbook.py`, and an example curl request for each one is provided as a comment.
<h3>File Logbook</h3>
/v1/fileLogbook/message_on_start/ is used to create a new file with a message. It accepts POST requests.<br />
/v1/fileLogbook/add_message/ is used to append messages to an existing file. It accepts PUT requests.<br />
/v1/fileLogbook/message_on_stop/ is used to add a final message to a file. It accepts PUT requests.<br />
All of these requests need the author, message, run_num and run_type variables to be provided.
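As a hypothetical example of a fileLogbook request (the field values are invented, and since the text above does not say whether the routes take form fields or JSON, this sketch uses form fields):

```shell
# Hypothetical request to start a new file logbook; all four fields are required.
BASE=http://localhost:5005/v1/fileLogbook   # replace localhost with the cluster IP when using kubernetes
curl -X POST "$BASE/message_on_start/" \
  -d author=jsmith \
  -d message="Run starting" \
  -d run_num=12345 \
  -d run_type=TEST || true  # '|| true': don't abort if the service isn't running
```

The `add_message` and `message_on_stop` routes take the same four fields but use `-X PUT` instead.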

<h3>ELisA Logbook</h3>
-/v1/elisaLogbook/message_on_start/ is used to start a new message thread in ELisA, with a user supplied message. It accepts POST requests.<br />
-/v1/elisaLogbook/add_message/ is used to add a message to the current thread in ELisA. It accepts PUT requests.<br />
-/v1/elisaLogbook/message_on_stop/ is used to add a final message to the current thread in ELisA. It accepts PUT requests.<br />
-message_on_start and message_on_stop need the author, message, run_num and run_type variables to be provided. add_message just needs the author and message.<br />
+/v1/elisaLogbook/new_message/ is used to start a new message thread in ELisA, with a user-supplied message. It accepts POST requests.<br />
+/v1/elisaLogbook/reply_to_message/ is used to add a message to an existing thread in ELisA. It accepts PUT requests.<br />
+Both of these routes require the author, body, command and systems variables to be provided in a JSON body. title and ID must also be provided to `new_message` and `reply_to_message` respectively; the ID can be obtained from the response when successfully posting a message.<br />
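A hypothetical `new_message` request might look like the following; the field values are invented, but the field names (author, title, body, command, systems) come from the description above:

```shell
# Hypothetical new_message request with an invented payload.
PAYLOAD='{"author": "jsmith", "title": "Run 12345", "body": "Run starting", "command": "start", "systems": ["daq"]}'
curl -X POST http://localhost:5005/v1/elisaLogbook/new_message/ \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true  # '|| true': don't abort if the service isn't running
```

A `reply_to_message` request is the same shape, but uses `-X PUT`, replaces `title` with the `ID` returned by the earlier post, and targets the `/v1/elisaLogbook/reply_to_message/` route.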
