To run the project with docker-compose, use:

```shell
docker-compose up --build
```
The frontend is built using TypeScript and components from the Equinor Design System (EDS).
If no `API_URL` is provided, it will resolve to https://backend-fusion-bmt-dev.radix.equinor.com, which is the dev server built from master. If you want to run the frontend against a local backend, you need to provide `API_URL` explicitly. It can be set as an environment variable, or a `.env` file can be created in `frontend`. A sample file `.env.example` is provided.
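A minimal `.env` for running against a local backend might look like this (the URL and port are assumptions; match them to your local backend, or copy `.env.example` and adjust):

```
API_URL=http://localhost:5000
```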
```shell
cd frontend
npm install
npm start
```
Any changes made to the database model or the GraphQL queries and mutations in the backend need to be communicated to the frontend. This is done by running:

```shell
npm run schema
```

from the `frontend` directory. The changes must be checked in to git. Note that for `npm run schema` to run properly, the backend must be running and authorization must be turned off.
The backend is built using .NET 5.0. We use GraphQL to handle requests to the backend, and Hot Chocolate is used as the implementation in .NET.
We use a SQL database, accessed through Entity Framework, for storing our data.
The environment variable `Database__ConnectionString` can be an ADO.NET connection string to an existing database. If empty, we use an in-memory database initialized with dummy data.
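For reference, an ADO.NET connection string has roughly the shape below (the server, database, and credentials are placeholders, not real values):

```shell
export Database__ConnectionString="Server=tcp:example.database.windows.net,1433;Initial Catalog=example-db;User ID=example-user;Password=example-password;Encrypt=True;"
```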
To start the backend, the file `launchSettings.json` in `backend/api/Properties` needs to be created. A sample file `launchSettings.Template.json` is provided.
When running locally, a playground server for trying out GraphQL queries will be available at localhost:5000/graphql. This will not work properly in production, since the playground server will not provide a bearer token for authentication. To generate a bearer token and try out the API, the Swagger URL localhost:5000/swagger can be used.
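Once the playground is running, a simple query can be used to verify the setup. The query below is only a sketch; the field names are assumptions and may not match the actual schema:

```graphql
query {
  evaluations {
    id
    name
  }
}
```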
It is sometimes useful to disable authorization for local development, for instance to interact with endpoints directly. This can be done by commenting out the `app.UseAuthorization();` line in `backend/api/Startup.cs`. Remember to change it back before committing!

The schema used for the models in the backend can be found here.
```shell
cd backend/api
dotnet run
```
Our database model is defined in `/backend/api/Models/Models.cs`, and we use Entity Framework Core as an object-relational mapper (O/RM). When making changes to the model, we also need to create a new migration and apply it to our databases.
```shell
dotnet tool install --global dotnet-ef
```
After making changes to the model, run (from `/backend/api`):

```shell
dotnet ef migrations add {migration-name}
```
`add` will make changes to existing files and add two new files in `backend/api/Migrations`, all of which need to be checked in to git. Note that `{migration-name}` is just a descriptive name of your choosing.
Also note that `Database__ConnectionString` should point at one of our databases when running `add`. The reason for this is that the migration will be created slightly differently when based on the in-memory database. `add` will not update or alter the connected database in any way.
If you are for some reason unhappy with your migration, you can delete it with:

```shell
dotnet ef migrations remove
```

or simply delete the files and changes created by `add`. Once deleted, you can make new changes to the model and then create a new migration with `add`. Note that you also need to update the frontend schema.
For the migration to take effect, we need to apply it to our databases. To get an overview of the current migrations in a database, set the correct `Database__ConnectionString` for that database and run:

```shell
dotnet ef migrations list
```
This will list all migrations that are applied to the database and the local migrations that are yet to be applied. The latter are denoted with the text (pending).
To apply the pending migrations to the database, run:

```shell
dotnet ef database update
```

If everything runs smoothly, the pending tag should be gone when you run `list` once more.
You can apply migrations to the dev database at any time to test that it behaves as expected.
The prod and qa databases don't need to be updated manually, as all migrations are applied automatically as part of the pipelines when pushing to qa and prod.
To populate the SQL database with question templates, go to `backend/scripts`, make sure your `Database__ConnectionString` is set, and run:

```shell
dotnet run -t PATH-TO-FILE
```

An example file of question templates can be found at `backend/api/Context/InitQuestions.json`.
We are using Cypress as a test framework for End to End tests. Details can be found in this section.
Cypress E2E tests can be run locally with:

```shell
docker-compose -f docker-compose.cypress.yml up --build cypress
```
To run locally, the last two lines in `frontend/cypress.Dockerfile` should be commented out.
Cypress tests will be run in Azure DevOps when pushing to the upstream branch `cypress`. This can be done with the following command:

```shell
git push upstream HEAD:cypress -f
```
We are using Cypress to create a suite of automated functional regression tests.
We aim to create a system that
'describes and validates the behavior of the system as seen from a user perspective'.
This implicitly ensures that business requirements are tested (covered by tests).
The automated tests run in a CI pipeline.
In the majority of the tests we adopt a BDD style of coding. This means that each test case describes the user actions and the behavior of the software using a ubiquitous language based on English. (A ubiquitous language is a vocabulary shared by all stakeholders.)
See here for more information: https://en.wikipedia.org/wiki/Behavior-driven_development
The tests are organized after pages and areas of concern (major capabilities)
for the users of the application.
The tests for each area of concern or each page reside in their respective `_spec.ts` file. One example of an area of concern is actions management (creating, editing, completing, voiding, and viewing actions at different stages). The set of tests for action management resides in `actions_spec.ts`.
Each test in the set verifies one specific capability in the area of concern or on the page. One example from `actions_spec.ts` is "Creating and editing actions".
We use parameterization of tests extensively to iterate over sets of input values.
In many tests we use randomization to select one random element from a set of possible input values for which BMT behaves equivalently. For instance, the progression of an evaluation is often chosen randomly from a list of progressions. Which progressions are included in this list is determined by the behavior being verified.
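The randomization described above can be sketched as a small helper (hypothetical; the real test suite may implement this differently, and the progression names below are illustrative):

```typescript
// Hypothetical helper: pick one random element from a set of input
// values for which the application behaves equivalently.
function randomElement<T>(items: T[]): T {
    if (items.length === 0) {
        throw new Error('Cannot pick from an empty list')
    }
    return items[Math.floor(Math.random() * items.length)]
}

// For example, choose the progression an evaluation should be in:
const progressions = ['Individual', 'Preparation', 'Workshop', 'FollowUp']
const progression = randomElement(progressions)
```

Because the behavior under test is the same for any value in the list, a single randomly chosen value exercises it without enumerating every combination on every run.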
We develop the tests immediately after the relevant functionality is developed, in separate PRs.
We mock authentication and the integration with the Fusion framework but we have no type safety for these mocks.
To mitigate these risks we employ short sessions of exploratory testing prior to release into production.
We have 4 different environments in use: dev, qa, prod and pr. Dev is the environment that runs when pushing to master. Qa and prod will be created with a DevOps pipeline when a new tag is created. The pr environment is built on push to the pr branch. Dev and pr will only deploy to the Radix environment, while qa and prod will deploy the frontend to Fusion.
We use both Azure Application Insights and Dynatrace to monitor our applications. To start components with Dynatrace OneAgent locally, simply run:

```shell
docker-compose -f docker-compose.dynatrace.yml up --build
```
We use Prettier. Remember to set it up.

VSCode settings:

```json
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true
```