Requirements:
- Python 3.11 (pyenv recommended)
- Poetry
- Git (brew install git)
- Postgres with PostGIS extension (brew install postgis)
Note: You will need a BAS GitLab access token with at least the read_api scope to install this package, as it depends on privately published dependencies.
Clone project:
$ git clone https://gitlab.data.bas.ac.uk/MAGIC/assets-tracking-service.git
$ cd assets-tracking-service
Install project:
$ poetry config http-basic.ats-air __token__
$ poetry install
$ poetry run python -m pip install --no-deps arcgis
Create databases:
$ createdb assets-tracking-dev
$ createdb assets-tracking-test
Set configuration as per the Configuration documentation:
$ cp .env.example .env
$ poetry run python ats-ctl ...
See the CLI Reference documentation for available commands.
All changes MUST:
- be associated with an issue (either directly or by reference)
- be included in the Change Log
Conventions:
- all deployable code should be contained in the assets-tracking-service package
- use Path.resolve() if displaying or logging file/directory paths
- use logging to record how actions progress, using the app logger (e.g. logger = logging.getLogger('app'))
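Combined, the path and logging conventions look like this (the function and path below are illustrative, not from the project):

```python
import logging
from pathlib import Path

# convention: use the shared 'app' logger rather than per-module loggers
logger = logging.getLogger("app")


def process_file(path: Path) -> None:
    """Illustrative function, not part of the project."""
    # convention: resolve paths before displaying or logging them
    logger.info("Processing file: %s", path.resolve())
```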
The Python version is limited to 3.11 as it is the latest version supported by the arcgis dependency.
The arcgis package (ArcGIS API for Python) is not included in the main package dependencies because it is incompatible with Poetry and depends on a large number of dependencies that we don't need. This dependency therefore needs to be installed manually after the main project dependencies are installed.
The Safety package is used to check dependencies against known vulnerabilities.
WARNING! As with all security tools, Safety is an aid for spotting common mistakes, not a guarantee of secure code. In particular this is using the free vulnerability database, which is updated less frequently than paid options.
Checks are run automatically in Continuous Integration. To check locally:
$ poetry run safety scan
Ruff is used to lint and format Python files. Specific checks and config options are set in pyproject.toml. Linting checks are run automatically in Continuous Integration.
To check linting locally:
$ poetry run ruff check src/ tests/
To run and check formatting locally:
$ poetry run ruff format src/ tests/
$ poetry run ruff format --check src/ tests/
Ruff is configured to run Bandit, a static analysis tool for Python.
WARNING! As with all security tools, Bandit is an aid for spotting common mistakes, not a guarantee of secure code. In particular this tool can't check for issues that are only detectable when running code.
For consistency, it's strongly recommended to configure your IDE or other editor to use the EditorConfig settings defined in .editorconfig.
pytest with a number of plugins is used to test the application. Config options are set in pyproject.toml. Tests are run automatically in Continuous Integration.
To run tests locally:
$ poetry run pytest
Tests for the application are defined in the tests/assets_tracking_service_tests module.
Fixtures should be defined in conftest.py, prefixed with fx_ to indicate they are a fixture, e.g.:
import pytest
@pytest.fixture()
def fx_test_foo() -> str:
"""Example of a test fixture."""
return 'foo'
pytest-cov checks test coverage. We aim for 100% coverage but exemptions are fine with good justification:
- # pragma: no cover - for general exemptions
- # pragma: no branch - where a conditional branch can never be called
To run tests with coverage locally:
$ poetry run pytest --cov --cov-report=html
Where tests are added to ensure coverage, use the cov mark, e.g.:
import pytest
@pytest.mark.cov
def test_foo():
assert 'foo' == 'foo'
pytest-recording is used to mock HTTP calls to provider APIs (ensuring known values are used in tests).
To (re-)record responses:
- if re-recording, remove some or all existing 'cassette' YAML files
- update test fixtures to use real credentials
- run tests in record mode:
poetry run pytest --record-mode=once
- update test fixtures to use fake/safe credentials
- redact credentials captured in cassettes
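A recorded test might look like the following sketch; the vcr marker is how pytest-recording associates a test with its cassette. The test body here is a placeholder, not a real provider call:

```python
import pytest


@pytest.mark.vcr  # pytest-recording replays the matching cassette YAML file
def test_fetch_positions():
    # placeholder - real tests would call a provider API, with HTTP requests
    # intercepted and replayed from the recorded cassette
    response = {"status": "ok"}
    assert response["status"] == "ok"
```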
All commits will trigger Continuous Integration using GitLab's CI/CD platform, configured in .gitlab-ci.yml.
If using a local Postgres database installed through homebrew (assuming @14 is the version installed):
- manage service: brew services [command] postgresql@14
- view logs: /usr/local/var/log/[email protected]
In the Config class:
- define a new property (use upper case name if configurable by the end user)
- add property to ConfigDumpSafe typed dict
- add property to dumps_safe method
- if needed, add logic to validate method
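As a sketch, the steps above might look like this for a hypothetical BAS_EXAMPLE_OPTION property (the environment variable name, default value, and option name are illustrative assumptions, not from the project):

```python
import os
from typing import TypedDict


class ConfigDumpSafe(TypedDict):
    # one entry per config property, mirroring the steps above
    BAS_EXAMPLE_OPTION: str


class Config:
    @property
    def BAS_EXAMPLE_OPTION(self) -> str:
        # upper case name: configurable by the end user via the environment
        return os.environ.get("ASSETS_TRACKING_SERVICE_BAS_EXAMPLE_OPTION", "default")

    def dumps_safe(self) -> ConfigDumpSafe:
        # include the new property so it appears in safe config dumps
        return {"BAS_EXAMPLE_OPTION": self.BAS_EXAMPLE_OPTION}

    def validate(self) -> None:
        # if needed: reject invalid values early
        if not self.BAS_EXAMPLE_OPTION:
            raise ValueError("BAS_EXAMPLE_OPTION must not be empty")
```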
In the Configuration documentation:
- add to either configurable or unconfigurable options table in alphabetical order
- update the .env.example template and local.env file
- update the deploy job in the .gitlab-ci.yml file
- update the [tool.pytest_env] section in pyproject.toml
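For example, the [tool.pytest_env] update might look like this (the option name and value are illustrative):

```toml
[tool.pytest_env]
# existing test environment variables ...
ASSETS_TRACKING_SERVICE_BAS_EXAMPLE_OPTION = "test-value"
```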
In the test_config.py module:
- update the expected response in the test_dumps_safe method
- if validated, update the test_
- update or create tests as needed
In the db_migrations resource directory:
- create an up migration, which applies the change, in the up/ subdirectory
- create a down migration, which reverts the change, in the down/ subdirectory
Migration files are numbered to ensure they apply in the correct order:
- up migrations count upwards
- down migrations count backwards
Migrations should be grouped into logical units, for example if creating a new entity define a table and its indexes in a single migration. Define separate entities (even if related and part of the same change/feature) in separate migrations.
Existing migrations MUST NOT be amended. I.e. if a column type should change, use an ALTER command in a new migration.
See the Implementation documentation for more information on migrations.
If a new command group is needed:
- create a new module in the cli package
- create a corresponding test module
- import and add the new command group CLI in the Root CLI
In the relevant command group module, create a new method:
- make sure the command decorator name and help are set correctly
- follow the conventions established in other commands for error handling and presenting data to the user
- add corresponding tests
In the CLI Reference documentation:
- if needed, create a new command group section
- list and summarise the new command in the relevant group section
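A new command might be sketched as follows. This assumes a Click-style CLI; the project's actual framework and conventions may differ, and the group and command names here are illustrative:

```python
import click


@click.group(name="example")
def example_cli() -> None:
    """Example command group (illustrative)."""


# decorator name and help set explicitly, per the conventions above
@example_cli.command(name="status", help="Show example status.")
def status() -> None:
    # follow existing commands' conventions for error handling and output
    click.echo("OK")
```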
[WIP] This section is a work in progress.
- add config option for enabling/disabling provider
- update enabled_providers property to include new provider
- add provider specific config options as needed
- create a new module in the providers package
- create a new class inheriting from BaseProvider
- implement methods required by the base class
- integrate into the ProvidersManager class:
  - update the _make_providers method
  - update the
- add tests as needed
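The provider steps above can be sketched as follows. The BaseProvider stand-in and its method names are illustrative assumptions; the real base class in the providers package defines the actual required interface:

```python
from abc import ABC, abstractmethod


# stand-in for the project's BaseProvider - illustrative only
class BaseProvider(ABC):
    name: str

    @abstractmethod
    def fetch_latest_positions(self) -> list[dict]: ...


class ExampleProvider(BaseProvider):
    name = "example"

    def __init__(self, config: dict) -> None:
        # provider specific config options, e.g. API credentials
        self._config = config

    def fetch_latest_positions(self) -> list[dict]:
        # a real provider would call an external API here
        return [{"asset": "example-asset", "lat": -67.5, "lon": -68.1}]
```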
[WIP] This section is a work in progress.
- add config option for enabling/disabling exporter
- update enabled_exporters property to include new exporter
- add exporter specific config options as needed
- if another exporter is required, update the config validation method to ensure the dependent exporter is enabled
- create a new module in the exporters package
- create a new class inheriting from BaseExporter
- implement methods required by the base class
- integrate into the ExportersManager class:
  - update the _make_exporters method
  - update the
- add tests as needed:
  - create a new module in the exporters test package
  - update test_make_each_exporter
  - add mock for exporter in test_export
  - create a new module in
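Similarly, an exporter skeleton might look like this. The BaseExporter stand-in and method names are illustrative assumptions; the real interface is defined by the base class in the exporters package:

```python
from abc import ABC, abstractmethod


# stand-in for the project's BaseExporter - illustrative only
class BaseExporter(ABC):
    name: str

    @abstractmethod
    def export(self, records: list[dict]) -> None: ...


class ExampleExporter(BaseExporter):
    name = "example"

    def export(self, records: list[dict]) -> None:
        # a real exporter would write to an external service or file here
        self.exported = list(records)
```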