Merge branch 'main' into update-cost-expr-naming
brynpickering committed Sep 30, 2024
2 parents 19a63f8 + 0fc4f5e commit dfafbe4
Showing 107 changed files with 2,325 additions and 2,087 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/commit-ci.yml
@@ -27,7 +27,7 @@ jobs:

      - uses: mamba-org/setup-micromamba@v1
        with:
-          micromamba-version: latest
+          micromamba-version: '1.5.10-0'
          environment-name: ${{ github.event.repository.name }}-ubuntu-latest-312-${{ hashFiles('requirements/dev.txt') }}
          environment-file: requirements/base.txt
          create-args: >-
6 changes: 3 additions & 3 deletions .github/workflows/pr-ci.yml
@@ -45,7 +45,7 @@ jobs:

      - uses: mamba-org/setup-micromamba@v1
        with:
-          micromamba-version: latest
+          micromamba-version: '1.5.10-0'
          environment-name: ${{ github.event.repository.name }}-${{ matrix.os }}-3${{ matrix.py3version }}-${{ hashFiles('requirements/dev.txt') }}
          environment-file: requirements/base.txt
          create-args: >-
@@ -108,7 +108,7 @@ jobs:
      - uses: actions/checkout@v4
      - uses: mamba-org/setup-micromamba@v1
        with:
-          micromamba-version: latest
+          micromamba-version: '1.5.10-0'
          environment-file: .github/workflows/pr-ci-pipbuild-environment.yml
          post-cleanup: all
          cache-environment: true
@@ -127,4 +127,4 @@ jobs:
          TEST_PYPI_API_TOKEN: ${{ secrets.TEST_PYPI_API_TOKEN }}
        with:
          package_name: calliope
-          version: ${{needs.pre-release-version.outputs.version}}
\ No newline at end of file
+          version: ${{needs.pre-release-version.outputs.version}}
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -3,7 +3,7 @@ default_language_version:

repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit # https://beta.ruff.rs/docs/usage/#github-action
-    rev: v0.5.6
+    rev: v0.6.3
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,13 @@

|changed| cost expressions in math, to split out investment costs into the capital cost (`cost_investment`), annualised capital cost (`cost_investment_annualised`), fixed operation costs (`cost_operation_fixed`) and variable operation costs (`cost_operation_variable`, previously `cost_var`) (#645).

+|new| Math has been removed from `model.math`, and can now be accessed via `model.math.data` (#639).
+
+|new| (non-NaN) Default values and data types for parameters appear in math documentation (if they appear in the model definition schema) (#677).
+
+|changed| `data_sources` -> `data_tables` and `data_sources.source` -> `data_tables.data`.
+This change has occurred to avoid confusion between data "sources" and model energy "sources" (#673).

## 0.7.0.dev4 (2024-09-10)

### User-facing changes
@@ -60,6 +67,12 @@ Parameter titles from the model definition schema will also propagate to the mod

### Internal changes

+|changed| `model._model_def_dict` has been removed.
+
+|new| `CalliopeMath` is a new helper class to handle math additions, including separate methods for pre-defined math, user-defined math and validation checks.
+
+|changed| `MathDocumentation` has been extracted from `Model`/`LatexBackend` and is now a postprocessing module which can take models as input.

|new| `gurobipy` is a development dependency that will be added as an optional dependency to the conda-forge calliope feedstock recipe.

|changed| Added any new math dicts defined with `calliope.Model.backend.add_[...](...)` to the backend math dict registry stored in `calliope.Model.backend.inputs.attrs["math"]`.
74 changes: 37 additions & 37 deletions docs/creating/data_sources.md → docs/creating/data_tables.md
@@ -1,17 +1,17 @@
-# Loading tabular data (`data_sources`)
+# Loading tabular data (`data_tables`)

We have chosen YAML syntax to define Calliope models as it is human-readable.
However, when you have a large dataset, the YAML files can become large and ultimately not as readable as we would like.
For instance, for parameters that vary in time we would have a list of 8760 values and timestamps to put in our YAML file!

-Therefore, alongside your YAML model definition, you can load tabular data from CSV files (or from in-memory [pandas.DataFrame][] objects) under the `data_sources` top-level key.
+Therefore, alongside your YAML model definition, you can load tabular data from CSV files (or from in-memory [pandas.DataFrame][] objects) under the `data_tables` top-level key.
As of Calliope v0.7.0, this tabular data can be of _any_ kind.
Prior to this, loading from file was limited to timeseries data.

-The full syntax for loading tabular data can be found in the associated [schema][data-source-schema].
+The full syntax for loading tabular data can be found in the associated [schema][data-table-schema].
In brief it is:

-* **source**: path to file or reference name for an in-memory object.
+* **data**: path to file or reference name for an in-memory object.
* **rows**: the dimension(s) in your table defined per row.
* **columns**: the dimension(s) in your table defined per column.
* **select**: values within dimensions that you want to select from your tabular data, discarding the rest.
@@ -126,9 +126,9 @@ In this section we will show some examples of loading data and provide the equiv
YAML definition to load data:

```yaml
-data_sources:
+data_tables:
  pv_capacity_factor_data:
-    source: data_sources/pv_resource.csv
+    data: data_tables/pv_resource.csv
    rows: timesteps
    add_dims:
      techs: pv
@@ -181,9 +181,9 @@ In this section we will show some examples of loading data and provide the equiv
YAML definition to load data:

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: [techs, parameters]
```
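
For intuition, the same shape can be sketched in memory with pandas (parameter names and values here are hypothetical; the two row index levels mirror `rows: [techs, parameters]`):

```python
import pandas as pd

# Hypothetical tech/parameter values; the two row index levels
# mirror `rows: [techs, parameters]` in the YAML above.
index = pd.MultiIndex.from_tuples(
    [("tech1", "flow_cap_max"), ("tech1", "flow_out_eff"), ("tech2", "flow_cap_max")],
    names=["techs", "parameters"],
)
tech_data = pd.DataFrame({"values": [100, 0.9, 50]}, index=index)
print(tech_data)
```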

@@ -224,9 +224,9 @@ In this section we will show some examples of loading data and provide the equiv
YAML definition to load data:

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: [techs, parameters]
    add_dims:
      costs: monetary
@@ -272,7 +272,7 @@ In this section we will show some examples of loading data and provide the equiv
1. To limit repetition, we have defined [templates](templates.md) for our costs.

!!! info "See also"
-    Our [data source loading tutorial][loading-tabular-data] has more examples of loading tabular data into your model.
+    Our [data table loading tutorial][loading-tabular-data] has more examples of loading tabular data into your model.

## Selecting dimension values and dropping dimensions

@@ -290,9 +290,9 @@ Data in file:
YAML definition to load only data from nodes 1 and 2:

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: [techs, parameters]
    columns: nodes
    select:
@@ -312,22 +312,22 @@ You will also need to `drop` the dimension so that it doesn't appear in the fina
YAML definition to load only data from scenario 1:

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: [techs, parameters]
    columns: scenarios
    select:
      scenarios: scenario1
    drop: scenarios
```

-You can then also tweak just one line of your data source YAML with an [override](scenarios.md) to point to your other scenario:
+You can then also tweak just one line of your data table YAML with an [override](scenarios.md) to point to your other scenario:

```yaml
overrides:
  switch_to_scenario2:
-    data_sources.tech_data.select.scenarios: scenario2 # (1)!
+    data_tables.tech_data.select.scenarios: scenario2 # (1)!
```

1. We use the dot notation as a shorthand for [abbreviating nested dictionaries](yaml.md#abbreviated-nesting).
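
A minimal sketch of applying that override when loading the model (assuming it is defined under `overrides` in your `model.yaml`):

```python
import calliope

# Overrides are applied by name via the `scenario` argument at initialisation.
model = calliope.Model("path/to/model.yaml", scenario="switch_to_scenario2")
```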
@@ -348,9 +348,9 @@ For example, to define costs for the parameter `cost_flow_cap`:
| tech3 | monetary | cost_flow_cap | 20 | 45 | 50 |

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: [techs, costs, parameters]
    columns: nodes
```
@@ -364,9 +364,9 @@ For example, to define costs for the parameter `cost_flow_cap`:
| tech3 | 20 | 45 | 50 |

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: techs
    columns: nodes
    add_dims:
@@ -384,9 +384,9 @@ Or to define the same timeseries source data for two technologies at different n
| 2005-01-01 01:00 | 200 | 200 |

```yaml
-data_sources:
+data_tables:
  tech_data:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: timesteps
    columns: [nodes, techs, parameters]
```
@@ -401,16 +401,16 @@ Or to define the same timeseries source data for two technologies at different n
| 2005-01-01 01:00 | 200 |

```yaml
-data_sources:
+data_tables:
  tech_data_1:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: timesteps
    add_dims:
      techs: tech1
      nodes: node1
      parameters: source_use_max
  tech_data_2:
-    source: data_sources/tech_data.csv
+    data: data_tables/tech_data.csv
    rows: timesteps
    add_dims:
      techs: tech2
@@ -420,10 +420,10 @@

## Loading CSV files vs `pandas` dataframes

-To load from CSV, set the filepath in `source` to point to your file.
+To load from CSV, set the filepath in `data` to point to your file.
This filepath can either be relative to your `model.yaml` file (as in the above examples) or an absolute path.

-To load from a [pandas.DataFrame][], you can specify the `data_source_dfs` dictionary of objects when you initialise your model:
+To load from a [pandas.DataFrame][], you can specify the `data_table_dfs` dictionary of objects when you initialise your model:

```python
import calliope
@@ -433,19 +433,19 @@ df2 = pd.DataFrame(...)
model = calliope.Model(
    "path/to/model.yaml",
-    data_source_dfs={"data_source_1": df1, "data_source_2": df2}
+    data_table_dfs={"data_source_1": df1, "data_source_2": df2}
)
```

-And then you point to those dictionary keys in the `source` for your data source:
+And then you point to those dictionary keys in the `data` for your data table:

```yaml
-data_sources:
+data_tables:
  ds1:
-    source: data_source_1
+    data: data_source_1
    ...
  ds2:
-    source: data_source_2
+    data: data_source_2
    ...
```

@@ -454,7 +454,7 @@ data_sources:
Rows correspond to your dataframe index levels and columns to your dataframe column levels.

You _cannot_ specify [pandas.Series][] objects.
-Ensure you convert them to dataframes (`to_frame()`) before adding them to your data source dictionary.
+Ensure you convert them to dataframes (`to_frame()`) before adding them to your data table dictionary.
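
For example, a minimal sketch (with hypothetical data) of preparing a timeseries for `data_table_dfs`, naming the index level to match `rows: timesteps`:

```python
import pandas as pd

# Hypothetical timeseries; the index level name must match the `rows`
# declared for this table in YAML.
ts = pd.Series(
    [100, 200],
    index=pd.Index(["2005-01-01 00:00", "2005-01-01 01:00"], name="timesteps"),
)
df = ts.to_frame("values")  # Series must be converted to DataFrames first.
```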

## Important considerations

@@ -468,8 +468,8 @@ This could be defined in `rows`, `columns`, or `add_dims`.
3. `add_dims` to add dimensions.
This means you can technically select value "A" from dimensions `nodes`, then drop `nodes`, then add `nodes` back in with the value "B".
This effectively replaces "A" with "B" on that dimension.
-3. The order of tabular data loading is in the order you list the sources.
-If a new table has data which clashes with preceding data sources, it will override that data.
+3. The order of tabular data loading is in the order you list the tables.
+If a new table has data which clashes with preceding tables, it will override that data.
This may have unexpected results if the files have different dimensions as the dimensions will be broadcast to match each other.
4. CSV files must have `.csv` in their filename (even if compressed, e.g., `.csv.zip`).
If they don't, they won't be picked up by Calliope.
@@ -481,7 +481,7 @@ E.g.,
nodes:
  node1.techs: {tech1, tech2, tech3}
  node2.techs: {tech1, tech2}
-data_sources:
+data_tables:
  ...
```
6. We process dimension data after loading it in according to a limited set of heuristics:
8 changes: 4 additions & 4 deletions docs/creating/index.md
@@ -35,7 +35,7 @@ We distinguish between:
- the model **definition** (your representation of a physical system in YAML).

Model configuration is everything under the top-level YAML key [`config`](config.md).
-Model definition is everything else, under the top-level YAML keys [`parameters`](parameters.md), [`techs`](techs.md), [`nodes`](nodes.md), [`templates`](templates.md), and [`data_sources`](data_sources.md).
+Model definition is everything else, under the top-level YAML keys [`parameters`](parameters.md), [`techs`](techs.md), [`nodes`](nodes.md), [`templates`](templates.md), and [`data_tables`](data_tables.md).

It is possible to define alternatives to the model configuration/definition that you can refer to when you initialise your model.
These are defined under the top-level YAML keys [`scenarios` and `overrides`](scenarios.md).
@@ -52,7 +52,7 @@ The layout of that directory typically looks roughly like this (`+` denotes dire
  + model_definition
      - nodes.yaml
      - techs.yaml
- + data_sources
+ + data_tables
      - solar_resource.csv
      - electricity_demand.csv
  - model.yaml
@@ -63,7 +63,7 @@ In the above example, the files `model.yaml`, `nodes.yaml` and `techs.yaml` toge
This definition could be in one file, but it is more readable when split into multiple.
We use the above layout in the example models.

-Inside the `data_sources` directory, tabular data are stored as CSV files.
+Inside the `data_tables` directory, tabular data are stored as CSV files.

!!! note
    The easiest way to create a new model is to use the `calliope new` command, which makes a copy of one of the built-in example models:
@@ -85,4 +85,4 @@ The rest of this section discusses everything you need to know to set up a model
- More details on the [model configuration](config.md).
- The key parts of the model definition, first, the [technologies](techs.md), then, the [nodes](nodes.md), the locations in space where technologies can be placed.
- How to use [technology and node templates](templates.md) to reduce repetition in the model definition.
-- Other important features to be aware of when defining your model: defining [indexed parameters](parameters.md), i.e. parameters which are not indexed over technologies and nodes, [loading tabular data](data_sources.md), and defining [scenarios and overrides](scenarios.md).
+- Other important features to be aware of when defining your model: defining [indexed parameters](parameters.md), i.e. parameters which are not indexed over technologies and nodes, [loading tabular data](data_tables.md), and defining [scenarios and overrides](scenarios.md).
25 changes: 1 addition & 24 deletions docs/examples/calliope_model_object.py
@@ -36,33 +36,10 @@
# Get information on the model
print(m.info())

-# %% [markdown]
-# ## Model definition dictionary
-#
-# `m._model_def_dict` is a python dictionary that holds all the data from the model definition YAML files, restructured into one dictionary.
-#
-# The underscore before the method indicates that it defaults to being hidden (i.e. you wouldn't see it by trying a tab auto-complete and it isn't documented)
-
-# %%
-m._model_def_dict.keys()
-
-# %% [markdown]
-# `techs` hold only the information about a technology that is specific to that node
-
-# %%
-m._model_def_dict["techs"]["pv"]
-
-# %% [markdown]
-# `nodes` hold only the information about a technology that is specific to that node
-
-# %%
-m._model_def_dict["nodes"]["X2"]["techs"]["pv"]
-
# %% [markdown]
# ## Model data
#
-# `m._model_data` is an xarray Dataset.
-# Like `_model_def_dict` it is a hidden prperty of the Model as you are expected to access the data via the public property `inputs`
+# `m._model_data` is an xarray Dataset, a hidden property of the Model as you are expected to access the data via the public property `inputs`

# %%
m.inputs
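
As a brief illustration (the parameter name here is hypothetical), individual parameters can be pulled out of the `inputs` xarray Dataset:

```python
# `inputs` is an xarray.Dataset; select one parameter and view it as a table.
m.inputs["flow_cap_max"].to_series().dropna()
```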