diff --git a/docs/community/faq/faq-contributors.md b/docs/community/faq/faq-contributors.md
index f251fea4..c312a62d 100644
--- a/docs/community/faq/faq-contributors.md
+++ b/docs/community/faq/faq-contributors.md
@@ -1,67 +1,58 @@
 # First-Time Contributors' Frequently Asked Questions
 
-**TODO**
-
-## Getting Started
-
-1. How can I contribute to AutoEmulate?
-
-
-2. What are the guidelines for contributing code?
-
-
-3. How do I choose what to work on for my first contribution?
-
-
-4. What coding standards and practices does AutoEmulate follow?
-
-
-5. Are there any specific development tools or environments recommended for working on AutoEmulate?
-
-
-## Making Contributions
-
-1. How do I submit a contribution, and what is the review process?
-
-
-2. Can I contribute by writing documentation or tutorials, and how?
-
-
-3. What should I do if my pull request gets rejected or needs revision?
-
-
 ## Technical Questions
 
 1. How is the AutoEmulate project structured?
 
+   * The key component is the `AutoEmulate` class in `autoemulate/compare.py`, which is the main class for setting up and comparing emulators, visualising and summarising results, saving models, and running applications such as sensitivity analysis.
+   * All other modules in `autoemulate/` are supporting modules for the main class, covering data splitting, model processing, hyperparameter search, plotting, saving, etc.
+   * `autoemulate/emulators/` contains the emulator models, which are implemented as [scikit-learn estimators](https://scikit-learn.org/1.5/developers/develop.html) (see the sketch below). Deep learning models have two main parts: the scikit-learn estimator interface in `autoemulate/emulators/` and the neural network architecture in `autoemulate/emulators/neural_networks/`.
+   * Emulators need to be registered in the model registry in `autoemulate/emulators/__init__.py` to be available in `AutoEmulate`.
+   * `autoemulate/simulations/` contains simple example simulations.
+   * `tests/` contains tests for the package.
+   * `data/` contains example datasets.
+   * `docs/` contains the documentation source files. We use `jupyter-book` to build the documentation.
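+
+   As a rough orientation, emulators follow the standard scikit-learn estimator pattern. The sketch below is a generic, illustrative skeleton only; it is not the actual `AutoEmulate` base class, and the real conventions (multi-output handling, registration, naming) are best taken from the existing emulators in `autoemulate/emulators/`:
+
+     ```python
+     import numpy as np
+     from sklearn.base import BaseEstimator, RegressorMixin
+     from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
+
+     class MeanEmulator(BaseEstimator, RegressorMixin):
+         """Toy emulator that always predicts the mean training output."""
+
+         def fit(self, X, y):
+             X, y = check_X_y(X, y)          # real emulators also handle multi-output y
+             self.mean_ = float(np.mean(y))  # trailing underscore marks fitted attributes
+             return self                     # fit() must return self
+
+         def predict(self, X):
+             check_is_fitted(self)
+             X = check_array(X)
+             return np.full(X.shape[0], self.mean_)
+     ```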
 
 2. How do I set up my development environment for AutoEmulate?
 
+   See the 'Install using Poetry' section of the [installation](../../getting-started/installation.md) page.
 
 3. How do I run tests for AutoEmulate?
 
+   * We use `pytest` to run the tests. To run all tests:
 
-## Community and Support
+     ```bash
+     pytest
+     ```
 
-1. Where can I ask questions if I'm stuck?
-
+   * To run tests with print statements:
 
-2. How does AutoEmulate handle contributions related to security issues?
-
+     ```bash
+     pytest -s
+     ```
 
-3. Is there a code of conduct for contributors?
-
+   * To run a specific test module:
 
-4. How can I get involved in decision-making or project planning as a contributor?
-
+     ```bash
+     pytest tests/test_example.py
+     ```
 
-## Beyond Code Contributions
+   * To run a specific test:
 
-1. Can I contribute without coding, for example, through design, marketing, or community management?
-
+     ```bash
+     pytest tests/test_example.py::test_function
+     ```
 
-2. How does the project recognise or reward contributions?
-
+## Community and Support
 
-3. Are there regular meetings or forums where contributors can discuss the project?
-
+1. Where can I ask questions if I'm stuck?
+
+   * We use [Discussions on GitHub](https://github.com/alan-turing-institute/autoemulate/discussions) for questions and general discussion.
+
+2. Is there a code of conduct for contributors?
+
+   * Yes, it's [here](../code-of-conduct.md).
+
+3. How can I get involved in decision-making or project planning as a contributor?
+
+   * We use GitHub [Discussions](https://github.com/alan-turing-institute/autoemulate/discussions) for general discussion and [Issues](https://github.com/alan-turing-institute/autoemulate/issues) for project planning and development.
\ No newline at end of file
diff --git a/docs/community/faq/faq-users.md b/docs/community/faq/faq-users.md
index a6d9e464..bac5d7ec 100644
--- a/docs/community/faq/faq-users.md
+++ b/docs/community/faq/faq-users.md
@@ -4,36 +4,31 @@
 1. What is `AutoEmulate`?
 
-   - A Python package that makes it easy to build emulators for complex simulations. It takes a set of simulation inputs `X` and outputs `y`, and automatically fits, optimises and evaluates various machine learning models to find the best emulator model. The emulator model can then be used as a drop-in replacement for the simulation, but will be much faster and computationally cheaper to evaluate.
+   - A Python package that makes it easy to create emulators for complex simulations. It takes a set of simulation inputs `X` and outputs `y`, and automatically fits, optimises and evaluates various machine learning models to find the best emulator model. The emulator model can then be used as a drop-in replacement for the simulation, but will be much faster and computationally cheaper to evaluate. We have also implemented global sensitivity analysis as a common emulator application and are working towards making `AutoEmulate` a true end-to-end package for building emulators.
 
-2. How do I install `AutoEmulate`?
-
-   - See the [installation guide](../../getting-started/installation.md) for detailed instructions.
+2. How do I know whether `AutoEmulate` is the right tool for me?
+   - You need to build an emulator for a simulation.
+   - You want to do global sensitivity analysis.
+   - Your inputs `X` and outputs `y` are numeric and complete (we don't support missing data yet).
+   - You have one or more input parameters and one or more output variables.
+   - You have a small-ish dataset on the order of hundreds to a few thousand samples. All default emulator parameters and search spaces are optimised for smaller datasets.
 
-3. What are the prerequisites for using `AutoEmulate`?
-
-   - `AutoEmulate` is designed to be easy to use. The user has to first generate a dataset of simulation inputs `X` and outputs `y`, and optimally have a basic understanding of Python and machine learning concepts.
+3. Does `AutoEmulate` support multi-output data?
+   - Yes, all models support multi-output data. Some do so natively, others are wrapped in a `MultiOutputRegressor`, which fits one model per target variable.
 
-## Usage Questions
-
-1. How do I start using `AutoEmulate` with my simulation?
-
-   - See the [getting started guide](../../getting-started/quickstart.ipynb) or a more [in-depth tutorial](../../tutorials/01_start.ipynb).
-
-2. What kind of data does `AutoEmulate` need to build an emulator?
-
-   - `AutoEmulate` takes simulation inputs `X` and simulation outputs `y` to build an emulator.`X` is an ndarray of shape `(n_samples, n_parameters)` and `y` is an ndarray of shape `(n_samples, n_outputs)`. Each sample here is a simulation run, so each row of `X` corresponds to a set of input parameters and each row of `y` corresponds to the corresponding simulation output. Currently, all inputs and outputs should be numeric, and we don't support missing data.
+4. Does `AutoEmulate` support temporal or spatial data?
+   - Not explicitly. The train-test split just takes a random subset as the test set, as does k-fold cross-validation.
-
-   - All models work with multi-output data. We have optimised `AutoEmulate` to work with smaller datasets (in the order of hundreds to thousands of samples). Training emulators with large datasets (hundreds of thousands of samples) may currently require a long time and is not recommended.
+5. Why is `AutoEmulate` so slow?
+   - The package fits a lot of models, in particular when hyperparameters are optimised. With, say, 8 default models and 5-fold cross-validation, this amounts to 40 model fits. With hyperparameter optimisation added (`n_iter=20`), this results in 800 model fits. Some models, such as Gaussian Processes and Neural Processes, will take a long time to run on a CPU. However, don't despair! There is a [speeding up AutoEmulate guide](../../tutorials/02_speed.ipynb). As a rule of thumb, if your dataset is smaller than 1000 samples you should be fine; if it's larger and you want to optimise hyperparameters, you might want to read the guide.
 
-3. How do I interpret the results from `AutoEmulate`?
-
-   - See the [tutorial](../../tutorials/01_start.ipynb) for an example of how to interpret the results from `AutoEmulate`. Briefly, `X` and `y` are first split into training and test sets. Cross-validation and/or hyperparameter optimisation are performed on the training data. After comparing the results from different emulators, the user can evaluate the chosen emulator on the test set with `AutoEmulate.evaluate_model()`, and plot test set predictions with `AutoEmulate.plot_model()`, see [autoemulate.compare](../../reference/compare.rst) module for details.
+## Usage Questions
 
-   - An important thing to note is that the emulator can only be as good as the data it was trained on. Therefore, the experimental design (on which points the simulation was evaluated) is key to obtaining a good emulator.
+1. What data do I need to provide to `AutoEmulate` to build an emulator?
+
+   - You'll need two input objects: `X` and `y`. `X` is an ndarray / Pandas DataFrame of shape `(n_samples, n_parameters)` and `y` is an ndarray / Pandas DataFrame of shape `(n_samples, n_outputs)`. Each sample here is a simulation run, so each row of `X` corresponds to a set of input parameters and each row of `y` contains the corresponding simulation output. You'll usually have created `X` using Latin hypercube sampling or similar methods, and `y` by running the simulation on these `X` inputs.
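+
+   - For illustration, a minimal sketch of what this can look like (here `X` is drawn with `scipy`'s Latin hypercube sampler and the simulation is a simple stand-in function; check the [tutorial](../../tutorials/01_start.ipynb) for the exact `AutoEmulate` calls):
+
+     ```python
+     import numpy as np
+     from scipy.stats import qmc
+     from autoemulate.compare import AutoEmulate
+
+     # sample 200 points in a 3-parameter input space with Latin hypercube sampling
+     sampler = qmc.LatinHypercube(d=3)
+     X = qmc.scale(sampler.random(n=200), l_bounds=[0, 0, 0], u_bounds=[1, 1, 1])
+
+     # run the simulation once per row of X; here a toy function with 2 outputs per run
+     y = np.column_stack([X.sum(axis=1), np.sin(X).sum(axis=1)])  # shape (200, 2)
+
+     ae = AutoEmulate()
+     ae.setup(X, y)             # X: (n_samples, n_parameters), y: (n_samples, n_outputs)
+     best_model = ae.compare()  # fit, cross-validate and compare emulators
+     ```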
 
-4. Can I use `AutoEmulate` for commercial purposes?
+2. Can I use `AutoEmulate` for commercial purposes?
 
    - Yes. It's licensed under the MIT license, which allows for commercial use. See the [license](../../../LICENSE) for more information.
 
@@ -41,28 +36,24 @@
 1. Does AutoEmulate support parallel processing or high-performance computing (HPC) environments?
 
-   - Yes, [AutoEmulate.setup()](../../reference/compare.rst) has an `n_jobs` parameter which allows to parallelise cross-validation and hyperparameter optimisation.
+   - Yes, [AutoEmulate.setup()](../../reference/compare.rst) has an `n_jobs` parameter which allows you to parallelise cross-validation and hyperparameter optimisation. We are also working on GPU support for some models.
 
 2. Can AutoEmulate be integrated with other data analysis or simulation tools?
 
-   - `AutoEmulate` takes simple `X` and `y` ndarrays as input, and returns emulator models that can be saved and loaded with `joblib`. All emulators are written as scikit learn estimators, so they can be used like any other scikit learn model in a pipeline.
+   - `AutoEmulate` takes simple `X` and `y` ndarrays as input, and returns emulators that are [scikit-learn estimators](https://scikit-learn.org/1.5/developers/develop.html), which can be saved, loaded and used like any other scikit-learn model.
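+
+     For example, a rough sketch of saving, reloading and reusing a fitted emulator (assuming `ae.compare()` has returned a fitted model and `X_new` stands in for new input data; `AutoEmulate` also has its own model-saving functionality, see [autoemulate.compare](../../reference/compare.rst)):
+
+     ```python
+     import joblib
+
+     best_model = ae.compare()                   # fitted scikit-learn estimator
+     joblib.dump(best_model, "emulator.joblib")  # save it like any scikit-learn model
+
+     emulator = joblib.load("emulator.joblib")   # load it elsewhere
+     y_pred = emulator.predict(X_new)            # drop-in replacement for the simulation
+     ```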
 
 ## Data Handling
 
 1. What are the best practices for data preprocessing before using `AutoEmulate`?
 
-   - The user will typically run their simulation on a selected set of input parameters (-> experimental design) using a latin hypercube or other sampling method. `AutoEmulate` currently needs all inputs to be numeric and we don't support missing data. By default, `AutoEmulate` will scale the input data to zero mean and unit variance, and there's the option to do dimensionality reduction in `setup()`.
-
-2. How does AutoEmulate handle large datasets?
-
-   - `AutoEmulate` is optimised to work with smaller datasets (in the order of hundreds to thousands of samples). Training emulators with large datasets (hundreds of thousands of samples) may currently require a long time and is not recommended. Emulators are created because it's expensive to evaluate the simulation, so we expect most users to have a relatively small dataset.
+   - The user will typically run their simulation on a selected set of input parameters (the experimental design) using a Latin hypercube or another sampling method. `AutoEmulate` currently needs all inputs to be numeric and we don't support missing data. By default, `AutoEmulate` will scale the input data to zero mean and unit variance, and for some models it will also scale the output data. There's also the option to do dimensionality reduction in `setup()`.
 
 ## Troubleshooting
 
 1. What common issues might I encounter when using `AutoEmulate`, and how can I solve them?
 
    - `AutoEmulate.setup()` has a `log_to_file` option to log all warnings/errors to a file. It also has a `verbose` option to print more information to the console. If you encounter an error, please open an issue (see below).
-
+   - One common issue is that the Jupyter notebook kernel crashes when running `compare()` in parallel, often due to `LightGBM`. In this case, we recommend either specifying `n_jobs=1` or selecting specific (non-LightGBM) models in `setup()` with the `models` parameter.
 
 2. How can I report a bug or request a feature in `AutoEmulate`?
 
    - You can report a bug or request a new feature through the [issue templates](https://github.com/alan-turing-institute/autoemulate/issues/new/choose) in our GitHub repository. Head on over there and choose one of the templates for your purpose and get started.
 
@@ -71,11 +62,11 @@
 1. Are there any community projects or collaborations using `AutoEmulate` I can join or learn from?
 
-   - Reach out to Martin ([email](mailto:mstoffel@turing.ac.uk)) or Kalle ([email](mailto:kwesterline@turing.ac.uk)) for more information.
+   - Reach out to Martin ([email](mailto:mstoffel@turing.ac.uk)) or Sophie ([email](mailto:sarana@turing.ac.uk)) for more information.
 
 2. Where can I find tutorials or case studies on using `AutoEmulate`?
 
-   - See the [tutorial](../../tutorials/01_start.ipynb) for a comprehensive guide on using the package.
+   - See the [tutorial](../../tutorials/01_start.ipynb) for a comprehensive guide on using the package. Case studies are coming soon.
 
 3. How can I stay updated on new releases or updates to AutoEmulate?
 
@@ -83,4 +74,4 @@
 4. What support options are available if I need help with AutoEmulate?
 
-   - Please open an issue or contact the maintainer ([email](mailto:mstoffel@turing.ac.uk)) directly.
+   - Please open an issue on GitHub or contact the maintainer ([email](mailto:mstoffel@turing.ac.uk)) directly.
diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md
index 6ad0860e..d0120a6d 100644
--- a/docs/getting-started/installation.md
+++ b/docs/getting-started/installation.md
@@ -2,38 +2,52 @@
 `AutoEmulate` is a Python package that can be installed in a number of ways. In this section we will describe the main ways to install the package.
 
-## Install from PyPI
+## Install from GitHub
 
 This is the easiest way to install `AutoEmulate`.
 
-Currently, because we are in active development, you have to install the development version from GitHub:
+Currently, because we are in active development, it's recommended to install the development version from GitHub:
+
+```bash
+pip install git+https://github.com/alan-turing-institute/autoemulate.git
+```
+
+## Install from PyPI
+
+Once we have a release on PyPI, you can install the package from there:
 
 ```bash
-$ pip install git+https://github.com/alan-turing-institute/autoemulate.git
+pip install autoemulate
 ```
 
 ## Install using Poetry
 
-If you are a code contributor, you can also use [Poetry](https://python-poetry.org/)
+If you'd like to contribute to `AutoEmulate`, you can install the package using Poetry.
+
+* Ensure you have Poetry installed. If not, install it following the [official instructions](https://python-poetry.org/docs/).
+
+* Fork the repository on GitHub by clicking the "Fork" button at the top right of the [AutoEmulate repository](https://github.com/alan-turing-institute/autoemulate).
+
+* Clone your forked repository:
 
 ```bash
-$ git clone https://github.com/alan-turing-institute/autoemulate.git
+git clone https://github.com/YOUR-USERNAME/autoemulate.git
 ```
 
 Navigate into the directory:
 
-```
-$ cd autoemulate
+```bash
+cd autoemulate
 ```
 
 Set up poetry:
 
-```
-$ poetry install
+```bash
+poetry install
 ```
 
 Enter the poetry shell:
 
-```
-$ poetry shell
+```bash
+poetry shell
 ```