PySR: High-Performance Symbolic Regression in Python and Julia

PySR searches for symbolic expressions which optimize a particular objective.

(Demo animation: pysr_animation.mp4)


If you find PySR useful, please cite the paper arXiv:2305.01582. If you've finished a project with PySR, please submit a PR to showcase your work on the research showcase page!

Contents:

  • Why PySR?
  • Installation
  • Quickstart
  • Detailed Example
  • Docker
  • Contributors


Why PySR?

PySR is an open-source tool for Symbolic Regression: a machine learning task where the goal is to find an interpretable symbolic expression that optimizes some objective.

Over a period of several years, PySR has been engineered from the ground up to be (1) as high-performance as possible, (2) as configurable as possible, and (3) easy to use. PySR is developed alongside the Julia library SymbolicRegression.jl, which forms the powerful search engine of PySR. The details of these algorithms are described in the PySR paper.

Symbolic regression works best on low-dimensional datasets, but one can also extend these approaches to higher-dimensional spaces by using "Symbolic Distillation" of neural networks, as explained in arXiv:2006.11287, where we apply it to N-body problems. Here, one essentially uses symbolic regression to convert a neural net to an analytic equation. Thus, these tools simultaneously present an explicit and powerful way to interpret deep neural networks.

Installation

Pip

You can install PySR with pip:

pip install pysr

Julia dependencies will be installed at first import.
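Since this setup happens lazily at first import, you can trigger the one-time Julia install ahead of time by importing the package once:

python -c "import pysr"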

Conda

Similarly, with conda:

conda install -c conda-forge pysr

Docker

You can also use the Dockerfile to install PySR in a Docker container:

  1. Clone this repo.
  2. Within the repo's directory, build the docker container:

docker build -t pysr .

  3. You can then start the container with an IPython execution with:

docker run -it --rm pysr ipython

For more details, see the docker section.

Apptainer

If you are using PySR on a cluster where you do not have root access, you can use Apptainer to build a container instead of Docker. The Apptainer.def file is analogous to the Dockerfile, and can be built with:

apptainer build --notest pysr.sif Apptainer.def

and launched with

apptainer run pysr.sif

Troubleshooting

One issue you might run into is a hard crash at import with a message like "GLIBCXX_... not found". This is caused by another one of the Python dependencies loading an incorrect libstdc++ library. To fix it, modify your LD_LIBRARY_PATH variable to reference the Julia libraries. For example, if the Julia version of libstdc++.so is located in $HOME/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/lib/julia/ (which likely differs on your system!), you could add:

export LD_LIBRARY_PATH=$HOME/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/lib/julia/:$LD_LIBRARY_PATH

to your .bashrc or .zshrc file.

Quickstart

You might wish to try the interactive tutorial, which uses the notebook in examples/pysr_demo.ipynb.

In practice, I highly recommend using IPython rather than Jupyter, as the printing is much nicer. Below is a quick demo which you can paste into a Python runtime. First, let's import numpy to generate some test data:

import numpy as np

X = 2 * np.random.randn(100, 5)
y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 0.5

We have created a dataset with 100 datapoints, with 5 features each. The relation we wish to model is $2.5382 \cos(x_3) + x_0^2 - 0.5$.

Now, let's create a PySR model and train it. PySR's main interface is in the style of scikit-learn:

from pysr import PySRRegressor

model = PySRRegressor(
    niterations=40,  # < Increase me for better results
    binary_operators=["+", "*"],
    unary_operators=[
        "cos",
        "exp",
        "sin",
        "inv(x) = 1/x",
        # ^ Custom operator (julia syntax)
    ],
    extra_sympy_mappings={"inv": lambda x: 1 / x},
    # ^ Define operator for SymPy as well
    elementwise_loss="loss(prediction, target) = (prediction - target)^2",
    # ^ Custom loss function (julia syntax)
)

This will set up the model for 40 iterations of the search code, which contains hundreds of thousands of mutations and equation evaluations.

Let's train this model on our dataset:

model.fit(X, y)

Internally, this launches a Julia process which will do a multithreaded search for equations to fit the dataset.

Equations will be printed during training, and once you are satisfied, you may quit early by hitting 'q' and then <enter>.

After the model has been fit, you can run model.predict(X) to see the predictions on a given dataset using the automatically-selected expression, or, for example, model.predict(X, 3) to see the predictions of the 3rd equation.
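For example (a minimal sketch; the index refers to a row of the equations table printed below):

y_pred = model.predict(X)  # predictions from the automatically-selected equation
y_pred_3 = model.predict(X, 3)  # predictions from the equation at index 3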

You may run:

print(model)

to print the learned equations:

PySRRegressor.equations_ = [
	   pick     score                                           equation       loss  complexity
	0        0.000000                                          4.4324794  42.354317           1
	1        1.255691                                          (x0 * x0)   3.437307           3
	2        0.011629                          ((x0 * x0) + -0.28087974)   3.358285           5
	3        0.897855                              ((x0 * x0) + cos(x3))   1.368308           6
	4        0.857018                ((x0 * x0) + (cos(x3) * 2.4566472))   0.246483           8
	5  >>>>       inf  (((cos(x3) + -0.19699033) * 2.5382123) + (x0 *...   0.000000          10
]

The arrow in the pick column indicates which equation is currently selected by your model_selection strategy for prediction. (You may change model_selection after .fit(X, y) as well.)
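For instance, to switch the selection strategy after fitting (a sketch; "best" is the default, with "accuracy" and "score" the other options):

model.model_selection = "accuracy"  # model.predict(X) now uses the most accurate equation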

model.equations_ is a pandas DataFrame containing all equations, including callable format (lambda_format), SymPy format (sympy_format - which you can also get with model.sympy()), and even JAX and PyTorch format (both of which are differentiable - which you can get with model.jax() and model.pytorch()).
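A short sketch of these exports:

expr = model.sympy()  # SymPy expression for the selected equation
expr_3 = model.sympy(3)  # SymPy expression for the equation at index 3
torch_module = model.pytorch()  # differentiable PyTorch module
jax_export = model.jax()  # dictionary holding the JAX callable and its parameters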

Note that PySRRegressor stores the state of the last search, and will restart from where you left off the next time you call .fit(), assuming you have set warm_start=True. This will cause problems if significant changes are made to the search parameters (like changing the operators). You can run model.reset() to reset the state.
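A sketch of that workflow:

model = PySRRegressor(warm_start=True, binary_operators=["+", "*"])
model.fit(X, y)  # initial search
model.fit(X, y)  # resumes from the saved search state
model.reset()  # clear the state, e.g., before changing operators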

You will notice that PySR will save two files: hall_of_fame...csv and hall_of_fame...pkl. The csv file is a list of equations and their losses, and the pkl file is a saved state of the model. You may load the model from the pkl file with:

model = PySRRegressor.from_file("hall_of_fame.2022-08-10_100832.281.pkl")

There are several other useful features such as denoising (e.g., denoise=True), feature selection (e.g., select_k_features=3). For examples of these and other features, see the examples page. For a detailed look at more options, see the options page. You can also see the full API at this page. There are also tips for tuning PySR on this page.
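For instance, a minimal sketch combining those two options:

model = PySRRegressor(
    denoise=True,  # pre-smooth the targets with a Gaussian process
    select_k_features=3,  # search over only the 3 most predictive features
)
model.fit(X, y)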

Detailed Example

The following code makes use of as many PySR features as possible. Note that this is just a demonstration of features; you should not use this example as-is. For details on what each parameter does, check out the API page.

import sympy
from pysr import PySRRegressor

model = PySRRegressor(
    procs=4,
    populations=8,
    # ^ 2 populations per core, so one is always running.
    population_size=50,
    # ^ Slightly larger populations, for greater diversity.
    ncycles_per_iteration=500,
    # ^ Generations between migrations.
    niterations=10000000,  # Run forever
    early_stop_condition=(
        "stop_if(loss, complexity) = loss < 1e-6 && complexity < 10"
        # Stop early if we find a good and simple equation
    ),
    timeout_in_seconds=60 * 60 * 24,
    # ^ Alternatively, stop after 24 hours have passed.
    maxsize=50,
    # ^ Allow greater complexity.
    maxdepth=10,
    # ^ But, avoid deep nesting.
    binary_operators=["*", "+", "-", "/"],
    unary_operators=["square", "cube", "exp", "cos2(x)=cos(x)^2"],
    constraints={
        "/": (-1, 9),
        "square": 9,
        "cube": 9,
        "exp": 9,
    },
    # ^ Limit the complexity within each argument.
    # "inv": (-1, 9) states that the numerator has no constraint,
    # but the denominator has a max complexity of 9.
    # "exp": 9 simply states that `exp` can only have
    # an expression of complexity 9 as input.
    nested_constraints={
        "square": {"square": 1, "cube": 1, "exp": 0},
        "cube": {"square": 1, "cube": 1, "exp": 0},
        "exp": {"square": 1, "cube": 1, "exp": 0},
    },
    # ^ Nesting constraints on operators. For example,
    # "square(exp(x))" is not allowed, since "square": {"exp": 0}.
    complexity_of_operators={"/": 2, "exp": 3},
    # ^ Custom complexity of particular operators.
    complexity_of_constants=2,
    # ^ Punish constants more than variables
    select_k_features=4,
    # ^ Train on only the 4 most important features
    progress=True,
    # ^ Can set to false if printing to a file.
    weight_randomize=0.1,
    # ^ Randomize the tree much more frequently
    cluster_manager=None,
    # ^ Can be set to, e.g., "slurm", to run on a slurm
    # cluster. Just launch one script from the head node.
    precision=64,
    # ^ Higher precision calculations.
    warm_start=True,
    # ^ Start from where left off.
    turbo=True,
    # ^ Faster evaluation (experimental)
    extra_sympy_mappings={"cos2": lambda x: sympy.cos(x)**2},
    # extra_torch_mappings={sympy.cos: torch.cos},
    # ^ Not needed as cos already defined, but this
    # is how you define custom torch operators.
    # extra_jax_mappings={sympy.cos: "jnp.cos"},
    # ^ For JAX, one passes a string.
)

Docker

You can also test out PySR in Docker, without installing it locally, by running the following command in the root directory of this repo:

docker build -t pysr .

This builds an image called pysr for your system's architecture, which also contains IPython. You can select a specific version of Python and Julia with:

docker build -t pysr --build-arg JLVERSION=1.10.0 --build-arg PYVERSION=3.11.6 .

You can then run this image using:

docker run -it --rm -v "$PWD:/data" pysr ipython

which will link the current directory to the container's /data directory and then launch ipython.

If you have issues building for your system's architecture, you can emulate another architecture by adding --platform linux/amd64 to the build and run commands.
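For example, the build and run commands above become:

docker build --platform linux/amd64 -t pysr .
docker run --platform linux/amd64 -it --rm -v "$PWD:/data" pysr ipython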

Contributors ✨

We are eager to welcome new contributors! Check out our contributors guide for tips 🚀. If you have an idea for a new feature, don't hesitate to share it on the issues or discussions page.

  • Mark Kittisopikul: 💻 💡 🚇 📦 📣 👀 🔧 ⚠️
  • T Coxon: 🐛 💻 🔌 💡 🚇 🚧 👀 🔧 ⚠️ 📓
  • Dhananjay Ashok: 💻 🌍 💡 🚧 ⚠️
  • Johan Blåbäck: 🐛 💻 💡 🚧 📣 👀 ⚠️ 📓
  • JuliusMartensen: 🐛 💻 📖 🔌 💡 🚇 🚧 📦 📣 👀 🔧 📓
  • ngam: 💻 🚇 📦 👀 🔧 ⚠️
  • Christopher Rowley: 💻 💡 🚇 📦 👀
  • Kaze Wong: 🐛 💻 💡 🚇 🚧 📣 👀 🔬 📓
  • Christopher Rackauckas: 🐛 💻 🔌 💡 🚇 📣 👀 🔬 🔧 ⚠️ 📓
  • Patrick Kidger: 🐛 💻 📖 🔌 💡 🚧 📣 👀 🔬 🔧 ⚠️ 📓
  • Okon Samuel: 🐛 💻 📖 🚧 💡 🚇 👀 ⚠️ 📓
  • William Booth-Clibborn: 💻 🌍 📖 📓 🚧 👀 🔧 ⚠️
  • Pablo Lemos: 🐛 💡 📣 👀 🔬 📓
  • Jerry Ling: 🐛 💻 📖 🌍 💡 📣 👀 📓
  • Charles Fox: 🐛 💻 💡 🚧 📣 👀 🔬 📓
  • Johann Brehmer: 💻 📖 💡 📣 👀 🔬 ⚠️ 📓
  • Marius Millea: 💻 💡 📣 👀 📓
  • Coba: 🐛 💻 💡 👀 📓
  • foxtran: 💻 💡 🚧 🔧 📓
  • Shah Mahdi Hasan: 🐛 💻 👀 📓
  • Pietro Monticone: 🐛 📖 💡
  • Mateusz Kubica: 📖 💡
  • Jay Wadekar: 🐛 💡 📣 🔬
  • Anthony Blaom, PhD: 🚇 💡 👀
  • Jgmedina95: 🐛 💡 👀
  • Michael Abbott: 💻 💡 👀 🔧
  • Oscar Smith: 💻 💡
  • Eric Hanson: 💡 📣 📓
  • Henrique Becker: 💻 💡 👀
  • qwertyjl: 🐛 📖 💡 📓
  • Rik Huijzer: 💡 🚇
  • Hongyu Wang: 💡 📣 🔬
  • Zehao Jin: 🔬 📣
  • Tanner Mengel: 🔬 📣
  • Arthur Grundner: 🔬 📣
  • sjwetzel: 🔬 📣 📓
  • Saurav Maheshkar: 🔧
