CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms
An MLIR Based Compiler Framework for Emerging Architectures
Paper Link»
Emerging compute-near-memory (CNM) and compute-in-memory (CIM) architectures have gained considerable attention in recent years, with some now commercially available. However, their programmability remains a significant challenge. These devices typically require very low-level code, directly using device-specific APIs, which restricts their usage to device experts. With Cinnamon, we are taking a step closer to bridging the substantial abstraction gap in application representation between what these architectures expect and what users typically write. The framework is based on MLIR, providing domain-specific and device-specific hierarchical abstractions. This repository includes the sources for these abstractions and the necessary transformations and conversion passes to progressively lower them. It emphasizes conversions to illustrate various intermediate representations (IRs) and transformations to demonstrate certain optimizations.
This is an example of how you can build the framework locally.
CINM depends on a patched version of LLVM 19.1.3, which is built automatically.
Additionally, a number of software packages are required to build it:
- CMake (at least version 3.22)
- just (the command runner)
- A reasonably recent Python installation (>= 3.7)
On some systems you may need to install a newer C++ compiler or change the default one, e.g. on Ubuntu 24.04:
```sh
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 70 --slave /usr/bin/g++ g++ /usr/bin/g++-13
# Or use another compiler or gcc/g++ version supporting the C++20 standard.
```
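If you are unsure which compiler will be picked up, a quick check such as the one below can help (a minimal sketch; any gcc/g++ or clang/clang++ release with C++20 support works):
```sh
# Show the default C++ compiler and its version; it must support C++20
g++ --version

# Alternatively, point CMake at a specific compiler without changing the system default
export CC=gcc-13
export CXX=g++-13
```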
The repository contains a justfile that installs all needed dependencies and builds the sources.
- Clone the repo
  ```sh
  git clone https://github.com/tud-ccc/Cinnamon.git
  ```
- Set up the environment variables in a `.env` file (in the repository root)
  ```sh
  # Example:
  CMAKE_GENERATOR=Ninja
  # You could add your own LLVM dir; the build script then won't try to clone and build LLVM
  LLVM_BUILD_DIR=/home/username/projects/Cinnamon/llvm/build/
  ```
- Download, configure, and build the dependencies and the sources (without the torch-mlir frontend)
  ```sh
  cd Cinnamon
  just configure -no-torch-mlir
  ```
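Putting these steps together, a first build might look like the following (a sketch assembled from the commands above; the `.env` entries are optional, and the `LLVM_BUILD_DIR` path is just the example value from the previous step):
```sh
git clone https://github.com/tud-ccc/Cinnamon.git
cd Cinnamon

# Optional: reuse an existing LLVM build instead of letting the build script clone and build LLVM
echo "CMAKE_GENERATOR=Ninja" > .env
echo "LLVM_BUILD_DIR=/home/username/projects/Cinnamon/llvm/build/" >> .env

# Fetch the remaining dependencies and build the sources without the torch-mlir frontend
just configure -no-torch-mlir
```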
All benchmarks at the `cinm` abstraction level are in this repository under `cinnamon/benchmarks/`. The `compile-benches.sh` script compiles all the benchmarks using the Cinnamon flow. The generated code and the intermediate IRs for each benchmark can be found under `cinnamon/benchmarks/generated/`.
```sh
chmod +x compile-benches.sh
./compile-benches.sh
```
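Once the script finishes, the per-benchmark output can be inspected directly, for example (assuming the output location mentioned above):
```sh
# List the generated code and intermediate IRs produced for each benchmark
ls cinnamon/benchmarks/generated/
```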
You can also run individual benchmarks by applying the individual conversions manually. Each benchmark file has a comment at the top that gives the command used to lower it to the upmem IR.
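For example, to see that command for a particular benchmark (the file name below is only a placeholder; pick any file under `cinnamon/benchmarks/`):
```sh
# Print the first few lines of a benchmark file; the leading comment
# contains the exact lowering command for that benchmark
head -n 5 cinnamon/benchmarks/<benchmark-file>
```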
- `cinm`, `cnm` and `cim` abstractions and their necessary conversions
- The `upmem` abstraction, its conversions and connection to the target
- The `tiling` transformation
- PyTorch front-end
- The `xbar` abstraction
  - Associated conversions and transformations
  - Establishing the backend connection
See the open issues for a full list of proposed features (and known issues).
If you have a suggestion, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Any other kind of contribution is also greatly appreciated.
Distributed under the BSD 2-clause License. See LICENSE.txt for more information.
- Clément Fournier ([email protected])
- Hamid Farzaneh ([email protected])
- George M. Kunze ([email protected])
- Karl F. A. Friebel ([email protected])
- Asif Ali Khan ([email protected])