Improve build process and readme
meesfrensel committed Dec 12, 2024
1 parent 3420eb5 commit ad91450
Showing 10 changed files with 51 additions and 35 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/build-ci.sh
@@ -116,6 +116,10 @@ if echo "$@" | grep -q -- "-no-cinnamon-wheel"; then
build_cinnamon_wheel=0
fi

if echo "$@" | grep -q -- "-enable-gpu"; then
CINNAMON_CMAKE_OPTIONS="$CINNAMON_CMAKE_OPTIONS -DCINM_BUILD_GPU_SUPPORT=ON"
fi

if echo "$@" | grep -q -- "-enable-cuda"; then
enable_cuda=1
fi
@@ -304,7 +308,6 @@ if [ ! -d "build" ] || [ $reconfigure -eq 1 ]; then
cmake -S . -B "build" \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
$dependency_paths \
-DCINM_BUILD_GPU_SUPPORT=ON \
-DCMAKE_EXPORT_COMPILE_COMMANDS=ON \
$CINNAMON_CMAKE_OPTIONS
fi
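As a usage sketch (not part of the commit): GPU support is now opt-in rather than hard-coded into the configure step. Assuming the CI script is invoked directly, the new flag would be passed like this; the flag combination is illustrative only.
```sh
# Opt into GPU support; -enable-gpu now appends -DCINM_BUILD_GPU_SUPPORT=ON
# to CINNAMON_CMAKE_OPTIONS instead of it always being set (illustrative invocation).
./.github/workflows/build-ci.sh -reconfigure -enable-gpu
```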
12 changes: 7 additions & 5 deletions .gitignore
@@ -1,7 +1,9 @@
.cache
.vscode
/.vscode/
/.idea/
.directory
.venv
/llvm
/torch-mlir
/upmem
/.venv/
/llvm/
/torch-mlir/
/upmem/
/.env
53 changes: 30 additions & 23 deletions README.md
@@ -17,14 +17,6 @@

Emerging compute-near-memory (CNM) and compute-in-memory (CIM) architectures have gained considerable attention in recent years, with some now commercially available. However, their programmability remains a significant challenge. These devices typically require very low-level code, directly using device-specific APIs, which restricts their usage to device experts. With Cinnamon, we are taking a step closer to bridging the substantial abstraction gap in application representation between what these architectures expect and what users typically write. The framework is based on MLIR, providing domain-specific and device-specific hierarchical abstractions. This repository includes the sources for these abstractions and the necessary transformations and conversion passes to progressively lower them. It emphasizes conversions to illustrate various intermediate representations (IRs) and transformations to demonstrate certain optimizations.

<!--
### Built With
The CINM framework depends on a patched version of LLVM 18.1.6.
Additionally, a number of software packages are required to build it, like CMake. -->
<!--
* [![MLIR][mlir]][Mlir-url]
* [![CMake][CMake]][React-url] -->

<!-- GETTING STARTED -->
## Getting Started
@@ -33,23 +25,38 @@ This is an example of how you can build the framework locally.

### Prerequisites

CINM depends on a patched version of `LLVM 18.1.6`.
Additionally, a number of software packages are required to build it, like `CMake`.
CINM depends on a patched version of `LLVM 19.1.3`. This is built automatically.
Additionally, a number of software packages are required to build it:
- CMake (at least version 3.22)
- [`just`](https://github.com/casey/just?tab=readme-ov-file#installation)
- A somewhat recent Python installation (>=3.7?)

On some systems you might need to update your C++ compiler or update the default, e.g. on Ubuntu 24.04:
```sh
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 70 --slave /usr/bin/g++ g++ /usr/bin/g++-13
# Or use another compiler or gcc/g++ version supporting the C++ 20 standard.
```

### Download and Build

The repository contains a script, `build.sh` that installs all needed dependencies and builds the sources.
The repository contains a `justfile` that installs all needed dependencies and builds the sources.

* Clone the repo
```sh
git clone https://github.com/tud-ccc/Cinnamon.git
```
* Build the sources
```sh
cd Cinnamon
chmod +x build.sh
./build.sh
```
```sh
git clone https://github.com/tud-ccc/Cinnamon.git
```
* Set up the environment variables in a `.env`-file (in the root)
```
# Example:
CMAKE_GENERATOR=Ninja
# You could add your own LLVM dir; the build script won't try to clone and build LLVM
LLVM_BUILD_DIR=/home/username/projects/Cinnamon/llvm/build/
```
* Download, configure, and build dependencies and the sources (without the torch-mlir frontend).
```sh
cd Cinnamon
just configure -no-torch-mlir
```

<!-- USAGE EXAMPLES -->
## Usage
@@ -68,9 +75,9 @@ The user can also try running individual benchmarks by manually trying individua
- [x] The `upmem` abstraction, its conversions and connection to the target
- [x] The `tiling` transformation
- [ ] `PyTorch` Front-end
- [ ] The `xbar` abstraction, conversions and transformatons
- [ ] The `xbar` abstraction, conversions and transformations
- [ ] Associated conversions and transformations
- [ ] Establshing the backend connection
- [ ] Establishing the backend connection

See the [open issues](https://github.com/tud-ccc/Cinnamon/issues) for a full list of proposed features (and known issues).

Expand All @@ -83,7 +90,7 @@ If you want to contribute in any way , that is also **greatly appreciated**.
<!-- LICENSE -->
## License

Distributed under the BSD 2-claues License. See `LICENSE.txt` for more information.
Distributed under the BSD 2-clause License. See `LICENSE.txt` for more information.

<!-- CONTACT -->
## Contributors
1 change: 0 additions & 1 deletion cinnamon/.gitignore
@@ -23,7 +23,6 @@ build-*
build
llvm
.cache
.env
sandbox

python/cinnamon/_resources
6 changes: 3 additions & 3 deletions cinnamon/lib/Conversion/CMakeLists.txt
@@ -8,9 +8,9 @@ add_mlir_library(CinmCommonPatterns
MLIRDialectUtils
)


add_subdirectory(TorchToCinm)

if (TORCH_MLIR_DIR)
add_subdirectory(TorchToCinm)
endif()
if (CINM_BUILD_GPU_SUPPORT)
add_subdirectory(CnmToGPU)
endif()
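A minimal sketch of how these guards could be exercised in a manual CMake configure, assuming the sources live in `cinnamon/` and that `TORCH_MLIR_DIR` is passed as a cache variable; the paths are placeholders, not taken from the build script.
```sh
# Hypothetical manual configure; paths are placeholders.
# TorchToCinm is only added when TORCH_MLIR_DIR is set, and CnmToGPU
# only when CINM_BUILD_GPU_SUPPORT is ON (see the guards above).
cmake -S cinnamon -B cinnamon/build \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo \
  -DTORCH_MLIR_DIR=/path/to/torch-mlir/build \
  -DCINM_BUILD_GPU_SUPPORT=ON
```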
1 change: 1 addition & 0 deletions cinnamon/lib/Conversion/CimToMemristor/CMakeLists.txt
@@ -6,6 +6,7 @@ add_mlir_conversion_library(MLIRCimToMemristor

DEPENDS
CimConversionPassIncGen
MemristorIncGen

LINK_COMPONENTS
Core
1 change: 1 addition & 0 deletions cinnamon/lib/Conversion/CinmToCim/CMakeLists.txt
@@ -6,6 +6,7 @@ add_mlir_conversion_library(MLIRCinmToCim

DEPENDS
CinmConversionPassIncGen
CimIncGen

LINK_COMPONENTS
Core
1 change: 1 addition & 0 deletions cinnamon/lib/Conversion/CnmToUPMEM/CMakeLists.txt
@@ -6,6 +6,7 @@ add_mlir_conversion_library(MLIRCnmToUPMEM

DEPENDS
CnmConversionPassIncGen
UPMEMIncGen

LINK_COMPONENTS
Core
4 changes: 3 additions & 1 deletion cinnamon/tools/cinm-opt/cinm-opt.cpp
@@ -5,7 +5,6 @@
/// @author Clément Fournier ([email protected])

#include "cinm-mlir/Conversion/CimPasses.h"
#include "cinm-mlir/Conversion/CinmFrontendPasses.h"
#include "cinm-mlir/Conversion/CinmPasses.h"
#include "cinm-mlir/Conversion/CnmPasses.h"
#include "cinm-mlir/Conversion/MemristorPasses.h"
@@ -23,6 +22,7 @@
#include "cinm-mlir/Dialect/UPMEM/Transforms/Passes.h"

#ifdef CINM_TORCH_MLIR_ENABLED
#include "cinm-mlir/Conversion/CinmFrontendPasses.h" // Does the TorchToCinm pass
#include "torch-mlir/Dialect/Torch/IR/TorchDialect.h"
#include "torch-mlir/Dialect/TorchConversion/IR/TorchConversionDialect.h"
#endif
@@ -55,7 +55,9 @@ int main(int argc, char *argv[]) {

registerAllPasses();
registerAllExtensions(registry);
#ifdef CINM_TORCH_MLIR_ENABLED
registerCinmFrontendConversionPasses();
#endif
registerCinmConversionPasses();
registerCimConversionPasses();
registerCnmConversionPasses();
2 changes: 1 addition & 1 deletion justfile
@@ -14,7 +14,7 @@ upmem_dir := env_var_or_default("UPMEM_HOME", "")
build_dir := "cinnamon/build"

# Do a full build as if in CI. Only needed the first time you build the project.
# Parameters: no-upmem enable-cuda enable-roc no-torch-mlir no-python-venv
# Parameters: no-upmem enable-gpu enable-cuda enable-roc no-torch-mlir no-python-venv
configure *ARGS:
.github/workflows/build-ci.sh -reconfigure {{ARGS}}
