Modern processors use static or dynamic branch prediction to predict branch outcomes accurately and efficiently. Here, we present a new branch prediction method in which machine learning selects the optimal predictor and prediction pattern for a specific program or thread over a discrete window of branches, in order to yield the most accurate results. Our results show that a feature-trained machine learning predictor can match or exceed the accuracy of a traditional predictor. In short, our method determines the best standard branch predictor among the G-Share, Bimodal, and Smith predictors for each section of a program's execution, maximizing prediction accuracy across the entire run.
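Training targets for such a selector can be generated by simulating all three baseline predictors over the same branch trace and recording which one is most accurate in each window. Below is a minimal sketch of that idea in Python; the class names, table sizes, window length, and the `(pc, taken)` trace format are illustrative assumptions, not the repository's actual implementation.

```python
# Sketch: simulate three baseline predictors and label each window of the
# trace with the most accurate one. All sizes/names here are assumptions.

class Smith:
    """Single 2-bit saturating counter shared by all branches."""
    def __init__(self):
        self.ctr = 2  # start weakly taken

    def predict_update(self, pc, taken):
        pred = self.ctr >= 2
        self.ctr = min(3, self.ctr + 1) if taken else max(0, self.ctr - 1)
        return pred

class Bimodal:
    """Table of 2-bit counters indexed by the low bits of the branch PC."""
    def __init__(self, bits=12):
        self.mask = (1 << bits) - 1
        self.table = [2] * (1 << bits)

    def predict_update(self, pc, taken):
        i = pc & self.mask
        pred = self.table[i] >= 2
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        return pred

class GShare:
    """Counters indexed by the branch PC XORed with global branch history."""
    def __init__(self, bits=12):
        self.mask = (1 << bits) - 1
        self.table = [2] * (1 << bits)
        self.hist = 0

    def predict_update(self, pc, taken):
        i = (pc ^ self.hist) & self.mask
        pred = self.table[i] >= 2
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        self.hist = ((self.hist << 1) | int(taken)) & self.mask
        return pred

def best_predictor_per_window(trace, window=10_000):
    """Label each window of branches with the most accurate predictor.
    `trace` is an iterable of (pc, taken) pairs."""
    preds = {"smith": Smith(), "bimodal": Bimodal(), "gshare": GShare()}
    hits = {name: 0 for name in preds}
    labels = []
    for n, (pc, taken) in enumerate(trace, 1):
        for name, p in preds.items():
            hits[name] += p.predict_update(pc, taken) == bool(taken)
        if n % window == 0:
            labels.append(max(hits, key=hits.get))
            hits = {name: 0 for name in preds}
    return labels
```

The per-window labels returned by `best_predictor_per_window` can then serve as training targets for the machine learning model.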
*Figure: Feature importance for a sample trace.*
The full report is available in the repository.
Required libraries:

- Matplotlib
- NumPy
- Scikit-learn
- Pandas
- XGBoost
Data was collected with the Intel Pin Tool.
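For reference, a trace in a simple two-column `pc taken` text format could be loaded as follows; the file name and column layout are assumptions, since the exact output format of the Pin tool used here is not documented in this section.

```python
# Hypothetical loader for a Pin-generated branch trace; the file name and the
# two-column "pc taken" format are assumptions about the collected data.
import pandas as pd

df = pd.read_csv("branch_trace.out", sep=" ", names=["pc", "taken"])
trace = list(df.itertuples(index=False, name=None))  # (pc, taken) pairs
print(f"{len(trace)} branches, {df['taken'].mean():.1%} taken")
```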
- Install the required libraries: `pip install -r requirements.txt`
- Configure and run `featurefuture.py` (see the sketch below)
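The internals of `featurefuture.py` are not described here; the sketch below shows one plausible shape of the feature-training step, assuming per-window features stored in a CSV with a `best_predictor` label column. The file name, feature columns, and model hyperparameters are all illustrative assumptions.

```python
# Sketch of a feature-training step: learn which baseline predictor wins
# per window. File name, columns, and hyperparameters are assumptions.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier, plot_importance

data = pd.read_csv("window_features.csv")                 # hypothetical feature file
X = data.drop(columns=["best_predictor"])                 # e.g. taken rate, PC entropy
y = data["best_predictor"].astype("category").cat.codes   # smith/bimodal/gshare

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))

plot_importance(model)   # feature-importance plot, as in the figure above
plt.show()
```

The final `plot_importance` call shows how a feature-importance figure like the one referenced above could be produced.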
The repository also includes a collection of basic simulators.
Compile them with `make clean && make sim`.