# Test Suite for Regression Testing
## 1. Get Started

The LLTFI Test Suite is a regression test suite designed to verify the integrity of LLTFI. It covers LLTFI features including custom hardware fault injections, software failure injections, and the trace propagation analysis tools. New test cases and programs should be added as LLTFI develops, and the regression tests should be run whenever the source code is changed to make sure existing functionality is not broken.
The current LLTFI test suite exercises only the command-line interface of LLTFI. The GUI of LLTFI is not covered by this suite.
The LLTFI test suite is placed at <LLFI_DST_ROOT>/test_suite/.
### Fault Injection Test on One Test Case

#### A Sample Walkthrough

The following is a sample walkthrough of using the test suite to test one hardware fault injection test case, insttype. Run the following commands in your terminal (assuming python3 is installed):
- Build test programs:
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/build_prog.py
This builds the LLVM IR code for all the test programs under <LLFI_DST_ROOT>/test_suite/PROGRAMS/.
- Deploy the test programs:
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/deploy_prog.py insttype
This command copies the test program's IR code for the test case insttype into its injection test directory: <LLFI_DST_ROOT>/test_suite/HardwareFaults/insttype.
- Start the fault injection and wait until it finishes (this may take about a minute):
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/inject_prog.py 1 insttype
This command starts a fault injection test on the test case insttype, under <LLFI_DST_ROOT>/test_suite/HardwareFaults/insttype/, with one thread. It invokes the LLTFI commands instrument, profile and injectfault to inject faults into the program that was copied into the insttype directory, using the input.yaml that comes with insttype (a sketch of this command sequence follows below).
You may encounter errors from the fault injection executable during this stage. These failures are caused by the injected faults themselves; whether the injection was performed correctly is determined in the next step.
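For orientation, the workflow that inject_prog.py drives inside each test case directory is the usual LLTFI command-line sequence of instrument, profile and injectfault. A hedged sketch for a case built on the factorial program with input 6 (the file and executable names are illustrative and assume LLTFI's default naming; run the commands from the test case directory, which contains the input.yaml):
instrument factorial.ll
profile llfi/factorial-profiling.exe 6
injectfault llfi/factorial-faultinjection.exe 6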
- Check whether the injection was performed correctly:
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/check_injection.py insttype
This command checks whether the fault injection experiment at <LLFI_DST_ROOT>/test_suite/HardwareFaults/insttype completed successfully. It checks for the llfi/, llfi/llfi_stat_output/, llfi/baseline/, llfi/prog_output/ and llfi/std_output/ folders, and for the llfi statistics files under llfi/llfi_stat_output/ (a sketch of this presence check follows the result output below).
- If everything runs correctly, you should see the following output in your terminal:
=============== Result ===============
./HardwareFaults/insttype PASS
This means the test case insttype has been tested and passed. If any of the llfi/, llfi/llfi_stat_output/, llfi/baseline/, llfi/prog_output/ or llfi/std_output/ folders, or any of the llfi statistics files, are missing, the check fails and the missing content is listed in this result table.
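For reference, the check is essentially a presence test over the generated directory layout. Below is a minimal Python sketch of the idea (an illustration only, not the actual check_injection.py implementation; injection_outputs_present is a hypothetical helper):
import os

def injection_outputs_present(case_dir):
    # Directories a successful injection experiment is expected to leave behind.
    required = [
        "llfi",
        "llfi/llfi_stat_output",
        "llfi/baseline",
        "llfi/prog_output",
        "llfi/std_output",
    ]
    missing = [d for d in required if not os.path.isdir(os.path.join(case_dir, d))]
    # The llfi statistics files live under llfi/llfi_stat_output/.
    stat_dir = os.path.join(case_dir, "llfi", "llfi_stat_output")
    has_stats = os.path.isdir(stat_dir) and any(
        name.startswith("llfi") for name in os.listdir(stat_dir)
    )
    return (not missing) and has_stats, missing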
#### To Run Other Test Cases
You can check the names of the folders under <LLFI_DST_ROOT>/test_suite/HardwareFaults, <LLFI_DST_ROOT>/test_suite/SoftwareFaults and <LLFI_DST_ROOT>/test_suite/BatchMode; these are also the names of the fault injection test cases.
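To see the available names at a glance, you can list those directories directly:
ls <LLFI_DST_ROOT>/test_suite/HardwareFaults <LLFI_DST_ROOT>/test_suite/SoftwareFaults <LLFI_DST_ROOT>/test_suite/BatchMode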
By appending the names of test cases to the deploy, inject and check commands above, you can select which fault injection test cases to run.
For example, to run the test cases llfiindex (hardware fault), funcname (hardware fault), BufferOverflow_API (software fault) and WrongPointer_Data (software fault), run the following commands in your terminal:
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/build_prog.py
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/deploy_prog.py llfiindex funcname BufferOverflow_API WrongPointer_Data
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/inject_prog.py 1 llfiindex funcname BufferOverflow_API WrongPointer_Data
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/check_injection.py llfiindex funcname BufferOverflow_API WrongPointer_Data
Note that when multiple test cases are selected, you can assign more than one thread to run them. For example, in the third command you can use four threads instead of one to run the cases in parallel, which reduces the total running time (a sketch of this idea follows the result output below):
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/inject_prog.py 4 llfiindex funcname BufferOverflow_API WrongPointer_Data
If everything is fine, you should see the following output in your terminal:
=============== Result ===============
./SoftwareFaults/BufferOverflow_API PASS
./SoftwareFaults/WrongPointer_Data PASS
./HardwareFaults/funcname PASS
./HardwareFaults/llfiindex PASS
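As noted above, the thread count simply lets independent test cases run concurrently. Below is a minimal Python sketch of that idea (an illustration only, not the actual inject_prog.py code; run_one_case and its command are hypothetical placeholders):
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_one_case(case):
    # Each case's injection runs as an independent process, so cases can be injected in parallel.
    return case, subprocess.call(["echo", "injecting", case])  # placeholder command

cases = ["llfiindex", "funcname", "BufferOverflow_API", "WrongPointer_Data"]
with ThreadPoolExecutor(max_workers=4) as pool:  # 4 threads, as in the command above
    for case, returncode in pool.map(run_one_case, cases):
        print(case, "finished with exit code", returncode)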
### Run All the Fault Injection Tests

To run all the fault injection tests together, simply omit the test case names from the commands above; the suite will then run every injection test case. For example, run the following commands one by one. Note that this may take several hours:
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/build_prog.py
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/deploy_prog.py
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/inject_prog.py 1
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/check_injection.py
If everything works fine, you should see a table similar to:
=============== Result ===============
./HardwareFaults/random PASS
./SoftwareFaults/StalePointer_Res PASS
./BatchMode/SoftwareFailureAutoScan PASS
./SoftwareFaults/InvalidPointer_Res PASS
./SoftwareFaults/CPUHog_Res PASS
./SoftwareFaults/WrongSavedFormat_IO PASS
./SoftwareFaults/MemoryLeak_Res PASS
./SoftwareFaults/NoOpen_API PASS
./SoftwareFaults/RaceCondition_Timing PASS
./SoftwareFaults/NoAck_MPI PASS
./SoftwareFaults/InappropriateClose_API PASS
./SoftwareFaults/WrongRetrievedAddress_IO PASS
./SoftwareFaults/BufferOverflowMalloc_Data PASS
./SoftwareFaults/WrongSource_Data PASS
./SoftwareFaults/IncorrectOutput_API PASS
./HardwareFaults/insttype PASS
./SoftwareFaults/ThreadKiller_Res PASS
./SoftwareFaults/LowMemory_Res PASS
./HardwareFaults/funcname PASS
./SoftwareFaults/Deadlock_Res PASS
./SoftwareFaults/NoOutput_Data PASS
./SoftwareFaults/InvalidMessage_MPI PASS
./HardwareFaults/llfiindex PASS
./SoftwareFaults/BufferOverflowMemmove_Data PASS
./SoftwareFaults/InvalidSender_MPI PASS
./SoftwareFaults/WrongMode_API PASS
./SoftwareFaults/BufferUnderflow_API PASS
./SoftwareFaults/BufferOverflow_API PASS
./HardwareFaults/multiplebits PASS
./SoftwareFaults/NoMessage_MPI PASS
./SoftwareFaults/MemoryExhaustion_Res PASS
./BatchMode/NoOpen_API_WrongMode_API_BufferUnderflow_API PASS
./HardwareFaults/tracing PASS
./SoftwareFaults/UnderAccumulator_Res PASS
./SoftwareFaults/WrongDestination_Data PASS
./SoftwareFaults/WrongSavedAddress_IO PASS
./SoftwareFaults/WrongAPI_API PASS
./SoftwareFaults/PacketStorm_MPI PASS
./SoftwareFaults/NoClose_API PASS
./SoftwareFaults/WrongPointer_Data PASS
./SoftwareFaults/IncorrectOutput_Data PASS
./SoftwareFaults/NoOutput_API PASS
./SoftwareFaults/HighFrequentEvent_Timing PASS
./SoftwareFaults/WrongRetrievedFormat_IO PASS
./SoftwareFaults/DataCorruption_Data PASS
./SoftwareFaults/NoDrain_MPI PASS
This result table also lists all the injection test cases included in this test suite. The injection test cases are grouped into three categories, HardwareFaults, SoftwareFaults and BatchMode, each placed under its own directory. HardwareFaults/ contains several custom fault injection configurations that simulate hardware faults. SoftwareFaults/ contains tests for all the software failures supported by LLTFI. BatchMode/ contains tests for LLTFI's batch mode commands, which simulate a list of software failures one by one. All three folders are placed under <LLFI_DST_ROOT>/test_suite/.
### Test Trace Propagation Analysis Tools
The trace propagation analysis tools are placed under <LLFI_DST_ROOT>/tools/. These tools depend on the tracing information generated during fault injections, so we provide already-generated LLTFI injection files to test them.
To test the tools with the test suite, run the following command in your terminal:
python3 <LLFI_DST_ROOT>/test_suite/SCRIPTS/test_trace_tools.py
This command tests the tracediff, traceunion, traceontograph and tracetodot commands one by one. The test fails if any of the commands exits abnormally or if any intermediate file is not generated successfully (a sketch of this pattern follows the result output below). For details of the trace analysis tools, please refer to Generate analyse traces.
If everything works fine, you should see:
=============== Result ===============
./Traces/BufferOverflow_API PASS
./Traces/BufferOverflowMemmove_Data PASS
./Traces/factorial PASS
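As described above, test_trace_tools.py runs each tool and then verifies both its exit status and the presence of the expected intermediate files. Below is a minimal Python sketch of that pattern (run_and_verify is a hypothetical helper; the actual tool command lines and output file names come from the script and test_suite.yaml):
import os
import subprocess

def run_and_verify(cmd, expected_output):
    # The test fails if the tool exits abnormally ...
    if subprocess.run(cmd).returncode != 0:
        return False
    # ... or if the intermediate file it should have produced is missing.
    return os.path.exists(expected_output)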
## 2. Suite Structure
The logical structure (that is, the dependencies) of the test suite is defined in a yaml file under <LLFI_DST_ROOT>/test_suite: test_suite.yaml.
This yaml file declares which program is used for which test cases, which files should be copied for each test program, which input arguments should be appended when running a program, how the injection test cases are categorized, and which files are required by the trace analysis tools.
All the scripts of the test suite (placed under <LLFI_DST_ROOT>/test_suite/SCRIPTS) refer to test_suite.yaml to drive the tests (a small example of reading this file follows the simplified yaml below).
Here is a simplified test_suite.yaml, with descriptions in comments:
PROGRAMS:                        ## define test programs
  factorial:                     ## name of a test program
    - factorial.ll               ## required files to run the test program
  memcpy1:
    - memcpy1.ll
    - sample.txt
INPUTS:
  factorial: 6                   ## the command line arguments to run the test program
  mpi: 127.0.0.1
HardwareFaults:                  ## declare a test group with name: HardwareFaults
  tracing: factorial             ## declare a hardware fault injection test case
                                 ## with name: tracing, using program: factorial.
SoftwareFaults:                  ## declare a test group with name: SoftwareFaults
  MemoryExhaustion_Res: memcpy1  ## declare a software fault injection test case
                                 ## with name: MemoryExhaustion_Res,
                                 ## using program: memcpy1.
Traces:                          ## declare a test group with name: Traces
  ## declare a test case for trace analysis tools
  ## with name: BufferOverflow_API,
  ## profiling trace record file: llfi/baseline/llfi.stat.trace.prof.txt
  ## injection trace record files:
  ##   llfi/llfi_stat_output/llfi.stat.trace.0-0.txt etc.
  ## profiling dot file: llfi.stat.graph.dot
  BufferOverflow_API:
    trace_prof: llfi/baseline/llfi.stat.trace.prof.txt
    trace_inject:
      - llfi/llfi_stat_output/llfi.stat.trace.0-0.txt
      - llfi/llfi_stat_output/llfi.stat.trace.0-1.txt
      - llfi/llfi_stat_output/llfi.stat.trace.0-2.txt
      - llfi/llfi_stat_output/llfi.stat.trace.0-3.txt
      - llfi/llfi_stat_output/llfi.stat.trace.0-4.txt
    cdfg_prof: llfi.stat.graph.dot
BatchMode:                       ## declare a test group with name: BatchMode
  NoOpen_API_WrongMode_API_BufferUnderflow_API: memcpy1
  ## declare a BatchMode test case with name:
  ##   NoOpen_API_WrongMode_API_BufferUnderflow_API,
  ## using program: memcpy1.
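Since every script reads test_suite.yaml to drive the tests, a typical access pattern against the simplified file above might look like the following (a sketch using PyYAML, not the scripts' actual code):
import yaml

with open("test_suite.yaml") as f:
    suite = yaml.safe_load(f)

# Which program a hardware-fault test case uses, which files that program needs,
# and which command line arguments it takes:
prog = suite["HardwareFaults"]["tracing"]   # "factorial"
files = suite["PROGRAMS"][prog]             # ["factorial.ll"]
args = suite["INPUTS"].get(prog)            # 6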
Below is the directory structure of the suite. Please note that the names of the test groups and test cases match the directory names exactly.
Top Level:
test_suite/
├── BatchMode ## store test cases of group: BatchMode
├── HardwareFaults ## store test cases of group: HardwareFaults
├── PROGRAMS ## store source code of test programs
├── SCRIPTS ## store all the scripts to drive the tests
├── SoftwareFaults ## store test cases of group: SoftwareFaults
├── test_suite.yaml ## declare the logical structure
└── Traces ## store test cases of group: Traces
SCRIPTS/:
test_suite/SCRIPTS/
├── build_prog.py # build test programs
├── check_injection.py # check the results of fault injections
├── clean_prog.py # make clean for all test programs
├── clear_all.py # remove all generated files for all test cases
├── clear_llfi.py # remove all generated files with prefix 'llfi' in name for all cases
├── deploy_prog.py # copy program files to test directories accordingly
├── inject_prog.py # initiate fault injections
└── test_trace_tools.py # test trace analysis tools
PROGRAMS/:
test_suite/PROGRAMS/
├── bfs
├── deadlock
├── factorial
├── Makefile
├── Makefile.common
├── mcf
├── memcpy1
├── mpi
└── sudoku2
## Each folder stores the source code files and other dependent files
## for building and running one program.
HardwareFaults/:
test_suite/HardwareFaults/
├── funcname
├── insttype
├── llfiindex
├── multiplebits
├── random
└── tracing
## Each folder is a test case of group: HardwareFaults.
SoftwareFaults/:
test_suite/SoftwareFaults/
├── BufferOverflow_API
├── BufferOverflowMalloc_Data
...
└── WrongSource_Data
## Each folder is a test case of group: SoftwareFaults.
BatchMode/:
test_suite/BatchMode/
├── NoOpen_API_WrongMode_API_BufferUnderflow_API
└── SoftwareFailureAutoScan
## Each folder is a test case of group: BatchMode.
Traces/:
test_suite/Traces/
├── BufferOverflow_API
├── BufferOverflowMemmove_Data
└── factorial
## Each folder is a test case of group: Traces.
Follow the steps below to build all the models and perform fault injection:
- Modify the input.yaml file in the current folder if you need to add any additional options.
- Execute the execute_all_prog.sh script with either the compile or the run option.
  - Use the compile option to download and build the model.
  - Use the run option to perform fault injection.
sh execute_all_prog.sh compile
sh execute_all_prog.sh run