diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
new file mode 100644
index 00000000..4590aecc
--- /dev/null
+++ b/CONTRIBUTORS.md
@@ -0,0 +1,40 @@
+# Contributors
+
+CRF-RNN specific code in this repository was written by:
+
++ Sadeep Jayasumana (sadeep.jay@gmail.com)
++ Bernardino Romera-Paredes (bernardino.romeraparedes@eng.ox.ac.uk)
++ Shuai Zheng (kylezheng04@gmail.com)
++ Zhizhong Su (suzhizhong@baidu.com)
+
+Our code uses the [Permutohedral lattice library](http://graphics.stanford.edu/papers/permutohedral/) and the [Caffe future version](https://github.com/longjon/caffe/tree/future).
+We also used parts of the [Dense CRF code](http://www.philkr.net/home/densecrf) while implementing this.
+
+The Permutohedral lattice library (BSD license) is from Andrew Adams, Jongmin Baek, and Abe Davis,
+"Fast High-Dimensional Filtering Using the Permutohedral Lattice", Eurographics 2010.
+The DenseCRF library is from Philipp Krahenbuhl and Vladlen Koltun,
+"Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials", NIPS 2011.
+
+Our software is built on top of the Caffe software library. Below is a copy of its CONTRIBUTORS.md file.
+Please note that, due to technical difficulties, we could not keep the history of the original contributors to Caffe code in our git repository.
+Please refer to the original [Caffe git repository](https://github.com/BVLC/caffe) for this purpose.
+
+-----------------------------------------------------------------------------------------------------------------
+
+Caffe is developed by a core set of BVLC members and the open-source community.
+
+We thank all of our [contributors](https://github.com/BVLC/caffe/graphs/contributors)!
+
+**For the detailed history of contributions** of a given file, try
+
+ git blame file
+
+to see line-by-line credits and
+
+ git log --follow file
+
+to see the change log even across renames and rewrites.
+
+Please refer to the [acknowledgements](http://caffe.berkeleyvision.org/#acknowledgements) on the Caffe site for further details.
+
+**Copyright** is held by the original contributor according to the versioning history; see LICENSE.
diff --git a/README.md b/README.md
index f05f1b2f..5f401c69 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,121 @@
-# crfasrnn
-This repository contains the source code for the semantic image segmentation method described in the ICCV 2015 paper: Conditional Random Fields as Recurrent Neural Networks. http://crfasrnn.torr.vision/
+# CRF-RNN for Semantic Image Segmentation
+![sample](sample.png)
+
+Live demo: [http://crfasrnn.torr.vision](http://crfasrnn.torr.vision)
+
+This package contains code for the "CRF-RNN" semantic image segmentation method, published in the ICCV 2015 paper [Conditional Random Fields as Recurrent Neural Networks](http://www.robots.ox.ac.uk/~szheng/papers/CRFasRNN.pdf). Our software is built on top of the [Caffe](http://caffe.berkeleyvision.org/) deep learning library. The current version was developed by:
+
+[Sadeep Jayasumana](http://www.robots.ox.ac.uk/~sadeep/),
+[Shuai Zheng](http://kylezheng.org/),
+[Bernardino Romera Paredes](http://romera-paredes.com/), and
+[Zhizhong Su](mailto:suzhizhong@baidu.com).
+
+Supervisor: [Philip Torr](http://www.robots.ox.ac.uk/~tvg/)
+
+Our work allows computers to recognize objects in images; what is distinctive about our work is that we also recover the 2D outline of the object.
+
+Currently we have trained this model to recognize 20 classes. This software allows you to test our algorithm on your own images – have a try and see if you can fool it. If you get some good examples, you can send them to us.
+
+Why are we doing this? This work is part of a project to build augmented reality glasses for the partially sighted. Please read about it here: [smart-specs](http://www.va-st.com/smart-specs/).
+
+For a demo and more information about CRF-RNN, please visit the project website: http://crfasrnn.torr.vision.
+
+If you use this code/model for your research, please consider citing the following paper:
+```
+@inproceedings{crfasrnn_ICCV2015,
+ author = {Shuai Zheng and Sadeep Jayasumana and Bernardino Romera-Paredes and Vibhav Vineet and Zhizhong Su and Dalong Du and Chang Huang and Philip H. S. Torr},
+ title = {Conditional Random Fields as Recurrent Neural Networks},
+ booktitle = {International Conference on Computer Vision (ICCV)},
+ year = {2015}
+}
+```
+
+
+# Installation Guide
+
+You need to compile the modified Caffe library in this repository. Instructions for Ubuntu 14.04 are included below. You can also consult the generic [Caffe installation guide](http://caffe.berkeleyvision.org/installation.html).
+
+
+### 1.1 Install dependencies
+##### General dependencies
+```
+sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
+sudo apt-get install --no-install-recommends libboost-all-dev
+```
+
+##### CUDA
+Install the correct CUDA driver and its SDK. Download the CUDA SDK from the Nvidia website.
+
+On Ubuntu 14.04, make sure the required build tools are installed. You might need to blacklist some kernel modules so that they do not interfere with the driver installation, and you also need to uninstall the default Nvidia driver first.
+```
+sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev
+```
+Open /etc/modprobe.d/blacklist.conf and add:
+```
+blacklist amd76x_edac
+blacklist vga16fb
+blacklist nouveau
+blacklist rivafb
+blacklist nvidiafb
+blacklist rivatv
+```
+```
+sudo apt-get remove --purge nvidia*
+```
+
+When you restart your PC, before logging in, press "Ctrl+Alt+F1" to switch to a text-based login, then run:
+```
+sudo service lightdm stop
+chmod +x cuda*.run
+sudo ./cuda*.run
+```
+
+##### BLAS
+Install ATLAS, OpenBLAS, or MKL.
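+
+On Ubuntu 14.04, for example, the ATLAS development package can be installed through apt (a minimal sketch; the corresponding OpenBLAS package is libopenblas-dev):
+```
+sudo apt-get install libatlas-base-dev
+```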
+
+##### Python
+Install the Anaconda Python distribution, or use the default Python distribution together with numpy, scipy, and the other usual scientific packages.
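+
+If you go with the default Python, the required packages can be installed through apt on Ubuntu 14.04 (a minimal sketch, assuming the system Python 2.7):
+```
+sudo apt-get install python-dev python-numpy python-scipy
+```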
+
+##### MATLAB (optional)
+Install MATLAB using a standard distribution.
+
+### 1.2 Build the custom Caffe version
+Set the paths correctly in Makefile.config. You can copy Makefile.config.example to Makefile.config, as the most common settings are already filled in; adjust them to match your environment.
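+
+A minimal sketch of this step (the option names below come from the stock Caffe Makefile.config; uncomment whichever matches your setup):
+```
+cp Makefile.config.example Makefile.config
+# then edit Makefile.config, e.g. uncomment:
+#   CPU_ONLY := 1     # to build without CUDA
+#   USE_CUDNN := 1    # to build with cuDNN acceleration
+```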
+
+After this, on Ubuntu 14.04, run:
+```
+make
+```
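+
+Compilation can be parallelized with make's standard jobs flag, e.g.:
+```
+make -j 8
+```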
+
+If there are no error messages, you can then compile and install the MATLAB and Python wrappers:
+```
+make matcaffe
+```
+
+```
+make pycaffe
+```
+
+That's it! Enjoy our software!
+
+
+### 1.3 Run the demo
+MATLAB and Python scripts for running the demo are available in the matlab-scripts and python-scripts directories, respectively; use whichever you prefer. Note that you should change the paths in the scripts according to your environment.
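+
+For example, the Python demo might be launched as follows (a hypothetical invocation; check the python-scripts directory for the actual script name, and set the model and file paths inside the script first):
+```
+cd python-scripts
+python crfasrnn_demo.py
+```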
+
+# LICENSE
+CRF-RNN feature in Caffe is implemented for the paper:
+Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr.
+Conditional Random Fields as Recurrent Neural Networks. IEEE ICCV 2015.
+
+Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, and Philip H. S. Torr are with the University of Oxford.
+Vibhav Vineet did this work while he was with the University of Oxford; he is now with Stanford University.
+Zhizhong Su, Dalong Du, and Chang Huang are with the Baidu Institute of Deep Learning (IDL).
+
+CRF-RNN uses the Permutohedral lattice library, the DenseCRF library and the Caffe future version.
+
+The Permutohedral lattice library (BSD license) is from Andrew Adams, Jongmin Baek, and Abe Davis,
+"Fast High-Dimensional Filtering Using the Permutohedral Lattice", Eurographics 2010.
+The DenseCRF library is from Philipp Krahenbuhl and Vladlen Koltun,
+"Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials", NIPS 2011.
+
+For more information about CRF-RNN, please visit the project website: http://crfasrnn.torr.vision.
diff --git a/caffe-crfrnn/CMakeLists.txt b/caffe-crfrnn/CMakeLists.txt
new file mode 100644
index 00000000..ef599b68
--- /dev/null
+++ b/caffe-crfrnn/CMakeLists.txt
@@ -0,0 +1,73 @@
+cmake_minimum_required(VERSION 2.8.7)
+
+# ---[ Caffe project
+project(Caffe C CXX)
+
+# ---[ Using cmake scripts and modules
+list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules)
+
+include(ExternalProject)
+
+include(cmake/Utils.cmake)
+include(cmake/Targets.cmake)
+include(cmake/Misc.cmake)
+include(cmake/Summary.cmake)
+include(cmake/ConfigGen.cmake)
+
+# ---[ Options
+caffe_option(CPU_ONLY "Build Caffe without CUDA support" OFF) # TODO: rename to USE_CUDA
+caffe_option(USE_CUDNN "Build Caffe with cuDNN library support" ON IF NOT CPU_ONLY)
+caffe_option(BUILD_SHARED_LIBS "Build shared libraries" ON)
+caffe_option(BUILD_python "Build Python wrapper" ON)
+set(python_version "2" CACHE STRING "Specify which python version to use")
+caffe_option(BUILD_matlab "Build Matlab wrapper" OFF IF UNIX OR APPLE)
+caffe_option(BUILD_docs "Build documentation" ON IF UNIX OR APPLE)
+caffe_option(BUILD_python_layer "Build the Caffe python layer" ON)
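+
+# Example configuration (a sketch; any of the options above can be toggled on
+# the cmake command line):
+#   mkdir build && cd build
+#   cmake -DCPU_ONLY=ON -DBUILD_python=ON ..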
+
+# ---[ Dependencies
+include(cmake/Dependencies.cmake)
+
+# ---[ Flags
+if(UNIX OR APPLE)
+ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -Wall")
+endif()
+
+if(USE_libstdcpp)
+ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libstdc++")
+ message("-- Warning: forcing libstdc++ (controlled by USE_libstdcpp option in cmake)")
+endif()
+
+add_definitions(-DGTEST_USE_OWN_TR1_TUPLE)
+
+# ---[ Warnings
+caffe_warnings_disable(CMAKE_CXX_FLAGS -Wno-sign-compare -Wno-uninitialized)
+
+# ---[ Config generation
+configure_file(cmake/Templates/caffe_config.h.in "${PROJECT_BINARY_DIR}/caffe_config.h")
+
+# ---[ Includes
+set(Caffe_INCLUDE_DIR ${PROJECT_SOURCE_DIR}/include)
+include_directories(${Caffe_INCLUDE_DIR} ${PROJECT_BINARY_DIR})
+include_directories(BEFORE src) # This is needed for gtest.
+
+# ---[ Subdirectories
+add_subdirectory(src/gtest)
+add_subdirectory(src/caffe)
+add_subdirectory(tools)
+add_subdirectory(examples)
+add_subdirectory(python)
+add_subdirectory(matlab)
+add_subdirectory(docs)
+
+# ---[ Linter target
+add_custom_target(lint COMMAND ${CMAKE_COMMAND} -P ${PROJECT_SOURCE_DIR}/cmake/lint.cmake)
+
+# ---[ pytest target
+add_custom_target(pytest COMMAND python${python_version} -m unittest discover -s caffe/test WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/python )
+add_dependencies(pytest pycaffe)
+
+# ---[ Configuration summary
+caffe_print_configuration_summary()
+
+# ---[ Export configs generation
+caffe_generate_export_configs()
diff --git a/caffe-crfrnn/CMakeScripts/FindAtlas.cmake b/caffe-crfrnn/CMakeScripts/FindAtlas.cmake
new file mode 100644
index 00000000..27657a6c
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindAtlas.cmake
@@ -0,0 +1,61 @@
+# Find the Atlas (and Lapack) libraries
+#
+# The following variables are optionally searched for defaults
+# Atlas_ROOT_DIR: Base directory where all Atlas components are found
+#
+# The following are set after configuration is done:
+# Atlas_FOUND
+#  Atlas_INCLUDE_DIR
+#  Atlas_LIBRARIES
+#  Atlas_LIBRARY_DIRS
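+#
+# Typical usage (a sketch, assuming this directory is on CMAKE_MODULE_PATH):
+#   find_package(Atlas REQUIRED)
+#   include_directories(${Atlas_INCLUDE_DIR})
+#   target_link_libraries(mytarget ${Atlas_LIBRARIES})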
+
+set(Atlas_INCLUDE_SEARCH_PATHS
+ /usr/include/atlas
+ /usr/include/atlas-base
+ $ENV{Atlas_ROOT_DIR}
+ $ENV{Atlas_ROOT_DIR}/include
+)
+
+set(Atlas_LIB_SEARCH_PATHS
+ /usr/lib/atlas
+ /usr/lib/atlas-base
+ $ENV{Atlas_ROOT_DIR}
+ $ENV{Atlas_ROOT_DIR}/lib
+)
+
+find_path(Atlas_CBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
+find_path(Atlas_CLAPACK_INCLUDE_DIR NAMES clapack.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
+find_library(Atlas_CBLAS_LIBRARY NAMES ptcblas_r ptcblas cblas_r cblas PATHS ${Atlas_LIB_SEARCH_PATHS})
+find_library(Atlas_BLAS_LIBRARY NAMES atlas_r atlas PATHS ${Atlas_LIB_SEARCH_PATHS})
+find_library(Atlas_LAPACK_LIBRARY NAMES alapack_r alapack lapack_atlas PATHS ${Atlas_LIB_SEARCH_PATHS})
+
+set(LOOKED_FOR
+
+ Atlas_CBLAS_INCLUDE_DIR
+ Atlas_CLAPACK_INCLUDE_DIR
+
+ Atlas_CBLAS_LIBRARY
+ Atlas_BLAS_LIBRARY
+ Atlas_LAPACK_LIBRARY
+)
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(Atlas DEFAULT_MSG ${LOOKED_FOR})
+
+if(ATLAS_FOUND)
+
+ mark_as_advanced(${LOOKED_FOR})
+
+ set(Atlas_INCLUDE_DIR
+ ${Atlas_CBLAS_INCLUDE_DIR}
+ ${Atlas_CLAPACK_INCLUDE_DIR}
+ )
+
+ set(Atlas_LIBRARIES
+ ${Atlas_LAPACK_LIBRARY}
+ ${Atlas_CBLAS_LIBRARY}
+ ${Atlas_BLAS_LIBRARY}
+ )
+
+endif(ATLAS_FOUND)
+
diff --git a/caffe-crfrnn/CMakeScripts/FindGFlags.cmake b/caffe-crfrnn/CMakeScripts/FindGFlags.cmake
new file mode 100644
index 00000000..f93c5713
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindGFlags.cmake
@@ -0,0 +1,48 @@
+# - Try to find GFLAGS
+#
+# The following variables are optionally searched for defaults
+# GFLAGS_ROOT_DIR: Base directory where all GFLAGS components are found
+#
+# The following are set after configuration is done:
+# GFLAGS_FOUND
+# GFLAGS_INCLUDE_DIRS
+# GFLAGS_LIBRARIES
+#  GFLAGS_LIBRARY_DIRS
+
+include(FindPackageHandleStandardArgs)
+
+set(GFLAGS_ROOT_DIR "" CACHE PATH "Folder containing Gflags")
+
+# We are testing only a couple of files in the include directories
+if(WIN32)
+ find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
+ PATHS ${GFLAGS_ROOT_DIR}/src/windows)
+else()
+ find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
+ PATHS ${GFLAGS_ROOT_DIR})
+endif()
+
+if(MSVC)
+ find_library(GFLAGS_LIBRARY_RELEASE
+ NAMES libgflags
+ PATHS ${GFLAGS_ROOT_DIR}
+ PATH_SUFFIXES Release)
+
+ find_library(GFLAGS_LIBRARY_DEBUG
+ NAMES libgflags-debug
+ PATHS ${GFLAGS_ROOT_DIR}
+ PATH_SUFFIXES Debug)
+
+ set(GFLAGS_LIBRARY optimized ${GFLAGS_LIBRARY_RELEASE} debug ${GFLAGS_LIBRARY_DEBUG})
+else()
+ find_library(GFLAGS_LIBRARY gflags)
+endif()
+
+find_package_handle_standard_args(GFLAGS DEFAULT_MSG
+ GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)
+
+
+if(GFLAGS_FOUND)
+ set(GFLAGS_INCLUDE_DIRS ${GFLAGS_INCLUDE_DIR})
+ set(GFLAGS_LIBRARIES ${GFLAGS_LIBRARY})
+endif()
diff --git a/caffe-crfrnn/CMakeScripts/FindGlog.cmake b/caffe-crfrnn/CMakeScripts/FindGlog.cmake
new file mode 100644
index 00000000..0dc30abd
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindGlog.cmake
@@ -0,0 +1,48 @@
+# - Try to find Glog
+#
+# The following variables are optionally searched for defaults
+# GLOG_ROOT_DIR: Base directory where all GLOG components are found
+#
+# The following are set after configuration is done:
+# GLOG_FOUND
+# GLOG_INCLUDE_DIRS
+# GLOG_LIBRARIES
+#  GLOG_LIBRARY_DIRS
+
+include(FindPackageHandleStandardArgs)
+
+set(GLOG_ROOT_DIR "" CACHE PATH "Folder containing Google glog")
+
+if(WIN32)
+ find_path(GLOG_INCLUDE_DIR glog/logging.h
+ PATHS ${GLOG_ROOT_DIR}/src/windows)
+else()
+ find_path(GLOG_INCLUDE_DIR glog/logging.h
+ PATHS ${GLOG_ROOT_DIR})
+endif()
+
+if(MSVC)
+ find_library(GLOG_LIBRARY_RELEASE libglog_static
+ PATHS ${GLOG_ROOT_DIR}
+ PATH_SUFFIXES Release)
+
+ find_library(GLOG_LIBRARY_DEBUG libglog_static
+ PATHS ${GLOG_ROOT_DIR}
+ PATH_SUFFIXES Debug)
+
+ set(GLOG_LIBRARY optimized ${GLOG_LIBRARY_RELEASE} debug ${GLOG_LIBRARY_DEBUG})
+else()
+ find_library(GLOG_LIBRARY glog
+ PATHS ${GLOG_ROOT_DIR}
+ PATH_SUFFIXES
+ lib
+ lib64)
+endif()
+
+find_package_handle_standard_args(GLOG DEFAULT_MSG
+ GLOG_INCLUDE_DIR GLOG_LIBRARY)
+
+if(GLOG_FOUND)
+ set(GLOG_INCLUDE_DIRS ${GLOG_INCLUDE_DIR})
+ set(GLOG_LIBRARIES ${GLOG_LIBRARY})
+endif()
diff --git a/caffe-crfrnn/CMakeScripts/FindLAPACK.cmake b/caffe-crfrnn/CMakeScripts/FindLAPACK.cmake
new file mode 100644
index 00000000..9641c45d
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindLAPACK.cmake
@@ -0,0 +1,190 @@
+# - Find LAPACK library
+# This module finds an installed fortran library that implements the LAPACK
+# linear-algebra interface (see http://www.netlib.org/lapack/).
+#
+# The approach follows that taken for the autoconf macro file, acx_lapack.m4
+# (distributed at http://ac-archive.sourceforge.net/ac-archive/acx_lapack.html).
+#
+# This module sets the following variables:
+# LAPACK_FOUND - set to true if a library implementing the LAPACK interface is found
+# LAPACK_LIBRARIES - list of libraries (using full path name) for LAPACK
+
+# Note: I do not think it is a good idea to mix up different BLAS/LAPACK versions
+# Hence, this script wants to find a Lapack library matching your Blas library
+
+# Do nothing if LAPACK was found before
+IF(NOT LAPACK_FOUND)
+
+SET(LAPACK_LIBRARIES)
+SET(LAPACK_INFO)
+
+IF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
+ FIND_PACKAGE(BLAS)
+ELSE(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
+ FIND_PACKAGE(BLAS REQUIRED)
+ENDIF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
+
+# Old search lapack script
+include(CheckFortranFunctionExists)
+
+macro(Check_Lapack_Libraries LIBRARIES _prefix _name _flags _list _blas)
+ # This macro checks for the existence of the combination of fortran libraries
+ # given by _list. If the combination is found, this macro checks (using the
+ # Check_Fortran_Function_Exists macro) whether can link against that library
+  # Check_Fortran_Function_Exists macro) whether it can link against that library
+ # flags given by _flags. If the combination of libraries is found and passes
+ # the link test, LIBRARIES is set to the list of complete library paths that
+ # have been found. Otherwise, LIBRARIES is set to FALSE.
+ # N.B. _prefix is the prefix applied to the names of all cached variables that
+ # are generated internally and marked advanced by this macro.
+ set(_libraries_work TRUE)
+ set(${LIBRARIES})
+ set(_combined_name)
+ foreach(_library ${_list})
+ set(_combined_name ${_combined_name}_${_library})
+ if(_libraries_work)
+ if (WIN32)
+ find_library(${_prefix}_${_library}_LIBRARY
+ NAMES ${_library} PATHS ENV LIB PATHS ENV PATH)
+ else (WIN32)
+ if(APPLE)
+ find_library(${_prefix}_${_library}_LIBRARY
+ NAMES ${_library}
+ PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64
+ ENV DYLD_LIBRARY_PATH)
+ else(APPLE)
+ find_library(${_prefix}_${_library}_LIBRARY
+ NAMES ${_library}
+ PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64
+ ENV LD_LIBRARY_PATH)
+ endif(APPLE)
+ endif(WIN32)
+ mark_as_advanced(${_prefix}_${_library}_LIBRARY)
+ set(${LIBRARIES} ${${LIBRARIES}} ${${_prefix}_${_library}_LIBRARY})
+ set(_libraries_work ${${_prefix}_${_library}_LIBRARY})
+ endif(_libraries_work)
+ endforeach(_library ${_list})
+ if(_libraries_work)
+ # Test this combination of libraries.
+ set(CMAKE_REQUIRED_LIBRARIES ${_flags} ${${LIBRARIES}} ${_blas})
+ if (CMAKE_Fortran_COMPILER_WORKS)
+ check_fortran_function_exists(${_name} ${_prefix}${_combined_name}_WORKS)
+ else (CMAKE_Fortran_COMPILER_WORKS)
+ check_function_exists("${_name}_" ${_prefix}${_combined_name}_WORKS)
+ endif (CMAKE_Fortran_COMPILER_WORKS)
+ set(CMAKE_REQUIRED_LIBRARIES)
+ mark_as_advanced(${_prefix}${_combined_name}_WORKS)
+ set(_libraries_work ${${_prefix}${_combined_name}_WORKS})
+ endif(_libraries_work)
+ if(NOT _libraries_work)
+ set(${LIBRARIES} FALSE)
+ endif(NOT _libraries_work)
+endmacro(Check_Lapack_Libraries)
+
+
+if(BLAS_FOUND)
+
+ # Intel MKL
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "mkl"))
+ IF(MKL_LAPACK_LIBRARIES)
+ SET(LAPACK_LIBRARIES ${MKL_LAPACK_LIBRARIES} ${MKL_LIBRARIES})
+ ELSE(MKL_LAPACK_LIBRARIES)
+ SET(LAPACK_LIBRARIES ${MKL_LIBRARIES})
+ ENDIF(MKL_LAPACK_LIBRARIES)
+ SET(LAPACK_INCLUDE_DIR ${MKL_INCLUDE_DIR})
+ SET(LAPACK_INFO "mkl")
+ ENDIF()
+
+ # OpenBlas
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "open"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" OPEN_LAPACK_WORKS)
+ if(OPEN_LAPACK_WORKS)
+ SET(LAPACK_INFO "open")
+ else()
+ message(STATUS "It seems OpenBlas has not been compiled with Lapack support")
+ endif()
+ endif()
+
+ # GotoBlas
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "goto"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" GOTO_LAPACK_WORKS)
+ if(GOTO_LAPACK_WORKS)
+ SET(LAPACK_INFO "goto")
+ else()
+ message(STATUS "It seems GotoBlas has not been compiled with Lapack support")
+ endif()
+ endif()
+
+ # ACML
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "acml"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" ACML_LAPACK_WORKS)
+ if(ACML_LAPACK_WORKS)
+ SET(LAPACK_INFO "acml")
+ else()
+ message(STATUS "Strangely, this ACML library does not support Lapack?!")
+ endif()
+ endif()
+
+ # Accelerate
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "accelerate"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" ACCELERATE_LAPACK_WORKS)
+ if(ACCELERATE_LAPACK_WORKS)
+ SET(LAPACK_INFO "accelerate")
+ else()
+ message(STATUS "Strangely, this Accelerate library does not support Lapack?!")
+ endif()
+ endif()
+
+ # vecLib
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "veclib"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" VECLIB_LAPACK_WORKS)
+ if(VECLIB_LAPACK_WORKS)
+ SET(LAPACK_INFO "veclib")
+ else()
+ message(STATUS "Strangely, this vecLib library does not support Lapack?!")
+ endif()
+ endif()
+
+ # Generic LAPACK library?
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "generic"))
+ check_lapack_libraries(
+ LAPACK_LIBRARIES
+ LAPACK
+ cheev
+ ""
+ "lapack"
+ "${BLAS_LIBRARIES}"
+ )
+ if(LAPACK_LIBRARIES)
+ SET(LAPACK_INFO "generic")
+ endif(LAPACK_LIBRARIES)
+ endif()
+
+else(BLAS_FOUND)
+ message(STATUS "LAPACK requires BLAS")
+endif(BLAS_FOUND)
+
+if(LAPACK_INFO)
+ set(LAPACK_FOUND TRUE)
+else(LAPACK_INFO)
+ set(LAPACK_FOUND FALSE)
+endif(LAPACK_INFO)
+
+IF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)
+ message(FATAL_ERROR "Cannot find a library with LAPACK API. Please specify library location.")
+ENDIF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)
+IF(NOT LAPACK_FIND_QUIETLY)
+ IF(LAPACK_FOUND)
+ MESSAGE(STATUS "Found a library with LAPACK API. (${LAPACK_INFO})")
+ ELSE(LAPACK_FOUND)
+ MESSAGE(STATUS "Cannot find a library with LAPACK API. Not using LAPACK.")
+ ENDIF(LAPACK_FOUND)
+ENDIF(NOT LAPACK_FIND_QUIETLY)
+
+# Do nothing if LAPACK was found before
+ENDIF(NOT LAPACK_FOUND)
diff --git a/caffe-crfrnn/CMakeScripts/FindLMDB.cmake b/caffe-crfrnn/CMakeScripts/FindLMDB.cmake
new file mode 100644
index 00000000..e615f542
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindLMDB.cmake
@@ -0,0 +1,28 @@
+# Try to find the LMDB libraries and headers
+# LMDB_FOUND - system has LMDB lib
+# LMDB_INCLUDE_DIR - the LMDB include directory
+# LMDB_LIBRARIES - Libraries needed to use LMDB
+
+# FindCWD based on FindGMP by:
+# Copyright (c) 2006, Laurent Montel,
+#
+# Redistribution and use is allowed according to the terms of the BSD license.
+
+# Adapted from FindCWD by:
+# Copyright 2013 Conrad Steenberg
+# Aug 31, 2013
+
+if (LMDB_INCLUDE_DIR AND LMDB_LIBRARIES)
+ # Already in cache, be silent
+ set(LMDB_FIND_QUIETLY TRUE)
+endif (LMDB_INCLUDE_DIR AND LMDB_LIBRARIES)
+
+find_path(LMDB_INCLUDE_DIR NAMES "lmdb.h" HINTS "$ENV{LMDB_DIR}/include")
+find_library(LMDB_LIBRARIES NAMES lmdb HINTS $ENV{LMDB_DIR}/lib )
+MESSAGE(STATUS "LMDB lib: " ${LMDB_LIBRARIES} )
+MESSAGE(STATUS "LMDB include: " ${LMDB_INCLUDE_DIR} )
+
+include(FindPackageHandleStandardArgs)
+FIND_PACKAGE_HANDLE_STANDARD_ARGS(LMDB DEFAULT_MSG LMDB_INCLUDE_DIR LMDB_LIBRARIES)
+
+mark_as_advanced(LMDB_INCLUDE_DIR LMDB_LIBRARIES)
diff --git a/caffe-crfrnn/CMakeScripts/FindLevelDB.cmake b/caffe-crfrnn/CMakeScripts/FindLevelDB.cmake
new file mode 100644
index 00000000..f3386f26
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindLevelDB.cmake
@@ -0,0 +1,37 @@
+# - Find LevelDB
+#
+# LEVELDB_INCLUDE - Where to find leveldb/db.h
+# LEVELDB_LIBS - List of libraries when using LevelDB.
+# LEVELDB_FOUND - True if LevelDB found.
+
+get_filename_component(module_file_path ${CMAKE_CURRENT_LIST_FILE} PATH)
+
+# Look for the header file.
+find_path(LEVELDB_INCLUDE NAMES leveldb/db.h PATHS $ENV{LEVELDB_ROOT}/include /opt/local/include /usr/local/include /usr/include DOC "Path in which the file leveldb/db.h is located." )
+mark_as_advanced(LEVELDB_INCLUDE)
+
+# Look for the library.
+# Does this work on UNIX systems? (LINUX)
+find_library(LEVELDB_LIBS NAMES leveldb PATHS /usr/lib $ENV{LEVELDB_ROOT}/lib DOC "Path to leveldb library." )
+mark_as_advanced(LEVELDB_LIBS)
+
+# Copy the results to the output variables.
+if (LEVELDB_INCLUDE AND LEVELDB_LIBS)
+ message(STATUS "Found leveldb in ${LEVELDB_INCLUDE} ${LEVELDB_LIBS}")
+ set(LEVELDB_FOUND 1)
+ include(CheckCXXSourceCompiles)
+ set(CMAKE_REQUIRED_LIBRARY ${LEVELDB_LIBS} pthread)
+ set(CMAKE_REQUIRED_INCLUDES ${LEVELDB_INCLUDE})
+ else ()
+ set(LEVELDB_FOUND 0)
+ endif ()
+
+ # Report the results.
+ if (NOT LEVELDB_FOUND)
+ set(LEVELDB_DIR_MESSAGE "LEVELDB was not found. Make sure LEVELDB_LIBS and LEVELDB_INCLUDE are set.")
+ if (LEVELDB_FIND_REQUIRED)
+ message(FATAL_ERROR "${LEVELDB_DIR_MESSAGE}")
+ elseif (NOT LEVELDB_FIND_QUIETLY)
+ message(STATUS "${LEVELDB_DIR_MESSAGE}")
+ endif ()
+ endif ()
\ No newline at end of file
diff --git a/caffe-crfrnn/CMakeScripts/FindMKL.cmake b/caffe-crfrnn/CMakeScripts/FindMKL.cmake
new file mode 100644
index 00000000..eb2d9f88
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindMKL.cmake
@@ -0,0 +1,113 @@
+# - Find Intel MKL
+# Find the MKL libraries
+#
+# Options:
+#
+#   MKL_STATIC        :   use static linking
+# MKL_MULTI_THREADED: use multi-threading
+# MKL_SDL : Single Dynamic Library interface
+#
+# This module defines the following variables:
+#
+#   MKL_FOUND         : True if MKL is found
+# MKL_INCLUDE_DIR : where to find mkl.h, etc.
+# MKL_INCLUDE_DIRS : set when MKL_INCLUDE_DIR found
+# MKL_LIBRARIES : the library to link against.
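+#
+# Example usage (a sketch; the options must be set before find_package):
+#   set(MKL_MULTI_THREADED ON)
+#   find_package(MKL REQUIRED)
+#   include_directories(${MKL_INCLUDE_DIRS})
+#   target_link_libraries(mytarget ${MKL_LIBRARIES})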
+
+
+include(FindPackageHandleStandardArgs)
+
+set(INTEL_ROOT "/opt/intel" CACHE PATH "Folder containing Intel libraries")
+set(MKL_ROOT ${INTEL_ROOT}/mkl CACHE PATH "Folder containing MKL")
+
+# Find include dir
+find_path(MKL_INCLUDE_DIR mkl.h
+ PATHS ${MKL_ROOT}/include)
+
+# Find the Intel include directory (for omp.h)
+# There is no separate include folder under Linux
+if(WIN32)
+ find_path(INTEL_INCLUDE_DIR omp.h
+ PATHS ${INTEL_ROOT}/include)
+ set(MKL_INCLUDE_DIR ${MKL_INCLUDE_DIR} ${INTEL_INCLUDE_DIR})
+endif()
+
+# Find libraries
+
+# Handle suffix
+set(_MKL_ORIG_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
+
+if(WIN32)
+    if(MKL_STATIC)
+ set(CMAKE_FIND_LIBRARY_SUFFIXES .lib)
+ else()
+ set(CMAKE_FIND_LIBRARY_SUFFIXES _dll.lib)
+ endif()
+else()
+    if(MKL_STATIC)
+ set(CMAKE_FIND_LIBRARY_SUFFIXES .a)
+ else()
+ set(CMAKE_FIND_LIBRARY_SUFFIXES .so)
+ endif()
+endif()
+
+
+# MKL is composed of four layers: Interface, Threading, Computational and RTL
+
+if(MKL_SDL)
+ find_library(MKL_LIBRARY mkl_rt
+ PATHS ${MKL_ROOT}/lib/ia32/)
+
+ set(MKL_MINIMAL_LIBRARY ${MKL_LIBRARY})
+else()
+ ######################### Interface layer #######################
+ if(WIN32)
+ set(MKL_INTERFACE_LIBNAME mkl_intel_c)
+ else()
+ set(MKL_INTERFACE_LIBNAME mkl_intel)
+ endif()
+
+ find_library(MKL_INTERFACE_LIBRARY ${MKL_INTERFACE_LIBNAME}
+ PATHS ${MKL_ROOT}/lib/ia32/)
+
+ ######################## Threading layer ########################
+ if(MKL_MULTI_THREADED)
+ set(MKL_THREADING_LIBNAME mkl_intel_thread)
+ else()
+ set(MKL_THREADING_LIBNAME mkl_sequential)
+ endif()
+
+ find_library(MKL_THREADING_LIBRARY ${MKL_THREADING_LIBNAME}
+ PATHS ${MKL_ROOT}/lib/ia32/)
+
+ ####################### Computational layer #####################
+ find_library(MKL_CORE_LIBRARY mkl_core
+ PATHS ${MKL_ROOT}/lib/ia32/)
+ find_library(MKL_FFT_LIBRARY mkl_cdft_core
+ PATHS ${MKL_ROOT}/lib/ia32/)
+ find_library(MKL_SCALAPACK_LIBRARY mkl_scalapack_core
+ PATHS ${MKL_ROOT}/lib/ia32/)
+
+ ############################ RTL layer ##########################
+ if(WIN32)
+ set(MKL_RTL_LIBNAME libiomp5md)
+ else()
+ set(MKL_RTL_LIBNAME libiomp5)
+ endif()
+ find_library(MKL_RTL_LIBRARY ${MKL_RTL_LIBNAME}
+ PATHS ${INTEL_RTL_ROOT}/lib)
+
+ set(MKL_LIBRARY ${MKL_INTERFACE_LIBRARY} ${MKL_THREADING_LIBRARY} ${MKL_CORE_LIBRARY} ${MKL_FFT_LIBRARY} ${MKL_SCALAPACK_LIBRARY} ${MKL_RTL_LIBRARY})
+ set(MKL_MINIMAL_LIBRARY ${MKL_INTERFACE_LIBRARY} ${MKL_THREADING_LIBRARY} ${MKL_CORE_LIBRARY} ${MKL_RTL_LIBRARY})
+endif()
+
+set(CMAKE_FIND_LIBRARY_SUFFIXES ${_MKL_ORIG_CMAKE_FIND_LIBRARY_SUFFIXES})
+
+find_package_handle_standard_args(MKL DEFAULT_MSG
+ MKL_INCLUDE_DIR MKL_LIBRARY MKL_MINIMAL_LIBRARY)
+
+if(MKL_FOUND)
+ set(MKL_INCLUDE_DIRS ${MKL_INCLUDE_DIR})
+ set(MKL_LIBRARIES ${MKL_LIBRARY})
+    set(MKL_MINIMAL_LIBRARIES ${MKL_MINIMAL_LIBRARY})
+endif()
diff --git a/caffe-crfrnn/CMakeScripts/FindNumPy.cmake b/caffe-crfrnn/CMakeScripts/FindNumPy.cmake
new file mode 100644
index 00000000..baf21541
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindNumPy.cmake
@@ -0,0 +1,103 @@
+# - Find the NumPy libraries
+# This module finds if NumPy is installed, and sets the following variables
+# indicating where it is.
+#
+# TODO: Update to provide the libraries and paths for linking npymath lib.
+#
+# NUMPY_FOUND - was NumPy found
+# NUMPY_VERSION - the version of NumPy found as a string
+# NUMPY_VERSION_MAJOR - the major version number of NumPy
+# NUMPY_VERSION_MINOR - the minor version number of NumPy
+# NUMPY_VERSION_PATCH - the patch version number of NumPy
+# NUMPY_VERSION_DECIMAL - e.g. version 1.6.1 is 10601
+# NUMPY_INCLUDE_DIRS - path to the NumPy include files
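+#
+# Example usage (a sketch; the version threshold is illustrative):
+#   find_package(NumPy REQUIRED)
+#   include_directories(${NUMPY_INCLUDE_DIRS})
+#   if(NUMPY_VERSION_DECIMAL LESS 10701)  # i.e. NumPy < 1.7.1
+#     message(FATAL_ERROR "NumPy >= 1.7.1 is required")
+#   endif()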
+
+#============================================================================
+# Copyright 2012 Continuum Analytics, Inc.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files
+# (the "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+# OTHER DEALINGS IN THE SOFTWARE.
+#
+#============================================================================
+
+# Finding NumPy involves calling the Python interpreter
+if(NumPy_FIND_REQUIRED)
+ find_package(PythonInterp REQUIRED)
+else()
+ find_package(PythonInterp)
+endif()
+
+if(NOT PYTHONINTERP_FOUND)
+ set(NUMPY_FOUND FALSE)
+ return()
+endif()
+
+execute_process(COMMAND "${PYTHON_EXECUTABLE}" "-c"
+ "import numpy as n; print(n.__version__); print(n.get_include());"
+ RESULT_VARIABLE _NUMPY_SEARCH_SUCCESS
+ OUTPUT_VARIABLE _NUMPY_VALUES_OUTPUT
+ ERROR_VARIABLE _NUMPY_ERROR_VALUE
+ OUTPUT_STRIP_TRAILING_WHITESPACE)
+
+if(NOT _NUMPY_SEARCH_SUCCESS MATCHES 0)
+ if(NumPy_FIND_REQUIRED)
+ message(FATAL_ERROR
+ "NumPy import failure:\n${_NUMPY_ERROR_VALUE}")
+ endif()
+ set(NUMPY_FOUND FALSE)
+ return()
+endif()
+
+# Convert the process output into a list
+string(REGEX REPLACE ";" "\\\\;" _NUMPY_VALUES ${_NUMPY_VALUES_OUTPUT})
+string(REGEX REPLACE "\n" ";" _NUMPY_VALUES ${_NUMPY_VALUES})
+# Just in case there is unexpected output from the Python command.
+list(GET _NUMPY_VALUES -2 NUMPY_VERSION)
+list(GET _NUMPY_VALUES -1 NUMPY_INCLUDE_DIRS)
+
+string(REGEX MATCH "^[0-9]+\\.[0-9]+\\.[0-9]+" _VER_CHECK "${NUMPY_VERSION}")
+if("${_VER_CHECK}" STREQUAL "")
+  # The output from Python was unexpected. Always raise an error here,
+  # because we found NumPy, but it appears to be corrupted somehow.
+ message(FATAL_ERROR
+ "Requested version and include path from NumPy, got instead:\n${_NUMPY_VALUES_OUTPUT}\n")
+ return()
+endif()
+
+# Make sure all directory separators are '/'
+string(REGEX REPLACE "\\\\" "/" NUMPY_INCLUDE_DIRS ${NUMPY_INCLUDE_DIRS})
+
+# Get the major and minor version numbers
+string(REGEX REPLACE "\\." ";" _NUMPY_VERSION_LIST ${NUMPY_VERSION})
+list(GET _NUMPY_VERSION_LIST 0 NUMPY_VERSION_MAJOR)
+list(GET _NUMPY_VERSION_LIST 1 NUMPY_VERSION_MINOR)
+list(GET _NUMPY_VERSION_LIST 2 NUMPY_VERSION_PATCH)
+string(REGEX MATCH "[0-9]*" NUMPY_VERSION_PATCH ${NUMPY_VERSION_PATCH})
+math(EXPR NUMPY_VERSION_DECIMAL
+ "(${NUMPY_VERSION_MAJOR} * 10000) + (${NUMPY_VERSION_MINOR} * 100) + ${NUMPY_VERSION_PATCH}")
+
+find_package_message(NUMPY
+ "Found NumPy: version \"${NUMPY_VERSION}\" ${NUMPY_INCLUDE_DIRS}"
+ "${NUMPY_INCLUDE_DIRS}${NUMPY_VERSION}")
+
+set(NUMPY_FOUND TRUE)
+
+
diff --git a/caffe-crfrnn/CMakeScripts/FindOpenBLAS.cmake b/caffe-crfrnn/CMakeScripts/FindOpenBLAS.cmake
new file mode 100644
index 00000000..b8434927
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindOpenBLAS.cmake
@@ -0,0 +1,62 @@
+
+
+SET(Open_BLAS_INCLUDE_SEARCH_PATHS
+ /usr/include
+ /usr/include/openblas-base
+ /usr/local/include
+ /usr/local/include/openblas-base
+ /opt/OpenBLAS/include
+ $ENV{OpenBLAS_HOME}
+ $ENV{OpenBLAS_HOME}/include
+)
+
+SET(Open_BLAS_LIB_SEARCH_PATHS
+ /lib/
+ /lib/openblas-base
+ /lib64/
+ /usr/lib
+ /usr/lib/openblas-base
+ /usr/lib64
+ /usr/local/lib
+ /usr/local/lib64
+ /opt/OpenBLAS/lib
+  $ENV{OpenBLAS}
+ $ENV{OpenBLAS}/lib
+ $ENV{OpenBLAS_HOME}
+ $ENV{OpenBLAS_HOME}/lib
+ )
+
+FIND_PATH(OpenBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Open_BLAS_INCLUDE_SEARCH_PATHS})
+FIND_LIBRARY(OpenBLAS_LIB NAMES openblas PATHS ${Open_BLAS_LIB_SEARCH_PATHS})
+
+SET(OpenBLAS_FOUND ON)
+
+# Check include files
+IF(NOT OpenBLAS_INCLUDE_DIR)
+ SET(OpenBLAS_FOUND OFF)
+ MESSAGE(STATUS "Could not find OpenBLAS include. Turning OpenBLAS_FOUND off")
+ENDIF()
+
+# Check libraries
+IF(NOT OpenBLAS_LIB)
+ SET(OpenBLAS_FOUND OFF)
+ MESSAGE(STATUS "Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off")
+ENDIF()
+
+IF (OpenBLAS_FOUND)
+ IF (NOT OpenBLAS_FIND_QUIETLY)
+ MESSAGE(STATUS "Found OpenBLAS libraries: ${OpenBLAS_LIB}")
+ MESSAGE(STATUS "Found OpenBLAS include: ${OpenBLAS_INCLUDE_DIR}")
+ ENDIF (NOT OpenBLAS_FIND_QUIETLY)
+ELSE (OpenBLAS_FOUND)
+ IF (OpenBLAS_FIND_REQUIRED)
+ MESSAGE(FATAL_ERROR "Could not find OpenBLAS")
+ ENDIF (OpenBLAS_FIND_REQUIRED)
+ENDIF (OpenBLAS_FOUND)
+
+MARK_AS_ADVANCED(
+ OpenBLAS_INCLUDE_DIR
+ OpenBLAS_LIB
+ OpenBLAS
+)
+
diff --git a/caffe-crfrnn/CMakeScripts/FindProtobuf.cmake b/caffe-crfrnn/CMakeScripts/FindProtobuf.cmake
new file mode 100644
index 00000000..0f94f498
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindProtobuf.cmake
@@ -0,0 +1,152 @@
+# Locate and configure the Google Protocol Buffers library.
+# Defines the following variables:
+#
+# PROTOBUF_FOUND - Found the Google Protocol Buffers library
+# PROTOBUF_INCLUDE_DIRS - Include directories for Google Protocol Buffers
+# PROTOBUF_LIBRARIES - The protobuf library
+#
+# The following cache variables are also defined:
+# PROTOBUF_LIBRARY - The protobuf library
+# PROTOBUF_PROTOC_LIBRARY - The protoc library
+# PROTOBUF_INCLUDE_DIR - The include directory for protocol buffers
+# PROTOBUF_PROTOC_EXECUTABLE - The protoc compiler
+#
+# ====================================================================
+# Example:
+#
+# find_package(Protobuf REQUIRED)
+# include_directories(${PROTOBUF_INCLUDE_DIRS})
+#
+# include_directories(${CMAKE_CURRENT_BINARY_DIR})
+# PROTOBUF_GENERATE_CPP(PROTO_SRCS PROTO_HDRS foo.proto)
+# add_executable(bar bar.cc ${PROTO_SRCS} ${PROTO_HDRS})
+# target_link_libraries(bar ${PROTOBUF_LIBRARY})
+#
+# NOTE: You may need to link against pthreads, depending
+# on the platform.
+# ====================================================================
+#
+# PROTOBUF_GENERATE_CPP (public function)
+# SRCS = Variable to define with autogenerated
+# source files
+# HDRS = Variable to define with autogenerated
+# header files
+# ARGN = proto files
+#
+# ====================================================================
+
+
+#=============================================================================
+# Copyright 2009 Kitware, Inc.
+# Copyright 2009 Philip Lowman
+# Copyright 2008 Esben Mose Hansen, Ange Optimization ApS
+#
+# Distributed under the OSI-approved BSD License (the "License");
+# see accompanying file Copyright.txt for details.
+#
+# This software is distributed WITHOUT ANY WARRANTY; without even the
+# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the License for more information.
+#=============================================================================
+# (To distribute this file outside of CMake, substitute the full
+# License text for the above reference.)
+
+function(PROTOBUF_GENERATE_PYTHON SRCS)
+ if(NOT ARGN)
+ message(SEND_ERROR "Error: PROTOBUF_GENERATE_PYTHON() called without any proto files")
+ return()
+ endif(NOT ARGN)
+
+ set(${SRCS})
+ foreach(FIL ${ARGN})
+ get_filename_component(ABS_FIL ${FIL} ABSOLUTE)
+ get_filename_component(FIL_WE ${FIL} NAME_WE)
+
+
+ list(APPEND ${SRCS} "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}_pb2.py")
+
+ add_custom_command(
+ OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}_pb2.py"
+ COMMAND ${PROTOBUF_PROTOC_EXECUTABLE}
+ ARGS --python_out ${CMAKE_CURRENT_BINARY_DIR} --proto_path ${CMAKE_CURRENT_SOURCE_DIR}
+${ABS_FIL}
+ DEPENDS ${ABS_FIL}
+ COMMENT "Running Python protocol buffer compiler on ${FIL}"
+ VERBATIM )
+ endforeach()
+
+
+ set_source_files_properties(${${SRCS}} PROPERTIES GENERATED TRUE)
+ set(${SRCS} ${${SRCS}} PARENT_SCOPE)
+endfunction()
+
+
+function(PROTOBUF_GENERATE_CPP SRCS HDRS)
+ if(NOT ARGN)
+ message(SEND_ERROR "Error: PROTOBUF_GENERATE_CPP() called without any proto files")
+ return()
+ endif(NOT ARGN)
+
+ set(${SRCS})
+ set(${HDRS})
+ foreach(FIL ${ARGN})
+ get_filename_component(ABS_FIL ${FIL} ABSOLUTE)
+ get_filename_component(FIL_WE ${FIL} NAME_WE)
+
+ list(APPEND ${SRCS} "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}.pb.cc")
+ list(APPEND ${HDRS} "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}.pb.h")
+
+ add_custom_command(
+ OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}.pb.cc"
+ "${CMAKE_CURRENT_BINARY_DIR}/${FIL_WE}.pb.h"
+ COMMAND ${PROTOBUF_PROTOC_EXECUTABLE}
+ ARGS --cpp_out ${CMAKE_CURRENT_BINARY_DIR} --proto_path ${CMAKE_CURRENT_SOURCE_DIR}
+${ABS_FIL}
+ DEPENDS ${ABS_FIL}
+ COMMENT "Running C++ protocol buffer compiler on ${FIL}"
+ VERBATIM )
+ endforeach()
+
+ set_source_files_properties(${${SRCS}} ${${HDRS}} PROPERTIES GENERATED TRUE)
+ set(${SRCS} ${${SRCS}} PARENT_SCOPE)
+ set(${HDRS} ${${HDRS}} PARENT_SCOPE)
+endfunction()
+
+
+find_path(PROTOBUF_INCLUDE_DIR google/protobuf/service.h)
+
+# Google's provided vcproj files generate libraries with a "lib"
+# prefix on Windows
+if(WIN32)
+ set(PROTOBUF_ORIG_FIND_LIBRARY_PREFIXES "${CMAKE_FIND_LIBRARY_PREFIXES}")
+ set(CMAKE_FIND_LIBRARY_PREFIXES "lib" "")
+endif()
+
+find_library(PROTOBUF_LIBRARY NAMES protobuf
+ DOC "The Google Protocol Buffers Library"
+)
+find_library(PROTOBUF_PROTOC_LIBRARY NAMES protoc
+ DOC "The Google Protocol Buffers Compiler Library"
+)
+find_program(PROTOBUF_PROTOC_EXECUTABLE NAMES protoc
+ DOC "The Google Protocol Buffers Compiler"
+)
+
+mark_as_advanced(PROTOBUF_INCLUDE_DIR
+ PROTOBUF_LIBRARY
+ PROTOBUF_PROTOC_LIBRARY
+ PROTOBUF_PROTOC_EXECUTABLE)
+
+# Restore original find library prefixes
+if(WIN32)
+ set(CMAKE_FIND_LIBRARY_PREFIXES "${PROTOBUF_ORIG_FIND_LIBRARY_PREFIXES}")
+endif()
+
+include(FindPackageHandleStandardArgs)
+FIND_PACKAGE_HANDLE_STANDARD_ARGS(PROTOBUF DEFAULT_MSG
+ PROTOBUF_LIBRARY PROTOBUF_INCLUDE_DIR)
+
+if(PROTOBUF_FOUND)
+ set(PROTOBUF_INCLUDE_DIRS ${PROTOBUF_INCLUDE_DIR})
+ set(PROTOBUF_LIBRARIES ${PROTOBUF_LIBRARY})
+endif()
diff --git a/caffe-crfrnn/CMakeScripts/FindSnappy.cmake b/caffe-crfrnn/CMakeScripts/FindSnappy.cmake
new file mode 100644
index 00000000..d769b442
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/FindSnappy.cmake
@@ -0,0 +1,33 @@
+# Find the Snappy libraries
+#
+# The following variables are optionally searched for defaults
+# Snappy_ROOT_DIR: Base directory where all Snappy components are found
+#
+# The following are set after configuration is done:
+# Snappy_FOUND
+# Snappy_INCLUDE_DIRS
+# Snappy_LIBS
+
+find_path(SNAPPY_INCLUDE_DIR
+ NAMES snappy.h
+ HINTS ${SNAPPY_ROOT_DIR}
+ ${SNAPPY_ROOT_DIR}/include
+)
+
+find_library(SNAPPY_LIBS
+ NAMES snappy
+ HINTS ${SNAPPY_ROOT_DIR}
+ ${SNAPPY_ROOT_DIR}/lib
+)
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(Snappy
+ DEFAULT_MSG
+ SNAPPY_LIBS
+ SNAPPY_INCLUDE_DIR
+)
+
+mark_as_advanced(
+ SNAPPY_LIBS
+ SNAPPY_INCLUDE_DIR
+)
diff --git a/caffe-crfrnn/CMakeScripts/lint.cmake b/caffe-crfrnn/CMakeScripts/lint.cmake
new file mode 100644
index 00000000..04df3409
--- /dev/null
+++ b/caffe-crfrnn/CMakeScripts/lint.cmake
@@ -0,0 +1,48 @@
+
+set(CMAKE_SOURCE_DIR ../)
+set(LINT_COMMAND ${CMAKE_SOURCE_DIR}/scripts/cpp_lint.py)
+set(SRC_FILE_EXTENSIONS h hpp hu c cpp cu cc)
+set(EXCLUDE_FILE_EXTENSIONS pb.h pb.cc)
+set(LINT_DIRS include src/caffe examples tools python matlab)
+
+# find all files of interest
+foreach(ext ${SRC_FILE_EXTENSIONS})
+ foreach(dir ${LINT_DIRS})
+ file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/${dir}/*.${ext})
+ set(LINT_SOURCES ${LINT_SOURCES} ${FOUND_FILES})
+ endforeach()
+endforeach()
+
+# find all files that should be excluded
+foreach(ext ${EXCLUDE_FILE_EXTENSIONS})
+ file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/*.${ext})
+ set(EXCLUDED_FILES ${EXCLUDED_FILES} ${FOUND_FILES})
+endforeach()
+
+# exclude generated pb files
+list(REMOVE_ITEM LINT_SOURCES ${EXCLUDED_FILES})
+
+execute_process(
+ COMMAND ${LINT_COMMAND} ${LINT_SOURCES}
+ ERROR_VARIABLE LINT_OUTPUT
+ ERROR_STRIP_TRAILING_WHITESPACE
+)
+
+string(REPLACE "\n" ";" LINT_OUTPUT ${LINT_OUTPUT})
+
+list(GET LINT_OUTPUT -1 LINT_RESULT)
+list(REMOVE_AT LINT_OUTPUT -1)
+string(REPLACE " " ";" LINT_RESULT ${LINT_RESULT})
+list(GET LINT_RESULT -1 NUM_ERRORS)
+if(NUM_ERRORS GREATER 0)
+ foreach(msg ${LINT_OUTPUT})
+ string(FIND ${msg} "Done" result)
+ if(result LESS 0)
+ message(STATUS ${msg})
+ endif()
+ endforeach()
+ message(FATAL_ERROR "Lint found ${NUM_ERRORS} errors!")
+else()
+ message(STATUS "Lint did not find any errors!")
+endif()
+
diff --git a/caffe-crfrnn/INSTALL.md b/caffe-crfrnn/INSTALL.md
new file mode 100644
index 00000000..42fcf027
--- /dev/null
+++ b/caffe-crfrnn/INSTALL.md
@@ -0,0 +1,7 @@
+# Installation
+
+See http://caffe.berkeleyvision.org/installation.html for the latest
+installation instructions.
+
+Check the issue tracker in case you need help:
+https://github.com/BVLC/caffe/issues
diff --git a/caffe-crfrnn/LICENSE b/caffe-crfrnn/LICENSE
new file mode 100644
index 00000000..9a715d03
--- /dev/null
+++ b/caffe-crfrnn/LICENSE
@@ -0,0 +1,49 @@
+COPYRIGHT
+All contributions by the University of Oxford:
+Copyright (c) 2015, All rights reserved.
+
+All contributions by Baidu Institute of Deep Learning:
+Copyright (c) 2015, All rights reserved.
+
+All contributions by the University of California:
+Copyright (c) 2014, The Regents of the University of California (Regents)
+All rights reserved.
+
+All other contributions:
+Copyright (c) 2014, the respective contributors
+All rights reserved.
+
+Caffe uses a shared copyright model: each contributor holds copyright over
+their contributions to Caffe. The project versioning records all such
+contribution and copyright details. If a contributor wants to further mark
+their specific copyright on a particular contribution, they should indicate
+their copyright solely in the commit message of the change when it is
+committed.
+
+LICENSE
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+CONTRIBUTION AGREEMENT
+
+By contributing to the BVLC/caffe repository through pull-request, comment,
+or otherwise, the contributor releases their content to the
+license and copyright terms herein.
diff --git a/caffe-crfrnn/Makefile b/caffe-crfrnn/Makefile
new file mode 100644
index 00000000..ea97392d
--- /dev/null
+++ b/caffe-crfrnn/Makefile
@@ -0,0 +1,584 @@
+PROJECT := caffe
+
+CONFIG_FILE := Makefile.config
+include $(CONFIG_FILE)
+
+BUILD_DIR_LINK := $(BUILD_DIR)
+RELEASE_BUILD_DIR ?= .$(BUILD_DIR)_release
+DEBUG_BUILD_DIR ?= .$(BUILD_DIR)_debug
+
+DEBUG ?= 0
+ifeq ($(DEBUG), 1)
+ BUILD_DIR := $(DEBUG_BUILD_DIR)
+ OTHER_BUILD_DIR := $(RELEASE_BUILD_DIR)
+else
+ BUILD_DIR := $(RELEASE_BUILD_DIR)
+ OTHER_BUILD_DIR := $(DEBUG_BUILD_DIR)
+endif
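+# For example, 'make DEBUG=1' compiles with -g -O0 into the debug build
+# directory (.build_debug, assuming BUILD_DIR is set to 'build' in
+# Makefile.config), while plain 'make' produces an optimized release build.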
+
+# All of the directories containing code.
+SRC_DIRS := $(shell find * -type d -exec bash -c "find {} -maxdepth 1 \
+ \( -name '*.cpp' -o -name '*.proto' \) | grep -q ." \; -print)
+
+# The target shared library name
+LIB_BUILD_DIR := $(BUILD_DIR)/lib
+STATIC_NAME := $(LIB_BUILD_DIR)/lib$(PROJECT).a
+DYNAMIC_NAME := $(LIB_BUILD_DIR)/lib$(PROJECT).so
+
+##############################
+# Get all source files
+##############################
+# CXX_SRCS are the source files excluding the test ones.
+CXX_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cpp" -name "*.cpp")
+# CU_SRCS are the cuda source files
+CU_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cu" -name "*.cu")
+# TEST_SRCS are the test source files
+TEST_MAIN_SRC := src/$(PROJECT)/test/test_caffe_main.cpp
+TEST_SRCS := $(shell find src/$(PROJECT) -name "test_*.cpp")
+TEST_SRCS := $(filter-out $(TEST_MAIN_SRC), $(TEST_SRCS))
+TEST_CU_SRCS := $(shell find src/$(PROJECT) -name "test_*.cu")
+GTEST_SRC := src/gtest/gtest-all.cpp
+# TOOL_SRCS are the source files for the tool binaries
+TOOL_SRCS := $(shell find tools -name "*.cpp")
+# EXAMPLE_SRCS are the source files for the example binaries
+EXAMPLE_SRCS := $(shell find examples -name "*.cpp")
+# BUILD_INCLUDE_DIR contains any generated header files we want to include.
+BUILD_INCLUDE_DIR := $(BUILD_DIR)/src
+# PROTO_SRCS are the protocol buffer definitions
+PROTO_SRC_DIR := src/$(PROJECT)/proto
+PROTO_SRCS := $(wildcard $(PROTO_SRC_DIR)/*.proto)
+# PROTO_BUILD_DIR will contain the .cc and obj files generated from
+# PROTO_SRCS; PROTO_BUILD_INCLUDE_DIR will contain the .h header files
+PROTO_BUILD_DIR := $(BUILD_DIR)/$(PROTO_SRC_DIR)
+PROTO_BUILD_INCLUDE_DIR := $(BUILD_INCLUDE_DIR)/$(PROJECT)/proto
+# NONGEN_CXX_SRCS includes all source/header files except those generated
+# automatically (e.g., by proto).
+NONGEN_CXX_SRCS := $(shell find \
+ src/$(PROJECT) \
+ include/$(PROJECT) \
+ python/$(PROJECT) \
+ matlab/$(PROJECT) \
+ examples \
+ tools \
+ -name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh")
+LINT_SCRIPT := scripts/cpp_lint.py
+LINT_OUTPUT_DIR := $(BUILD_DIR)/.lint
+LINT_EXT := lint.txt
+LINT_OUTPUTS := $(addsuffix .$(LINT_EXT), $(addprefix $(LINT_OUTPUT_DIR)/, $(NONGEN_CXX_SRCS)))
+EMPTY_LINT_REPORT := $(BUILD_DIR)/.$(LINT_EXT)
+NONEMPTY_LINT_REPORT := $(BUILD_DIR)/$(LINT_EXT)
+# PY$(PROJECT)_SRC is the python wrapper for $(PROJECT)
+PY$(PROJECT)_SRC := python/$(PROJECT)/_$(PROJECT).cpp
+PY$(PROJECT)_HXX_SRC := python/$(PROJECT)/_$(PROJECT).hpp
+PY$(PROJECT)_SO := python/$(PROJECT)/_$(PROJECT).so
+# MAT$(PROJECT)_SRC is the matlab wrapper for $(PROJECT)
+MAT$(PROJECT)_SRC := matlab/$(PROJECT)/mat$(PROJECT).cpp
+ifneq ($(MATLAB_DIR),)
+ MAT_SO_EXT := $(shell $(MATLAB_DIR)/bin/mexext)
+endif
+MAT$(PROJECT)_SO := matlab/$(PROJECT)/$(PROJECT).$(MAT_SO_EXT)
+
+##############################
+# Derive generated files
+##############################
+# The generated files for protocol buffers
+PROTO_GEN_HEADER_SRCS := $(addprefix $(PROTO_BUILD_DIR)/, \
+ $(notdir ${PROTO_SRCS:.proto=.pb.h}))
+PROTO_GEN_HEADER := $(addprefix $(PROTO_BUILD_INCLUDE_DIR)/, \
+ $(notdir ${PROTO_SRCS:.proto=.pb.h}))
+PROTO_GEN_CC := $(addprefix $(BUILD_DIR)/, ${PROTO_SRCS:.proto=.pb.cc})
+PY_PROTO_BUILD_DIR := python/$(PROJECT)/proto
+PY_PROTO_INIT := python/$(PROJECT)/proto/__init__.py
+PROTO_GEN_PY := $(foreach file,${PROTO_SRCS:.proto=_pb2.py}, \
+ $(PY_PROTO_BUILD_DIR)/$(notdir $(file)))
+# The objects corresponding to the source files
+# These objects will be linked into the final shared library, so we
+# exclude the tool, example, and test objects.
+CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o})
+CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o})
+PROTO_OBJS := ${PROTO_GEN_CC:.cc=.o}
+OBJS := $(PROTO_OBJS) $(CXX_OBJS) $(CU_OBJS)
+# tool, example, and test objects
+TOOL_OBJS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o})
+TOOL_BUILD_DIR := $(BUILD_DIR)/tools
+TEST_CXX_BUILD_DIR := $(BUILD_DIR)/src/$(PROJECT)/test
+TEST_CU_BUILD_DIR := $(BUILD_DIR)/cuda/src/$(PROJECT)/test
+TEST_CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o})
+TEST_CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o})
+TEST_OBJS := $(TEST_CXX_OBJS) $(TEST_CU_OBJS)
+GTEST_OBJ := $(addprefix $(BUILD_DIR)/, ${GTEST_SRC:.cpp=.o})
+EXAMPLE_OBJS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o})
+# Output files for automatic dependency generation
+DEPS := ${CXX_OBJS:.o=.d} ${CU_OBJS:.o=.d} ${TEST_CXX_OBJS:.o=.d} \
+ ${TEST_CU_OBJS:.o=.d}
+# tool, example, and test bins
+TOOL_BINS := ${TOOL_OBJS:.o=.bin}
+EXAMPLE_BINS := ${EXAMPLE_OBJS:.o=.bin}
+# symlinks to tool bins without the ".bin" extension
+TOOL_BIN_LINKS := ${TOOL_BINS:.bin=}
+# Put the test binaries in build/test for convenience.
+TEST_BIN_DIR := $(BUILD_DIR)/test
+TEST_CU_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
+ $(foreach obj,$(TEST_CU_OBJS),$(basename $(notdir $(obj))))))
+TEST_CXX_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
+ $(foreach obj,$(TEST_CXX_OBJS),$(basename $(notdir $(obj))))))
+TEST_BINS := $(TEST_CXX_BINS) $(TEST_CU_BINS)
+# TEST_ALL_BIN is the test binary that links caffe statically.
+TEST_ALL_BIN := $(TEST_BIN_DIR)/test_all.testbin
+# TEST_ALL_DYNLINK_BIN is the test binary that links caffe as a dynamic library.
+TEST_ALL_DYNLINK_BIN := $(TEST_BIN_DIR)/test_all_dynamic_link.testbin
+
+##############################
+# Derive compiler warning dump locations
+##############################
+WARNS_EXT := warnings.txt
+CXX_WARNS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o.$(WARNS_EXT)})
+CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o.$(WARNS_EXT)})
+TOOL_WARNS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o.$(WARNS_EXT)})
+EXAMPLE_WARNS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o.$(WARNS_EXT)})
+TEST_WARNS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o.$(WARNS_EXT)})
+TEST_CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o.$(WARNS_EXT)})
+ALL_CXX_WARNS := $(CXX_WARNS) $(TOOL_WARNS) $(EXAMPLE_WARNS) $(TEST_WARNS)
+ALL_CU_WARNS := $(CU_WARNS) $(TEST_CU_WARNS)
+ALL_WARNS := $(ALL_CXX_WARNS) $(ALL_CU_WARNS)
+
+EMPTY_WARN_REPORT := $(BUILD_DIR)/.$(WARNS_EXT)
+NONEMPTY_WARN_REPORT := $(BUILD_DIR)/$(WARNS_EXT)
+
+##############################
+# Derive include and lib directories
+##############################
+CUDA_INCLUDE_DIR := $(CUDA_DIR)/include
+
+CUDA_LIB_DIR :=
+# add /lib64 only if it exists
+ifneq ("$(wildcard $(CUDA_DIR)/lib64)","")
+ CUDA_LIB_DIR += $(CUDA_DIR)/lib64
+endif
+CUDA_LIB_DIR += $(CUDA_DIR)/lib
+
+INCLUDE_DIRS += $(BUILD_INCLUDE_DIR) ./src ./include
+ifneq ($(CPU_ONLY), 1)
+ INCLUDE_DIRS += $(CUDA_INCLUDE_DIR)
+ LIBRARY_DIRS += $(CUDA_LIB_DIR)
+ LIBRARIES := cudart cublas curand
+endif
+LIBRARIES += glog gflags protobuf leveldb snappy \
+ lmdb boost_system hdf5_hl hdf5 m \
+ opencv_core opencv_highgui opencv_imgproc
+PYTHON_LIBRARIES := boost_python python2.7
+WARNINGS := -Wall -Wno-sign-compare
+
+##############################
+# Set build directories
+##############################
+
+DISTRIBUTE_SUBDIRS := $(DISTRIBUTE_DIR)/bin $(DISTRIBUTE_DIR)/lib
+DIST_ALIASES := dist
+ifneq ($(strip $(DISTRIBUTE_DIR)),distribute)
+ DIST_ALIASES += distribute
+endif
+
+ALL_BUILD_DIRS := $(sort $(BUILD_DIR) $(addprefix $(BUILD_DIR)/, $(SRC_DIRS)) \
+ $(addprefix $(BUILD_DIR)/cuda/, $(SRC_DIRS)) \
+ $(LIB_BUILD_DIR) $(TEST_BIN_DIR) $(PY_PROTO_BUILD_DIR) $(LINT_OUTPUT_DIR) \
+ $(DISTRIBUTE_SUBDIRS) $(PROTO_BUILD_INCLUDE_DIR))
+
+##############################
+# Set directory for Doxygen-generated documentation
+##############################
+DOXYGEN_CONFIG_FILE ?= ./.Doxyfile
+# should be the same as OUTPUT_DIRECTORY in the .Doxyfile
+DOXYGEN_OUTPUT_DIR ?= ./doxygen
+DOXYGEN_COMMAND ?= doxygen
+# All the files that might have Doxygen documentation.
+DOXYGEN_SOURCES := $(shell find \
+ src/$(PROJECT) \
+ include/$(PROJECT) \
+ python/ \
+ matlab/ \
+ examples \
+ tools \
+ -name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh" -or \
+ -name "*.py" -or -name "*.m")
+DOXYGEN_SOURCES += $(DOXYGEN_CONFIG_FILE)
+
+
+##############################
+# Configure build
+##############################
+
+# Determine platform
+UNAME := $(shell uname -s)
+ifeq ($(UNAME), Linux)
+ LINUX := 1
+else ifeq ($(UNAME), Darwin)
+ OSX := 1
+endif
+
+# Linux
+ifeq ($(LINUX), 1)
+ CXX ?= /usr/bin/g++
+ GCCVERSION := $(shell $(CXX) -dumpversion | cut -f1,2 -d.)
+  # older versions of gcc are too dumb to build boost with -Wuninitialized
+ ifeq ($(shell echo $(GCCVERSION) \< 4.6 | bc), 1)
+ WARNINGS += -Wno-uninitialized
+ endif
+ # boost::thread is reasonably called boost_thread (compare OS X)
+ # We will also explicitly add stdc++ to the link target.
+ LIBRARIES += boost_thread stdc++
+endif
+
+# OS X:
+# clang++ instead of g++
+# libstdc++ instead of libc++ for CUDA compatibility on 10.9
+ifeq ($(OSX), 1)
+ CXX := /usr/bin/clang++
+ # clang throws this warning for cuda headers
+ WARNINGS += -Wno-unneeded-internal-declaration
+ ifneq ($(findstring 10.9, $(shell sw_vers -productVersion)),)
+ CXXFLAGS += -stdlib=libstdc++
+ LINKFLAGS += -stdlib=libstdc++
+ endif
+ # boost::thread is called boost_thread-mt to mark multithreading on OS X
+ LIBRARIES += boost_thread-mt
+ NVCCFLAGS += -DOSX
+endif
+
+# Custom compiler
+ifdef CUSTOM_CXX
+ CXX := $(CUSTOM_CXX)
+endif
+
+# Static linking
+ifneq (,$(findstring clang++,$(CXX)))
+ STATIC_LINK_COMMAND := -Wl,-force_load $(STATIC_NAME)
+else ifneq (,$(findstring g++,$(CXX)))
+ STATIC_LINK_COMMAND := -Wl,--whole-archive $(STATIC_NAME) -Wl,--no-whole-archive
+else
+ $(error Cannot static link with the $(CXX) compiler.)
+endif
+
+# Debugging
+ifeq ($(DEBUG), 1)
+ COMMON_FLAGS += -DDEBUG -g -O0
+ NVCCFLAGS += -G
+else
+ COMMON_FLAGS += -DNDEBUG -O2
+endif
+
+# cuDNN acceleration configuration.
+ifeq ($(USE_CUDNN), 1)
+ LIBRARIES += cudnn
+ COMMON_FLAGS += -DUSE_CUDNN
+endif
+
+# CPU-only configuration
+ifeq ($(CPU_ONLY), 1)
+ OBJS := $(PROTO_OBJS) $(CXX_OBJS)
+ TEST_OBJS := $(TEST_CXX_OBJS)
+ TEST_BINS := $(TEST_CXX_BINS)
+ ALL_WARNS := $(ALL_CXX_WARNS)
+ TEST_FILTER := --gtest_filter="-*GPU*"
+ COMMON_FLAGS += -DCPU_ONLY
+endif
+
+# BLAS configuration (default = ATLAS)
+BLAS ?= atlas
+ifeq ($(BLAS), mkl)
+ # MKL
+ LIBRARIES += mkl_rt
+ COMMON_FLAGS += -DUSE_MKL
+ MKL_DIR ?= /opt/intel/mkl
+ BLAS_INCLUDE ?= $(MKL_DIR)/include
+ BLAS_LIB ?= $(MKL_DIR)/lib $(MKL_DIR)/lib/intel64
+else ifeq ($(BLAS), open)
+ # OpenBLAS
+ LIBRARIES += openblas
+else
+ # ATLAS
+ ifeq ($(LINUX), 1)
+ ifeq ($(BLAS), atlas)
+ # Linux simply has cblas and atlas
+ LIBRARIES += cblas atlas
+ endif
+ else ifeq ($(OSX), 1)
+ # OS X packages atlas as the vecLib framework
+ BLAS_INCLUDE ?= /System/Library/Frameworks/vecLib.framework/Versions/Current/Headers/
+ LIBRARIES += cblas
+ LDFLAGS += -framework vecLib
+ endif
+endif
+INCLUDE_DIRS += $(BLAS_INCLUDE)
+LIBRARY_DIRS += $(BLAS_LIB)
+
+LIBRARY_DIRS += $(LIB_BUILD_DIR)
+
+# Automatic dependency generation (nvcc is handled separately)
+CXXFLAGS += -MMD -MP
+
+# Complete build flags.
+COMMON_FLAGS += $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
+CXXFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
+NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
+# mex may invoke an older gcc that is too liberal with -Wuninitialized
+MATLAB_CXXFLAGS := $(CXXFLAGS) -Wno-uninitialized
+LINKFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
+
+USE_PKG_CONFIG ?= 0
+ifeq ($(USE_PKG_CONFIG), 1)
+ PKG_CONFIG := $(shell pkg-config opencv --libs)
+else
+ PKG_CONFIG :=
+endif
+LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) $(PKG_CONFIG) \
+ $(foreach library,$(LIBRARIES),-l$(library))
+PYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))
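+# LIBRARY_DIRS and LIBRARIES expand to -L/-l flags here; e.g. the Linux
+# 'boost_thread stdc++' entries above become '-lboost_thread -lstdc++'.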
+
+# 'superclean' target recursively* deletes all files ending with an extension
+# in $(SUPERCLEAN_EXTS) below. This may be useful if you've built older
+# versions of Caffe that do not place all generated files in a location known
+# to the 'clean' target.
+#
+# 'supercleanlist' will list the files to be deleted by make superclean.
+#
+# * Recursive with the exception that symbolic links are never followed, per the
+# default behavior of 'find'.
+SUPERCLEAN_EXTS := .so .a .o .bin .testbin .pb.cc .pb.h _pb2.py .cuo
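+# Typical use: 'make supercleanlist' to preview, then 'make superclean' to
+# delete the matches (found recursively from ./, excluding ./data).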
+
+##############################
+# Define build targets
+##############################
+.PHONY: all test clean docs linecount lint lintclean tools examples $(DIST_ALIASES) \
+ py mat py$(PROJECT) mat$(PROJECT) proto runtest \
+ superclean supercleanlist supercleanfiles warn everything
+
+all: $(STATIC_NAME) $(DYNAMIC_NAME) tools examples
+
+everything: all py$(PROJECT) mat$(PROJECT) test warn lint runtest
+
+linecount:
+ cloc --read-lang-def=$(PROJECT).cloc \
+ src/$(PROJECT) include/$(PROJECT) tools examples \
+ python matlab
+
+lint: $(EMPTY_LINT_REPORT)
+
+lintclean:
+ @ $(RM) -r $(LINT_OUTPUT_DIR) $(EMPTY_LINT_REPORT) $(NONEMPTY_LINT_REPORT)
+
+docs: $(DOXYGEN_OUTPUT_DIR)
+ @ cd ./docs ; ln -sfn ../$(DOXYGEN_OUTPUT_DIR)/html doxygen
+
+$(DOXYGEN_OUTPUT_DIR): $(DOXYGEN_CONFIG_FILE) $(DOXYGEN_SOURCES)
+ $(DOXYGEN_COMMAND) $(DOXYGEN_CONFIG_FILE)
+
+$(EMPTY_LINT_REPORT): $(LINT_OUTPUTS) | $(BUILD_DIR)
+ @ cat $(LINT_OUTPUTS) > $@
+ @ if [ -s "$@" ]; then \
+ cat $@; \
+ mv $@ $(NONEMPTY_LINT_REPORT); \
+ echo "Found one or more lint errors."; \
+ exit 1; \
+ fi; \
+ $(RM) $(NONEMPTY_LINT_REPORT); \
+ echo "No lint errors!";
+
+$(LINT_OUTPUTS): $(LINT_OUTPUT_DIR)/%.lint.txt : % $(LINT_SCRIPT) | $(LINT_OUTPUT_DIR)
+ @ mkdir -p $(dir $@)
+ @ python $(LINT_SCRIPT) $< 2>&1 \
+ | grep -v "^Done processing " \
+ | grep -v "^Total errors found: 0" \
+ > $@ \
+ || true
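+# Each linted source gets its own report under $(LINT_OUTPUT_DIR); the greps
+# drop the lint script's per-file summary lines, so an empty report means a
+# clean file and the aggregate rule above just checks for non-empty output.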
+
+test: $(TEST_ALL_BIN) $(TEST_ALL_DYNLINK_BIN) $(TEST_BINS)
+
+tools: $(TOOL_BINS) $(TOOL_BIN_LINKS)
+
+examples: $(EXAMPLE_BINS)
+
+py$(PROJECT): py
+
+py: $(PY$(PROJECT)_SO) $(PROTO_GEN_PY)
+
+$(PY$(PROJECT)_SO): $(PY$(PROJECT)_SRC) $(STATIC_NAME) $(PY$(PROJECT)_HXX_SRC)
+ @ echo CXX $<
+ $(Q)$(CXX) -shared -o $@ $(PY$(PROJECT)_SRC) \
+ $(STATIC_LINK_COMMAND) $(LINKFLAGS) $(PYTHON_LDFLAGS)
+
+mat$(PROJECT): mat
+
+mat: $(MAT$(PROJECT)_SO)
+
+$(MAT$(PROJECT)_SO): $(MAT$(PROJECT)_SRC) $(STATIC_NAME)
+ @ if [ -z "$(MATLAB_DIR)" ]; then \
+ echo "MATLAB_DIR must be specified in $(CONFIG_FILE)" \
+ "to build mat$(PROJECT)."; \
+ exit 1; \
+ fi
+ @ echo MEX $<
+ $(Q)$(MATLAB_DIR)/bin/mex $(MAT$(PROJECT)_SRC) \
+ CXX="$(CXX)" \
+ CXXFLAGS="\$$CXXFLAGS $(MATLAB_CXXFLAGS)" \
+ CXXLIBS="\$$CXXLIBS $(STATIC_LINK_COMMAND) $(LDFLAGS)" -output $@
+
+runtest: $(TEST_ALL_BIN) $(TEST_ALL_DYNLINK_BIN)
+ $(TEST_ALL_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER) && \
+ $(TEST_ALL_DYNLINK_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER)
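+# 'make runtest' exercises both the statically and the dynamically linked test
+# binaries; TEST_GPUID (set in Makefile.config) picks the GPU the tests use.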
+
+warn: $(EMPTY_WARN_REPORT)
+
+$(EMPTY_WARN_REPORT): $(ALL_WARNS) | $(BUILD_DIR)
+ @ cat $(ALL_WARNS) > $@
+ @ if [ -s "$@" ]; then \
+ cat $@; \
+ mv $@ $(NONEMPTY_WARN_REPORT); \
+ echo "Compiler produced one or more warnings."; \
+ exit 1; \
+ fi; \
+ $(RM) $(NONEMPTY_WARN_REPORT); \
+ echo "No compiler warnings!";
+
+$(ALL_WARNS): %.o.$(WARNS_EXT) : %.o
+
+$(BUILD_DIR_LINK): $(BUILD_DIR)/.linked
+
+# Create a target ".linked" in this BUILD_DIR to tell Make that the "build" link
+# is currently correct, then delete the one in the OTHER_BUILD_DIR in case it
+# exists and $(DEBUG) is toggled later.
+$(BUILD_DIR)/.linked:
+ @ mkdir -p $(BUILD_DIR)
+ @ $(RM) $(OTHER_BUILD_DIR)/.linked
+ @ $(RM) -r $(BUILD_DIR_LINK)
+ @ ln -s $(BUILD_DIR) $(BUILD_DIR_LINK)
+ @ touch $@
+
+$(ALL_BUILD_DIRS): | $(BUILD_DIR_LINK)
+ @ mkdir -p $@
+
+$(DYNAMIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
+ @ echo LD $<
+ $(Q)$(CXX) -shared -o $@ $(OBJS) $(LINKFLAGS) $(LDFLAGS)
+
+$(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
+ @ echo AR $<
+ $(Q)ar rcs $@ $(OBJS)
+
+$(BUILD_DIR)/%.o: %.cpp | $(ALL_BUILD_DIRS)
+ @ echo CXX $<
+ $(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
+ || (cat $@.$(WARNS_EXT); exit 1)
+ @ cat $@.$(WARNS_EXT)
+
+$(PROTO_BUILD_DIR)/%.pb.o: $(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_GEN_HEADER) \
+ | $(PROTO_BUILD_DIR)
+ @ echo CXX $<
+ $(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
+ || (cat $@.$(WARNS_EXT); exit 1)
+ @ cat $@.$(WARNS_EXT)
+
+$(BUILD_DIR)/cuda/%.o: %.cu | $(ALL_BUILD_DIRS)
+ @ echo NVCC $<
+ $(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -M $< -o ${@:.o=.d} \
+ -odir $(@D)
+ $(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@ 2> $@.$(WARNS_EXT) \
+ || (cat $@.$(WARNS_EXT); exit 1)
+ @ cat $@.$(WARNS_EXT)
+
+$(TEST_ALL_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) $(STATIC_NAME) \
+ | $(TEST_BIN_DIR)
+ @ echo CXX/LD -o $@ $<
+ $(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) $(STATIC_LINK_COMMAND) \
+ -o $@ $(LINKFLAGS) $(LDFLAGS)
+
+$(TEST_ALL_DYNLINK_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) $(DYNAMIC_NAME) \
+ | $(TEST_BIN_DIR)
+ @ echo CXX/LD -o $@ $<
+ $(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
+ -o $@ $(LINKFLAGS) $(LDFLAGS) -l$(PROJECT) -Wl,-rpath,$(LIB_BUILD_DIR)
+
+$(TEST_CU_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CU_BUILD_DIR)/%.o \
+ $(GTEST_OBJ) $(STATIC_NAME) | $(TEST_BIN_DIR)
+ @ echo LD $<
+ $(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) $(STATIC_LINK_COMMAND) \
+ -o $@ $(LINKFLAGS) $(LDFLAGS)
+
+$(TEST_CXX_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CXX_BUILD_DIR)/%.o \
+ $(GTEST_OBJ) $(STATIC_NAME) | $(TEST_BIN_DIR)
+ @ echo LD $<
+ $(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) $(STATIC_LINK_COMMAND) \
+ -o $@ $(LINKFLAGS) $(LDFLAGS)
+
+# Target for extension-less symlinks to tool binaries with extension '*.bin'.
+$(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)
+ @ $(RM) $@
+ @ ln -s $(abspath $<) $@
+
+$(TOOL_BINS) $(EXAMPLE_BINS): %.bin : %.o $(STATIC_NAME)
+ @ echo LD $<
+ $(Q)$(CXX) $< $(STATIC_LINK_COMMAND) -o $@ $(LINKFLAGS) $(LDFLAGS)
+
+proto: $(PROTO_GEN_CC) $(PROTO_GEN_HEADER)
+
+$(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_BUILD_DIR)/%.pb.h : \
+ $(PROTO_SRC_DIR)/%.proto | $(PROTO_BUILD_DIR)
+ @ echo PROTOC $<
+ $(Q)protoc --proto_path=$(PROTO_SRC_DIR) --cpp_out=$(PROTO_BUILD_DIR) $<
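+# e.g. a caffe.proto under $(PROTO_SRC_DIR) yields caffe.pb.cc and caffe.pb.h
+# under $(PROTO_BUILD_DIR).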
+
+$(PY_PROTO_BUILD_DIR)/%_pb2.py : $(PROTO_SRC_DIR)/%.proto \
+ $(PY_PROTO_INIT) | $(PY_PROTO_BUILD_DIR)
+ @ echo PROTOC \(python\) $<
+ $(Q)protoc --proto_path=$(PROTO_SRC_DIR) --python_out=$(PY_PROTO_BUILD_DIR) $<
+
+$(PY_PROTO_INIT): | $(PY_PROTO_BUILD_DIR)
+ touch $(PY_PROTO_INIT)
+
+clean:
+ @- $(RM) -rf $(ALL_BUILD_DIRS)
+ @- $(RM) -rf $(OTHER_BUILD_DIR)
+ @- $(RM) -rf $(BUILD_DIR_LINK)
+ @- $(RM) -rf $(DISTRIBUTE_DIR)
+ @- $(RM) $(PY$(PROJECT)_SO)
+ @- $(RM) $(MAT$(PROJECT)_SO)
+
+supercleanfiles:
+ $(eval SUPERCLEAN_FILES := $(strip \
+ $(foreach ext,$(SUPERCLEAN_EXTS), $(shell find . -name '*$(ext)' \
+ -not -path './data/*'))))
+
+supercleanlist: supercleanfiles
+ @ \
+ if [ -z "$(SUPERCLEAN_FILES)" ]; then \
+ echo "No generated files found."; \
+ else \
+ echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
+ fi
+
+superclean: clean supercleanfiles
+ @ \
+ if [ -z "$(SUPERCLEAN_FILES)" ]; then \
+ echo "No generated files found."; \
+ else \
+ echo "Deleting the following generated files:"; \
+ echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
+ $(RM) $(SUPERCLEAN_FILES); \
+ fi
+
+$(DIST_ALIASES): $(DISTRIBUTE_DIR)
+
+$(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
+ # add include
+ cp -r include $(DISTRIBUTE_DIR)/
+ mkdir -p $(DISTRIBUTE_DIR)/include/caffe/proto
+ cp $(PROTO_GEN_HEADER_SRCS) $(DISTRIBUTE_DIR)/include/caffe/proto
+ # add tool and example binaries
+ cp $(TOOL_BINS) $(DISTRIBUTE_DIR)/bin
+ cp $(EXAMPLE_BINS) $(DISTRIBUTE_DIR)/bin
+ # add libraries
+ cp $(STATIC_NAME) $(DISTRIBUTE_DIR)/lib
+ cp $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib
+ # add python - copying the whole tree is not the standard way, but it works
+ cp -r python $(DISTRIBUTE_DIR)/python
+
+-include $(DEPS)
diff --git a/caffe-crfrnn/Makefile.config b/caffe-crfrnn/Makefile.config
new file mode 100755
index 00000000..e38918e3
--- /dev/null
+++ b/caffe-crfrnn/Makefile.config
@@ -0,0 +1,78 @@
+## Refer to http://caffe.berkeleyvision.org/installation.html
+# Contributions simplifying and improving our build system are welcome!
+
+# cuDNN acceleration switch (uncomment to build with cuDNN).
+USE_CUDNN := 1
+
+# CPU-only switch (uncomment to build without GPU support).
+# CPU_ONLY := 1
+
+# To customize your choice of compiler, uncomment and set the following.
+# N.B. the default for Linux is g++ and the default for OSX is clang++
+# CUSTOM_CXX := g++
+
+# CUDA directory contains bin/ and lib/ directories that we need.
+CUDA_DIR := /usr/local/cuda
+# On Ubuntu 14.04, if cuda tools are installed via
+# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
+# CUDA_DIR := /usr
+
+# CUDA architecture setting: going with all of them (up to CUDA 5.5 compatible).
+# For the latest architecture, you need to install CUDA >= 6.0 and uncomment
+# the *_50 lines below.
+CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
+ -gencode arch=compute_35,code=sm_35
+ #-gencode arch=compute_50,code=sm_50 \
+ #-gencode arch=compute_50,code=compute_50
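+# (In each -gencode pair, arch=compute_XX names the virtual PTX ISA the code is
+# compiled against and code=sm_XX the real GPU binary that is emitted.)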
+
+# BLAS choice:
+# atlas for ATLAS (default)
+# mkl for MKL
+# open for OpenBLAS
+BLAS := mkl
+# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
+# Leave commented to accept the defaults for your choice of BLAS
+# (which should work)!
+BLAS_INCLUDE := /home/bittnt/intel/compilers_and_libraries_2016.0.109/linux/mkl/include
+BLAS_LIB := /home/bittnt/intel/compilers_and_libraries_2016.0.109/linux/mkl/lib/intel64
+
+# This is required only if you will compile the matlab interface.
+# MATLAB directory should contain the mex binary in /bin.
+MATLAB_DIR := /usr/local/MATLAB/R2015a
+# MATLAB_DIR := /Applications/MATLAB_R2012b.app
+
+# NOTE: this is required only if you will compile the python interface.
+# We need to be able to find Python.h and numpy/arrayobject.h.
+#PYTHON_INCLUDE := /usr/include/python2.7 \
+# /usr/lib/python2.7/dist-packages/numpy/core/include
+# Anaconda Python distribution is quite popular. Include path:
+# Verify the anaconda location; sometimes it's in the root home directory.
+ANACONDA_HOME := $(HOME)/anaconda
+PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
+ $(ANACONDA_HOME)/include/python2.7 \
+ $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \
+
+# We need to be able to find libpythonX.X.so or .dylib.
+#PYTHON_LIB := /usr/lib
+PYTHON_LIB := $(ANACONDA_HOME)/lib
+
+# Whatever else you find you need goes here.
+INCLUDE_DIRS := $(PYTHON_INCLUDE) /home/bittnt/common/include /home/bittnt/crf-rnn/cuda/include /usr/local/cuda/include
+LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /home/bittnt/common/lib /home/bittnt/crf-rnn/cuda/lib64 /usr/local/cuda/lib
+#LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /home/bittnt/common/lib /home/bittnt/Documents/ConvMean/caffe-fcn-sadeep/cudnnv2rc3
+
+# Uncomment to use `pkg-config` to specify OpenCV library paths.
+# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
+# USE_PKG_CONFIG := 1
+
+BUILD_DIR := build
+DISTRIBUTE_DIR := distribute
+
+# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
+#DEBUG := 1
+
+# The ID of the GPU that 'make runtest' will use to run unit tests.
+TEST_GPUID := 0
+
+# enable pretty build (comment to see full commands)
+Q ?= @
diff --git a/caffe-crfrnn/Makefile.config.example b/caffe-crfrnn/Makefile.config.example
new file mode 100644
index 00000000..0c996038
--- /dev/null
+++ b/caffe-crfrnn/Makefile.config.example
@@ -0,0 +1,79 @@
+## Refer to http://caffe.berkeleyvision.org/installation.html
+# Contributions simplifying and improving our build system are welcome!
+
+# cuDNN acceleration switch (uncomment to build with cuDNN).
+# USE_CUDNN := 1
+
+# CPU-only switch (uncomment to build without GPU support).
+# CPU_ONLY := 1
+
+# To customize your choice of compiler, uncomment and set the following.
+# N.B. the default for Linux is g++ and the default for OSX is clang++
+# CUSTOM_CXX := g++
+
+# CUDA directory contains bin/ and lib/ directories that we need.
+CUDA_DIR := /usr/local/cuda
+# On Ubuntu 14.04, if cuda tools are installed via
+# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
+# CUDA_DIR := /usr
+
+# CUDA architecture setting: going with all of them (up to CUDA 5.5 compatible).
+# For the latest architecture, you need to install CUDA >= 6.0 and uncomment
+# the *_50 lines below.
+CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
+ -gencode arch=compute_20,code=sm_21 \
+ -gencode arch=compute_30,code=sm_30 \
+ -gencode arch=compute_35,code=sm_35 \
+ #-gencode arch=compute_50,code=sm_50 \
+ #-gencode arch=compute_50,code=compute_50
+
+# BLAS choice:
+# atlas for ATLAS (default)
+# mkl for MKL
+# open for OpenBLAS
+BLAS := atlas
+# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
+# Leave commented to accept the defaults for your choice of BLAS
+# (which should work)!
+# BLAS_INCLUDE := /path/to/your/blas
+# BLAS_LIB := /path/to/your/blas
+
+# This is required only if you will compile the matlab interface.
+# MATLAB directory should contain the mex binary in /bin.
+# MATLAB_DIR := /usr/local
+# MATLAB_DIR := /Applications/MATLAB_R2012b.app
+
+# NOTE: this is required only if you will compile the python interface.
+# We need to be able to find Python.h and numpy/arrayobject.h.
+PYTHON_INCLUDE := /usr/include/python2.7 \
+ /usr/lib/python2.7/dist-packages/numpy/core/include
+# Anaconda Python distribution is quite popular. Include path:
+# Verify the anaconda location; sometimes it's in the root home directory.
+# ANACONDA_HOME := $(HOME)/anaconda
+# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
+ # $(ANACONDA_HOME)/include/python2.7 \
+ # $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \
+
+# We need to be able to find libpythonX.X.so or .dylib.
+PYTHON_LIB := /usr/lib
+# PYTHON_LIB := $(ANACONDA_HOME)/lib
+
+# Whatever else you find you need goes here.
+INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
+LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
+
+# Uncomment to use `pkg-config` to specify OpenCV library paths.
+# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
+# USE_PKG_CONFIG := 1
+
+BUILD_DIR := build
+DISTRIBUTE_DIR := distribute
+
+# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
+# DEBUG := 1
+
+# The ID of the GPU that 'make runtest' will use to run unit tests.
+TEST_GPUID := 0
+
+# enable pretty build (comment to see full commands)
+Q ?= @
diff --git a/caffe-crfrnn/README.md b/caffe-crfrnn/README.md
new file mode 100644
index 00000000..5b5cb1f4
--- /dev/null
+++ b/caffe-crfrnn/README.md
@@ -0,0 +1,5 @@
+This is Caffe with several unmerged PRs.
+
+Everything here is subject to change, including the history of this branch.
+
+See `future.sh` for details.
diff --git a/caffe-crfrnn/caffe.cloc b/caffe-crfrnn/caffe.cloc
new file mode 100644
index 00000000..a36ab619
--- /dev/null
+++ b/caffe-crfrnn/caffe.cloc
@@ -0,0 +1,53 @@
+Bourne Shell
+ filter remove_matches ^\s*#
+ filter remove_inline #.*$
+ extension sh
+ script_exe sh
+C
+ filter remove_matches ^\s*//
+ filter call_regexp_common C
+ filter remove_inline //.*$
+ extension c
+ extension ec
+ extension pgc
+C++
+ filter remove_matches ^\s*//
+ filter remove_inline //.*$
+ filter call_regexp_common C
+ extension C
+ extension cc
+ extension cpp
+ extension cxx
+ extension pcc
+C/C++ Header
+ filter remove_matches ^\s*//
+ filter call_regexp_common C
+ filter remove_inline //.*$
+ extension H
+ extension h
+ extension hh
+ extension hpp
+CUDA
+ filter remove_matches ^\s*//
+ filter remove_inline //.*$
+ filter call_regexp_common C
+ extension cu
+Python
+ filter remove_matches ^\s*#
+ filter docstring_to_C
+ filter call_regexp_common C
+ filter remove_inline #.*$
+ extension py
+make
+ filter remove_matches ^\s*#
+ filter remove_inline #.*$
+ extension Gnumakefile
+ extension Makefile
+ extension am
+ extension gnumakefile
+ extension makefile
+ filename Gnumakefile
+ filename Makefile
+ filename gnumakefile
+ filename makefile
+ script_exe make
diff --git a/caffe-crfrnn/cmake/ConfigGen.cmake b/caffe-crfrnn/cmake/ConfigGen.cmake
new file mode 100644
index 00000000..566d6ca0
--- /dev/null
+++ b/caffe-crfrnn/cmake/ConfigGen.cmake
@@ -0,0 +1,104 @@
+
+################################################################################################
+# Helper function to fetch caffe includes which will be passed to dependent projects
+# Usage:
+# caffe_get_current_includes(<includes_list_variable>)
+function(caffe_get_current_includes includes_variable)
+ get_property(current_includes DIRECTORY PROPERTY INCLUDE_DIRECTORIES)
+ caffe_convert_absolute_paths(current_includes)
+
+ # remove at most one ${PROJECT_BINARY_DIR} include added for caffe_config.h
+ list(FIND current_includes ${PROJECT_BINARY_DIR} __index)
+ list(REMOVE_AT current_includes ${__index})
+
+ # removing numpy includes (since not required for client libs)
+ set(__toremove "")
+ foreach(__i ${current_includes})
+ if(${__i} MATCHES "python")
+ list(APPEND __toremove ${__i})
+ endif()
+ endforeach()
+ if(__toremove)
+ list(REMOVE_ITEM current_includes ${__toremove})
+ endif()
+
+ caffe_list_unique(current_includes)
+ set(${includes_variable} ${current_includes} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Helper function to get all list items that begin with given prefix
+# Usage:
+# caffe_get_items_with_prefix(<prefix> <list_variable> <output_variable>)
+function(caffe_get_items_with_prefix prefix list_variable output_variable)
+ set(__result "")
+ foreach(__e ${${list_variable}})
+ if(__e MATCHES "^${prefix}.*")
+ list(APPEND __result ${__e})
+ endif()
+ endforeach()
+ set(${output_variable} ${__result} PARENT_SCOPE)
+endfunction()
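+# For example, caffe_get_items_with_prefix(/usr SOME_LIST __out) would copy
+# "/usr/include" into __out but skip "/opt/include".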
+
+################################################################################################
+# Function for generating Caffe build- and install-tree export config files
+# Usage:
+# caffe_generate_export_configs()
+function(caffe_generate_export_configs)
+ set(install_cmake_suffix "share/Caffe")
+
+ # ---[ Configure build-tree CaffeConfig.cmake file ]---
+ caffe_get_current_includes(Caffe_INCLUDE_DIRS)
+
+ set(Caffe_DEFINITIONS "")
+ if(NOT HAVE_CUDA)
+ set(HAVE_CUDA FALSE)
+ list(APPEND Caffe_DEFINITIONS -DCPU_ONLY)
+ endif()
+
+ if(NOT HAVE_CUDNN)
+ set(HAVE_CUDNN FALSE)
+ else()
+ list(APPEND Caffe_DEFINITIONS -DUSE_CUDNN)
+ endif()
+
+ if(BLAS STREQUAL "MKL" OR BLAS STREQUAL "mkl")
+ list(APPEND Caffe_DEFINITIONS -DUSE_MKL)
+ endif()
+
+ configure_file("cmake/Templates/CaffeConfig.cmake.in" "${PROJECT_BINARY_DIR}/CaffeConfig.cmake" @ONLY)
+
+ # Add targets to the build-tree export set
+ export(TARGETS caffe proto FILE "${PROJECT_BINARY_DIR}/CaffeTargets.cmake")
+ export(PACKAGE Caffe)
+
+ # ---[ Configure install-tree CaffeConfig.cmake file ]---
+
+ # remove source and build dir includes
+ caffe_get_items_with_prefix(${PROJECT_SOURCE_DIR} Caffe_INCLUDE_DIRS __insource)
+ caffe_get_items_with_prefix(${PROJECT_BINARY_DIR} Caffe_INCLUDE_DIRS __inbinary)
+ list(REMOVE_ITEM Caffe_INCLUDE_DIRS ${__insource} ${__inbinary})
+
+ # add `install` include folder
+ set(lines
+ "get_filename_component(__caffe_include \"\${Caffe_CMAKE_DIR}/../../include\" ABSOLUTE)\n"
+ "list(APPEND Caffe_INCLUDE_DIRS \${__caffe_include})\n"
+ "unset(__caffe_include)\n")
+ string(REPLACE ";" "" Caffe_INSTALL_INCLUDE_DIR_APPEND_COMMAND ${lines})
+
+ configure_file("cmake/Templates/CaffeConfig.cmake.in" "${PROJECT_BINARY_DIR}/cmake/CaffeConfig.cmake" @ONLY)
+
+ # Install the CaffeConfig.cmake and export set to use with install-tree
+ install(FILES "${PROJECT_BINARY_DIR}/cmake/CaffeConfig.cmake" DESTINATION ${install_cmake_suffix})
+ install(EXPORT CaffeTargets DESTINATION ${install_cmake_suffix})
+
+ # ---[ Configure and install version file ]---
+
+ # TODO: The lines below are commented out because Caffe doesn't declare its version in headers.
+ # When the declarations are added, modify `caffe_extract_caffe_version()` macro and uncomment
+
+ # configure_file(cmake/Templates/CaffeConfigVersion.cmake.in "${PROJECT_BINARY_DIR}/CaffeConfigVersion.cmake" @ONLY)
+ # install(FILES "${PROJECT_BINARY_DIR}/CaffeConfigVersion.cmake" DESTINATION ${install_cmake_suffix})
+endfunction()
+
+
diff --git a/caffe-crfrnn/cmake/Cuda.cmake b/caffe-crfrnn/cmake/Cuda.cmake
new file mode 100644
index 00000000..ff58d31c
--- /dev/null
+++ b/caffe-crfrnn/cmake/Cuda.cmake
@@ -0,0 +1,254 @@
+if(CPU_ONLY)
+ return()
+endif()
+
+# Known NVIDIA GPU architectures Caffe can be compiled for.
+# This list will be used for CUDA_ARCH_NAME = All option
+set(Caffe_known_gpu_archs "20 21(20) 30 35 50")
+
+################################################################################################
+# A function for automatic detection of GPUs installed (if autodetection is enabled)
+# Usage:
+# caffe_detect_installed_gpus(out_variable)
+function(caffe_detect_installed_gpus out_variable)
+ if(NOT CUDA_gpu_detect_output)
+ set(__cufile ${PROJECT_BINARY_DIR}/detect_cuda_archs.cu)
+
+ file(WRITE ${__cufile} ""
+ "#include \n"
+ "int main()\n"
+ "{\n"
+ " int count = 0;\n"
+ " if (cudaSuccess != cudaGetDeviceCount(&count)) return -1;\n"
+ " if (count == 0) return -1;\n"
+ " for (int device = 0; device < count; ++device)\n"
+ " {\n"
+ " cudaDeviceProp prop;\n"
+ " if (cudaSuccess == cudaGetDeviceProperties(&prop, device))\n"
+ " std::printf(\"%d.%d \", prop.major, prop.minor);\n"
+ " }\n"
+ " return 0;\n"
+ "}\n")
+
+ execute_process(COMMAND "${CUDA_NVCC_EXECUTABLE}" "--run" "${__cufile}"
+ WORKING_DIRECTORY "${PROJECT_BINARY_DIR}/CMakeFiles/"
+ RESULT_VARIABLE __nvcc_res OUTPUT_VARIABLE __nvcc_out
+ ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
+
+ if(__nvcc_res EQUAL 0)
+ string(REPLACE "2.1" "2.1(2.0)" __nvcc_out "${__nvcc_out}")
+ set(CUDA_gpu_detect_output ${__nvcc_out} CACHE INTERNAL "Returned GPU architectures from caffe_detect_gpus tool" FORCE)
+ endif()
+ endif()
+
+ if(NOT CUDA_gpu_detect_output)
+ message(STATUS "Automatic GPU detection failed. Building for all known architectures.")
+ set(${out_variable} ${Caffe_known_gpu_archs} PARENT_SCOPE)
+ else()
+ set(${out_variable} ${CUDA_gpu_detect_output} PARENT_SCOPE)
+ endif()
+endfunction()
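+# The probe above compiles and runs a one-file CUDA program that prints each
+# device's compute capability (e.g. "3.5 5.0" for two installed GPUs); if nvcc
+# fails, detection falls back to building for all known architectures.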
+
+
+################################################################################################
+# Function for selecting GPU arch flags for nvcc based on CUDA_ARCH_NAME
+# Usage:
+# caffe_select_nvcc_arch_flags(out_variable)
+function(caffe_select_nvcc_arch_flags out_variable)
+ # List of arch names
+ set(__archs_names "Fermi" "Kepler" "Maxwell" "All" "Manual")
+ set(__archs_name_default "All")
+ if(NOT CMAKE_CROSSCOMPILING)
+ list(APPEND __archs_names "Auto")
+ set(__archs_name_default "Auto")
+ endif()
+
+ # set CUDA_ARCH_NAME strings (so it will be seen as a dropdown in the CMake GUI)
+ set(CUDA_ARCH_NAME ${__archs_name_default} CACHE STRING "Select target NVIDIA GPU architecture.")
+ set_property( CACHE CUDA_ARCH_NAME PROPERTY STRINGS "" ${__archs_names} )
+ mark_as_advanced(CUDA_ARCH_NAME)
+
+ # verify CUDA_ARCH_NAME value
+ if(NOT ";${__archs_names};" MATCHES ";${CUDA_ARCH_NAME};")
+ string(REPLACE ";" ", " __archs_names "${__archs_names}")
+ message(FATAL_ERROR "Only ${__archs_names} architeture names are supported.")
+ endif()
+
+ if(${CUDA_ARCH_NAME} STREQUAL "Manual")
+ set(CUDA_ARCH_BIN ${Caffe_known_gpu_archs} CACHE STRING "Specify 'real' GPU architectures to build binaries for, BIN(PTX) format is supported")
+ set(CUDA_ARCH_PTX "50" CACHE STRING "Specify 'virtual' PTX architectures to build PTX intermediate code for")
+ mark_as_advanced(CUDA_ARCH_BIN CUDA_ARCH_PTX)
+ else()
+ unset(CUDA_ARCH_BIN CACHE)
+ unset(CUDA_ARCH_PTX CACHE)
+ endif()
+
+ if(${CUDA_ARCH_NAME} STREQUAL "Fermi")
+ set(__cuda_arch_bin "20 21(20)")
+ elseif(${CUDA_ARCH_NAME} STREQUAL "Kepler")
+ set(__cuda_arch_bin "30 35")
+ elseif(${CUDA_ARCH_NAME} STREQUAL "Maxwell")
+ set(__cuda_arch_bin "50")
+ elseif(${CUDA_ARCH_NAME} STREQUAL "All")
+ set(__cuda_arch_bin ${Caffe_known_gpu_archs})
+ elseif(${CUDA_ARCH_NAME} STREQUAL "Auto")
+ caffe_detect_installed_gpus(__cuda_arch_bin)
+ else() # (${CUDA_ARCH_NAME} STREQUAL "Manual")
+ set(__cuda_arch_bin ${CUDA_ARCH_BIN})
+ endif()
+
+ # remove dots and convert to lists
+ string(REGEX REPLACE "\\." "" __cuda_arch_bin "${__cuda_arch_bin}")
+ string(REGEX REPLACE "\\." "" __cuda_arch_ptx "${CUDA_ARCH_PTX}")
+ string(REGEX MATCHALL "[0-9()]+" __cuda_arch_bin "${__cuda_arch_bin}")
+ string(REGEX MATCHALL "[0-9]+" __cuda_arch_ptx "${__cuda_arch_ptx}")
+ caffe_list_unique(__cuda_arch_bin __cuda_arch_ptx)
+
+ set(__nvcc_flags "")
+ set(__nvcc_archs_readable "")
+
+ # Tell NVCC to add binaries for the specified GPUs
+ foreach(__arch ${__cuda_arch_bin})
+ if(__arch MATCHES "([0-9]+)\\(([0-9]+)\\)")
+ # User explicitly specified PTX for the concrete BIN
+ list(APPEND __nvcc_flags -gencode arch=compute_${CMAKE_MATCH_2},code=sm_${CMAKE_MATCH_1})
+ list(APPEND __nvcc_archs_readable sm_${CMAKE_MATCH_1})
+ else()
+ # User didn't explicitly specify PTX for the concrete BIN, we assume PTX=BIN
+ list(APPEND __nvcc_flags -gencode arch=compute_${__arch},code=sm_${__arch})
+ list(APPEND __nvcc_archs_readable sm_${__arch})
+ endif()
+ endforeach()
+
+ # Tell NVCC to add PTX intermediate code for the specified architectures
+ foreach(__arch ${__cuda_arch_ptx})
+ list(APPEND __nvcc_flags -gencode arch=compute_${__arch},code=compute_${__arch})
+ list(APPEND __nvcc_archs_readable compute_${__arch})
+ endforeach()
+
+ string(REPLACE ";" " " __nvcc_archs_readable "${__nvcc_archs_readable}")
+ set(${out_variable} ${__nvcc_flags} PARENT_SCOPE)
+ set(${out_variable}_readable ${__nvcc_archs_readable} PARENT_SCOPE)
+endfunction()
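+# Example: the "21(20)" entry in Caffe_known_gpu_archs becomes
+# '-gencode arch=compute_20,code=sm_21' (PTX for 2.0, real code for sm_21),
+# while a plain "30" becomes '-gencode arch=compute_30,code=sm_30'.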
+
+################################################################################################
+# Short command for CUDA compilation
+# Usage:
+# caffe_cuda_compile(<objlist_variable> <cuda_files>)
+macro(caffe_cuda_compile objlist_variable)
+ foreach(var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG)
+ set(${var}_backup_in_cuda_compile_ "${${var}}")
+
+ # we remove /EHa as it generates warnings under windows
+ string(REPLACE "/EHa" "" ${var} "${${var}}")
+
+ endforeach()
+
+ if(UNIX OR APPLE)
+ list(APPEND CUDA_NVCC_FLAGS -Xcompiler -fPIC)
+ endif()
+
+ if(APPLE)
+ list(APPEND CUDA_NVCC_FLAGS -Xcompiler -Wno-unused-function)
+ endif()
+
+ cuda_compile(cuda_objcs ${ARGN})
+
+ foreach(var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG)
+ set(${var} "${${var}_backup_in_cuda_compile_}")
+ unset(${var}_backup_in_cuda_compile_)
+ endforeach()
+
+ set(${objlist_variable} ${cuda_objcs})
+endmacro()
+
+################################################################################################
+# Short command for cuDNN detection. We believe it will soon be part of the CUDA
+# toolkit distribution, which is why this is a plain macro rather than a FindcuDNN.cmake module.
+# Usage:
+# detect_cuDNN()
+function(detect_cuDNN)
+ set(CUDNN_ROOT "" CACHE PATH "CUDNN root folder")
+
+ find_path(CUDNN_INCLUDE cudnn.h
+ PATHS ${CUDNN_ROOT} $ENV{CUDNN_ROOT} ${CUDA_TOOLKIT_INCLUDE}
+ DOC "Path to cuDNN include directory." )
+
+ get_filename_component(__libpath_hist ${CUDA_CUDART_LIBRARY} PATH)
+ find_library(CUDNN_LIBRARY NAMES libcudnn.so # libcudnn_static.a
+ PATHS ${CUDNN_ROOT} $ENV{CUDNN_ROOT} ${CUDNN_INCLUDE} ${__libpath_hist}
+ DOC "Path to cuDNN library.")
+
+ if(CUDNN_INCLUDE AND CUDNN_LIBRARY)
+ set(HAVE_CUDNN TRUE PARENT_SCOPE)
+ set(CUDNN_FOUND TRUE PARENT_SCOPE)
+
+ mark_as_advanced(CUDNN_INCLUDE CUDNN_LIBRARY CUDNN_ROOT)
+ message(STATUS "Found cuDNN (include: ${CUDNN_INCLUDE}, library: ${CUDNN_LIBRARY})")
+ endif()
+endfunction()
+
+
+################################################################################################
+### Non macro section
+################################################################################################
+
+find_package(CUDA 5.5 QUIET)
+find_cuda_helper_libs(curand) # for cmake 2.8.7 compatibility, which doesn't search for curand
+
+if(NOT CUDA_FOUND)
+ return()
+endif()
+
+set(HAVE_CUDA TRUE)
+message(STATUS "CUDA detected: " ${CUDA_VERSION})
+include_directories(SYSTEM ${CUDA_INCLUDE_DIRS})
+list(APPEND Caffe_LINKER_LIBS ${CUDA_CUDART_LIBRARY}
+ ${CUDA_curand_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})
+
+# cudnn detection
+if(USE_CUDNN)
+ detect_cuDNN()
+ if(HAVE_CUDNN)
+ add_definitions(-DUSE_CUDNN)
+ include_directories(SYSTEM ${CUDNN_INCLUDE})
+ list(APPEND Caffe_LINKER_LIBS ${CUDNN_LIBRARY})
+ endif()
+endif()
+
+# setting nvcc arch flags
+caffe_select_nvcc_arch_flags(NVCC_FLAGS_EXTRA)
+list(APPEND CUDA_NVCC_FLAGS ${NVCC_FLAGS_EXTRA})
+message(STATUS "Added CUDA NVCC flags for: ${NVCC_FLAGS_EXTRA_readable}")
+
+# Boost 1.55 workaround, see https://svn.boost.org/trac/boost/ticket/9392 or
+# https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/src/picongpu/CMakeLists.txt
+if(Boost_VERSION EQUAL 105500)
+ message(STATUS "Cuda + Boost 1.55: Applying noinline work around")
+ # avoid warning for CMake >= 2.8.12
+ set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} \"-DBOOST_NOINLINE=__attribute__((noinline))\" ")
+endif()
+
+# disable some nvcc diagnostics that appear in boost, glog, gflags, opencv, etc.
+foreach(diag cc_clobber_ignored integer_sign_change useless_using_declaration set_but_not_used)
+ list(APPEND CUDA_NVCC_FLAGS -Xcudafe --diag_suppress=${diag})
+endforeach()
+
+# setting default testing device
+if(NOT CUDA_TEST_DEVICE)
+ set(CUDA_TEST_DEVICE -1)
+endif()
+
+mark_as_advanced(CUDA_BUILD_CUBIN CUDA_BUILD_EMULATION CUDA_VERBOSE_BUILD)
+mark_as_advanced(CUDA_SDK_ROOT_DIR CUDA_SEPARABLE_COMPILATION)
+
+# Handle clang/libc++ issue
+if(APPLE)
+ caffe_detect_darwin_version(OSX_VERSION)
+
+ # OS X 10.9 and higher use clang/libc++ by default, which is incompatible with old CUDA toolkits
+ if(OSX_VERSION VERSION_GREATER 10.8)
+ # enabled by default if and only if CUDA version is less than 7.0
+ caffe_option(USE_libstdcpp "Use libstdc++ instead of libc++" (CUDA_VERSION VERSION_LESS 7.0))
+ endif()
+endif()
diff --git a/caffe-crfrnn/cmake/Dependencies.cmake b/caffe-crfrnn/cmake/Dependencies.cmake
new file mode 100644
index 00000000..7c86dd55
--- /dev/null
+++ b/caffe-crfrnn/cmake/Dependencies.cmake
@@ -0,0 +1,158 @@
+# This list is required for static linking and exported to CaffeConfig.cmake
+set(Caffe_LINKER_LIBS "")
+
+# ---[ Boost
+find_package(Boost 1.46 REQUIRED COMPONENTS system thread)
+include_directories(SYSTEM ${Boost_INCLUDE_DIR})
+list(APPEND Caffe_LINKER_LIBS ${Boost_LIBRARIES})
+
+# ---[ Threads
+find_package(Threads REQUIRED)
+list(APPEND Caffe_LINKER_LIBS ${CMAKE_THREAD_LIBS_INIT})
+
+# ---[ Google-glog
+include("cmake/External/glog.cmake")
+include_directories(SYSTEM ${GLOG_INCLUDE_DIRS})
+list(APPEND Caffe_LINKER_LIBS ${GLOG_LIBRARIES})
+
+# ---[ Google-gflags
+include("cmake/External/gflags.cmake")
+include_directories(SYSTEM ${GFLAGS_INCLUDE_DIRS})
+list(APPEND Caffe_LINKER_LIBS ${GFLAGS_LIBRARIES})
+
+# ---[ Google-protobuf
+include(cmake/ProtoBuf.cmake)
+
+# ---[ HDF5
+find_package(HDF5 COMPONENTS HL REQUIRED)
+include_directories(SYSTEM ${HDF5_INCLUDE_DIRS} ${HDF5_HL_INCLUDE_DIR})
+list(APPEND Caffe_LINKER_LIBS ${HDF5_LIBRARIES})
+
+# ---[ LMDB
+find_package(LMDB REQUIRED)
+include_directories(SYSTEM ${LMDB_INCLUDE_DIR})
+list(APPEND Caffe_LINKER_LIBS ${LMDB_LIBRARIES})
+
+# ---[ LevelDB
+find_package(LevelDB REQUIRED)
+include_directories(SYSTEM ${LevelDB_INCLUDE})
+list(APPEND Caffe_LINKER_LIBS ${LevelDB_LIBRARIES})
+
+# ---[ Snappy
+find_package(Snappy REQUIRED)
+include_directories(SYSTEM ${Snappy_INCLUDE_DIR})
+list(APPEND Caffe_LINKER_LIBS ${Snappy_LIBRARIES})
+
+# ---[ CUDA
+include(cmake/Cuda.cmake)
+if(NOT HAVE_CUDA)
+ if(CPU_ONLY)
+ message("-- CUDA is disabled. Building without it...")
+ else()
+ message("-- CUDA is not detected by cmake. Building without it...")
+ endif()
+
+ # TODO: remove this non-cross-platform define in the future; use caffe_config.h instead.
+ add_definitions(-DCPU_ONLY)
+endif()
+
+# ---[ OpenCV
+find_package(OpenCV QUIET COMPONENTS core highgui imgproc imgcodecs)
+if(NOT OpenCV_FOUND) # if not OpenCV 3.x, then imgcodecs are not found
+ find_package(OpenCV REQUIRED COMPONENTS core highgui imgproc)
+endif()
+include_directories(SYSTEM ${OpenCV_INCLUDE_DIRS})
+list(APPEND Caffe_LINKER_LIBS ${OpenCV_LIBS})
+message(STATUS "OpenCV found (${OpenCV_CONFIG_PATH})")
+
+# ---[ BLAS
+if(NOT APPLE)
+ set(BLAS "Atlas" CACHE STRING "Selected BLAS library")
+ set_property(CACHE BLAS PROPERTY STRINGS "Atlas;Open;MKL")
+
+ if(BLAS STREQUAL "Atlas" OR BLAS STREQUAL "atlas")
+ find_package(Atlas REQUIRED)
+ include_directories(SYSTEM ${Atlas_INCLUDE_DIR})
+ list(APPEND Caffe_LINKER_LIBS ${Atlas_LIBRARIES})
+ elseif(BLAS STREQUAL "Open" OR BLAS STREQUAL "open")
+ find_package(OpenBLAS REQUIRED)
+ include_directories(SYSTEM ${OpenBLAS_INCLUDE_DIR})
+ list(APPEND Caffe_LINKER_LIBS ${OpenBLAS_LIB})
+ elseif(BLAS STREQUAL "MKL" OR BLAS STREQUAL "mkl")
+ find_package(MKL REQUIRED)
+ include_directories(SYSTEM ${MKL_INCLUDE_DIR})
+ list(APPEND Caffe_LINKER_LIBS ${MKL_LIBRARIES})
+ add_definitions(-DUSE_MKL)
+ endif()
+elseif(APPLE)
+ find_package(vecLib REQUIRED)
+ include_directories(SYSTEM ${vecLib_INCLUDE_DIR})
+ list(APPEND Caffe_LINKER_LIBS ${vecLib_LINKER_LIBS})
+endif()
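+# Select at configure time with e.g. 'cmake -DBLAS=Open ..' or '-DBLAS=MKL ..';
+# Apple builds always use the vecLib framework instead.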
+
+# ---[ Python
+if(BUILD_python)
+ if(NOT "${python_version}" VERSION_LESS "3.0.0")
+ # use python3
+ find_package(PythonInterp 3.0)
+ find_package(PythonLibs 3.0)
+ find_package(NumPy 1.7.1)
+ # Find the matching boost python implementation
+ set(version ${PYTHONLIBS_VERSION_STRING})
+
+ STRING( REPLACE "." "" boost_py_version ${version} )
+ find_package(Boost 1.46 COMPONENTS "python-py${boost_py_version}")
+ set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})
+
+ while(NOT "${version}" STREQUAL "" AND NOT Boost_PYTHON_FOUND)
+ STRING( REGEX REPLACE "([0-9.]+).[0-9]+" "\\1" version ${version} )
+
+ STRING( REPLACE "." "" boost_py_version ${version} )
+ find_package(Boost 1.46 COMPONENTS "python-py${boost_py_version}")
+ set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})
+
+ STRING( REGEX MATCHALL "([0-9.]+).[0-9]+" has_more_version ${version} )
+ if("${has_more_version}" STREQUAL "")
+ break()
+ endif()
+ endwhile()
+ if(NOT Boost_PYTHON_FOUND)
+ find_package(Boost 1.46 COMPONENTS python)
+ endif()
+ else()
+ # disable Python 3 search
+ find_package(PythonInterp 2.7)
+ find_package(PythonLibs 2.7)
+ find_package(NumPy 1.7.1)
+ find_package(Boost 1.46 COMPONENTS python)
+ endif()
+ if(PYTHONLIBS_FOUND AND NUMPY_FOUND AND Boost_PYTHON_FOUND)
+ set(HAVE_PYTHON TRUE)
+ if(BUILD_python_layer)
+ add_definitions(-DWITH_PYTHON_LAYER)
+ include_directories(SYSTEM ${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIR} ${Boost_INCLUDE_DIRS})
+ list(APPEND Caffe_LINKER_LIBS ${PYTHON_LIBRARIES} ${Boost_LIBRARIES})
+ endif()
+ endif()
+endif()
+
+# ---[ Matlab
+if(BUILD_matlab)
+ find_package(MatlabMex)
+ if(MATLABMEX_FOUND)
+ set(HAVE_MATLAB TRUE)
+ endif()
+
+ # sudo apt-get install liboctave-dev
+ find_program(Octave_compiler NAMES mkoctfile DOC "Octave C++ compiler")
+
+ if(HAVE_MATLAB AND Octave_compiler)
+ set(Matlab_build_mex_using "Matlab" CACHE STRING "Select Matlab or Octave if both detected")
+ set_property(CACHE Matlab_build_mex_using PROPERTY STRINGS "Matlab;Octave")
+ endif()
+endif()
+
+# ---[ Doxygen
+if(BUILD_docs)
+ find_package(Doxygen)
+endif()
diff --git a/caffe-crfrnn/cmake/External/gflags.cmake b/caffe-crfrnn/cmake/External/gflags.cmake
new file mode 100644
index 00000000..e3dba04f
--- /dev/null
+++ b/caffe-crfrnn/cmake/External/gflags.cmake
@@ -0,0 +1,56 @@
+if (NOT __GFLAGS_INCLUDED) # guard against multiple includes
+ set(__GFLAGS_INCLUDED TRUE)
+
+ # use the system-wide gflags if present
+ find_package(GFlags)
+ if (GFLAGS_FOUND)
+ set(GFLAGS_EXTERNAL FALSE)
+ else()
+ # gflags will use pthreads if it's available in the system, so we must link with it
+ find_package(Threads)
+
+ # build directory
+ set(gflags_PREFIX ${CMAKE_BINARY_DIR}/external/gflags-prefix)
+ # install directory
+ set(gflags_INSTALL ${CMAKE_BINARY_DIR}/external/gflags-install)
+
+ # we build gflags statically, but want to link it into the caffe shared library
+ # this requires position-independent code
+ if (UNIX)
+ set(GFLAGS_EXTRA_COMPILER_FLAGS "-fPIC")
+ endif()
+
+ set(GFLAGS_CXX_FLAGS ${CMAKE_CXX_FLAGS} ${GFLAGS_EXTRA_COMPILER_FLAGS})
+ set(GFLAGS_C_FLAGS ${CMAKE_C_FLAGS} ${GFLAGS_EXTRA_COMPILER_FLAGS})
+
+ ExternalProject_Add(gflags
+ PREFIX ${gflags_PREFIX}
+ GIT_REPOSITORY "https://github.com/gflags/gflags.git"
+ GIT_TAG "v2.1.2"
+ UPDATE_COMMAND ""
+ INSTALL_DIR ${gflags_INSTALL}
+ CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
+ -DCMAKE_INSTALL_PREFIX=${gflags_INSTALL}
+ -DBUILD_SHARED_LIBS=OFF
+ -DBUILD_STATIC_LIBS=ON
+ -DBUILD_PACKAGING=OFF
+ -DBUILD_TESTING=OFF
+ -DBUILD_NC_TESTS=OFF
+ -DBUILD_CONFIG_TESTS=OFF
+ -DINSTALL_HEADERS=ON
+ -DCMAKE_C_FLAGS=${GFLAGS_C_FLAGS}
+ -DCMAKE_CXX_FLAGS=${GFLAGS_CXX_FLAGS}
+ LOG_DOWNLOAD 1
+ LOG_INSTALL 1
+ )
+
+ set(GFLAGS_FOUND TRUE)
+ set(GFLAGS_INCLUDE_DIRS ${gflags_INSTALL}/include)
+ set(GFLAGS_LIBRARIES ${gflags_INSTALL}/lib/libgflags.a ${CMAKE_THREAD_LIBS_INIT})
+ set(GFLAGS_LIBRARY_DIRS ${gflags_INSTALL}/lib)
+ set(GFLAGS_EXTERNAL TRUE)
+
+ list(APPEND external_project_dependencies gflags)
+ endif()
+
+endif()
diff --git a/caffe-crfrnn/cmake/External/glog.cmake b/caffe-crfrnn/cmake/External/glog.cmake
new file mode 100644
index 00000000..a44672f2
--- /dev/null
+++ b/caffe-crfrnn/cmake/External/glog.cmake
@@ -0,0 +1,56 @@
+# glog depends on gflags
+include("cmake/External/gflags.cmake")
+
+if (NOT __GLOG_INCLUDED)
+ set(__GLOG_INCLUDED TRUE)
+
+ # try the system-wide glog first
+ find_package(Glog)
+ if (GLOG_FOUND)
+ set(GLOG_EXTERNAL FALSE)
+ else()
+ # fetch and build glog from github
+
+ # build directory
+ set(glog_PREFIX ${CMAKE_BINARY_DIR}/external/glog-prefix)
+ # install directory
+ set(glog_INSTALL ${CMAKE_BINARY_DIR}/external/glog-install)
+
+ # we build glog statically, but want to link it into the caffe shared library
+ # this requires position-independent code
+ if (UNIX)
+ set(GLOG_EXTRA_COMPILER_FLAGS "-fPIC")
+ endif()
+
+ set(GLOG_CXX_FLAGS ${CMAKE_CXX_FLAGS} ${GLOG_EXTRA_COMPILER_FLAGS})
+ set(GLOG_C_FLAGS ${CMAKE_C_FLAGS} ${GLOG_EXTRA_COMPILER_FLAGS})
+
+ # depend on gflags if we're also building it
+ if (GFLAGS_EXTERNAL)
+ set(GLOG_DEPENDS gflags)
+ endif()
+
+ ExternalProject_Add(glog
+ DEPENDS ${GLOG_DEPENDS}
+ PREFIX ${glog_PREFIX}
+ GIT_REPOSITORY "https://github.com/google/glog"
+ GIT_TAG "v0.3.4"
+ UPDATE_COMMAND ""
+ INSTALL_DIR ${glog_INSTALL}
+ CONFIGURE_COMMAND env "CFLAGS=${GLOG_C_FLAGS}" "CXXFLAGS=${GLOG_CXX_FLAGS}" ${glog_PREFIX}/src/glog/configure --prefix=${glog_INSTALL} --enable-shared=no --enable-static=yes --with-gflags=${GFLAGS_LIBRARY_DIRS}/..
+ LOG_DOWNLOAD 1
+ LOG_CONFIGURE 1
+ LOG_INSTALL 1
+ )
+
+ set(GLOG_FOUND TRUE)
+ set(GLOG_INCLUDE_DIRS ${glog_INSTALL}/include)
+ set(GLOG_LIBRARIES ${GFLAGS_LIBRARIES} ${glog_INSTALL}/lib/libglog.a)
+ set(GLOG_LIBRARY_DIRS ${glog_INSTALL}/lib)
+ set(GLOG_EXTERNAL TRUE)
+
+ list(APPEND external_project_dependencies glog)
+ endif()
+
+endif()
+
diff --git a/caffe-crfrnn/cmake/Misc.cmake b/caffe-crfrnn/cmake/Misc.cmake
new file mode 100644
index 00000000..9dd2609b
--- /dev/null
+++ b/caffe-crfrnn/cmake/Misc.cmake
@@ -0,0 +1,52 @@
+# ---[ Configuration types
+set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "Possible configurations" FORCE)
+mark_as_advanced(CMAKE_CONFIGURATION_TYPES)
+
+if(DEFINED CMAKE_BUILD_TYPE)
+ set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS ${CMAKE_CONFIGURATION_TYPES})
+endif()
+
+# --[ If user doesn't specify build type then assume release
+if("${CMAKE_BUILD_TYPE}" STREQUAL "")
+ set(CMAKE_BUILD_TYPE Release)
+endif()
+
+if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
+ set(CMAKE_COMPILER_IS_CLANGXX TRUE)
+endif()
+
+# ---[ Solution folders
+caffe_option(USE_PROJECT_FOLDERS "IDE Solution folders" (MSVC_IDE OR CMAKE_GENERATOR MATCHES Xcode) )
+
+if(USE_PROJECT_FOLDERS)
+ set_property(GLOBAL PROPERTY USE_FOLDERS ON)
+ set_property(GLOBAL PROPERTY PREDEFINED_TARGETS_FOLDER "CMakeTargets")
+endif()
+
+# ---[ Install options
+if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
+ set(CMAKE_INSTALL_PREFIX "${PROJECT_BINARY_DIR}/install" CACHE PATH "Default install path" FORCE)
+endif()
+
+# ---[ RPATH settings
+set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE CACHE BOOL "Use link paths for shared library rpath")
+set(CMAKE_MACOSX_RPATH TRUE)
+
+list(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES ${CMAKE_INSTALL_PREFIX}/lib __is_system_dir)
+if(${__is_system_dir} STREQUAL -1)
+ set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/lib)
+endif()
+
+# ---[ Funny target
+if(UNIX OR APPLE)
+ add_custom_target(symlink_to_build COMMAND "ln" "-sf" "${PROJECT_BINARY_DIR}" "${PROJECT_SOURCE_DIR}/build"
+ COMMENT "Adding symlink: /build -> ${PROJECT_BINARY_DIR}" )
+endif()
+
+# ---[ Set debug postfix
+set(Caffe_DEBUG_POSTFIX "-d")
+
+set(Caffe_POSTFIX "")
+if(CMAKE_BUILD_TYPE MATCHES "Debug")
+ set(Caffe_POSTFIX ${Caffe_DEBUG_POSTFIX})
+endif()
diff --git a/caffe-crfrnn/cmake/Modules/FindAtlas.cmake b/caffe-crfrnn/cmake/Modules/FindAtlas.cmake
new file mode 100644
index 00000000..6e156435
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindAtlas.cmake
@@ -0,0 +1,52 @@
+# Find the Atlas (and Lapack) libraries
+#
+# The following variables are optionally searched for defaults
+# Atlas_ROOT_DIR: Base directory where all Atlas components are found
+#
+# The following are set after configuration is done:
+# Atlas_FOUND
+# Atlas_INCLUDE_DIRS
+# Atlas_LIBRARIES
+# Atlas_LIBRARY_DIRS
+
+set(Atlas_INCLUDE_SEARCH_PATHS
+ /usr/include/atlas
+ /usr/include/atlas-base
+ $ENV{Atlas_ROOT_DIR}
+ $ENV{Atlas_ROOT_DIR}/include
+)
+
+set(Atlas_LIB_SEARCH_PATHS
+ /usr/lib/atlas
+ /usr/lib/atlas-base
+ $ENV{Atlas_ROOT_DIR}
+ $ENV{Atlas_ROOT_DIR}/lib
+)
+
+find_path(Atlas_CBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
+find_path(Atlas_CLAPACK_INCLUDE_DIR NAMES clapack.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
+
+find_library(Atlas_CBLAS_LIBRARY NAMES ptcblas_r ptcblas cblas_r cblas PATHS ${Atlas_LIB_SEARCH_PATHS})
+find_library(Atlas_BLAS_LIBRARY NAMES atlas_r atlas PATHS ${Atlas_LIB_SEARCH_PATHS})
+find_library(Atlas_LAPACK_LIBRARY NAMES alapack_r alapack lapack_atlas PATHS ${Atlas_LIB_SEARCH_PATHS})
+
+set(LOOKED_FOR
+ Atlas_CBLAS_INCLUDE_DIR
+ Atlas_CLAPACK_INCLUDE_DIR
+
+ Atlas_CBLAS_LIBRARY
+ Atlas_BLAS_LIBRARY
+ Atlas_LAPACK_LIBRARY
+)
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(Atlas DEFAULT_MSG ${LOOKED_FOR})
+
+if(ATLAS_FOUND)
+ set(Atlas_INCLUDE_DIR ${Atlas_CBLAS_INCLUDE_DIR} ${Atlas_CLAPACK_INCLUDE_DIR})
+ set(Atlas_LIBRARIES ${Atlas_LAPACK_LIBRARY} ${Atlas_CBLAS_LIBRARY} ${Atlas_BLAS_LIBRARY})
+ mark_as_advanced(${LOOKED_FOR})
+
+ message(STATUS "Found Atlas (include: ${Atlas_CBLAS_INCLUDE_DIR}, library: ${Atlas_BLAS_LIBRARY})")
+endif(ATLAS_FOUND)
+
diff --git a/caffe-crfrnn/cmake/Modules/FindGFlags.cmake b/caffe-crfrnn/cmake/Modules/FindGFlags.cmake
new file mode 100644
index 00000000..29b60f05
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindGFlags.cmake
@@ -0,0 +1,50 @@
+# - Try to find GFLAGS
+#
+# The following variables are optionally searched for defaults
+# GFLAGS_ROOT_DIR: Base directory where all GFLAGS components are found
+#
+# The following are set after configuration is done:
+# GFLAGS_FOUND
+# GFLAGS_INCLUDE_DIRS
+# GFLAGS_LIBRARIES
+# GFLAGS_LIBRARY_DIRS
+
+include(FindPackageHandleStandardArgs)
+
+set(GFLAGS_ROOT_DIR "" CACHE PATH "Folder contains Gflags")
+
+# We are testing only a couple of files in the include directories
+if(WIN32)
+ find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
+ PATHS ${GFLAGS_ROOT_DIR}/src/windows)
+else()
+ find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
+ PATHS ${GFLAGS_ROOT_DIR})
+endif()
+
+if(MSVC)
+ find_library(GFLAGS_LIBRARY_RELEASE
+ NAMES libgflags
+ PATHS ${GFLAGS_ROOT_DIR}
+ PATH_SUFFIXES Release)
+
+ find_library(GFLAGS_LIBRARY_DEBUG
+ NAMES libgflags-debug
+ PATHS ${GFLAGS_ROOT_DIR}
+ PATH_SUFFIXES Debug)
+
+ set(GFLAGS_LIBRARY optimized ${GFLAGS_LIBRARY_RELEASE} debug ${GFLAGS_LIBRARY_DEBUG})
+else()
+ find_library(GFLAGS_LIBRARY gflags)
+endif()
+
+find_package_handle_standard_args(GFlags DEFAULT_MSG GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)
+
+
+if(GFLAGS_FOUND)
+ set(GFLAGS_INCLUDE_DIRS ${GFLAGS_INCLUDE_DIR})
+ set(GFLAGS_LIBRARIES ${GFLAGS_LIBRARY})
+ message(STATUS "Found gflags (include: ${GFLAGS_INCLUDE_DIR}, library: ${GFLAGS_LIBRARY})")
+ mark_as_advanced(GFLAGS_LIBRARY_DEBUG GFLAGS_LIBRARY_RELEASE
+ GFLAGS_LIBRARY GFLAGS_INCLUDE_DIR GFLAGS_ROOT_DIR)
+endif()
diff --git a/caffe-crfrnn/cmake/Modules/FindGlog.cmake b/caffe-crfrnn/cmake/Modules/FindGlog.cmake
new file mode 100644
index 00000000..99abbe47
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindGlog.cmake
@@ -0,0 +1,48 @@
+# - Try to find Glog
+#
+# The following variables are optionally searched for defaults
+# GLOG_ROOT_DIR: Base directory where all GLOG components are found
+#
+# The following are set after configuration is done:
+# GLOG_FOUND
+# GLOG_INCLUDE_DIRS
+# GLOG_LIBRARIES
+# GLOG_LIBRARY_DIRS
+
+include(FindPackageHandleStandardArgs)
+
+set(GLOG_ROOT_DIR "" CACHE PATH "Folder contains Google glog")
+
+if(WIN32)
+ find_path(GLOG_INCLUDE_DIR glog/logging.h
+ PATHS ${GLOG_ROOT_DIR}/src/windows)
+else()
+ find_path(GLOG_INCLUDE_DIR glog/logging.h
+ PATHS ${GLOG_ROOT_DIR})
+endif()
+
+if(MSVC)
+ find_library(GLOG_LIBRARY_RELEASE libglog_static
+ PATHS ${GLOG_ROOT_DIR}
+ PATH_SUFFIXES Release)
+
+ find_library(GLOG_LIBRARY_DEBUG libglog_static
+ PATHS ${GLOG_ROOT_DIR}
+ PATH_SUFFIXES Debug)
+
+ set(GLOG_LIBRARY optimized ${GLOG_LIBRARY_RELEASE} debug ${GLOG_LIBRARY_DEBUG})
+else()
+ find_library(GLOG_LIBRARY glog
+ PATHS ${GLOG_ROOT_DIR}
+ PATH_SUFFIXES lib lib64)
+endif()
+
+find_package_handle_standard_args(Glog DEFAULT_MSG GLOG_INCLUDE_DIR GLOG_LIBRARY)
+
+if(GLOG_FOUND)
+ set(GLOG_INCLUDE_DIRS ${GLOG_INCLUDE_DIR})
+ set(GLOG_LIBRARIES ${GLOG_LIBRARY})
+ message(STATUS "Found glog (include: ${GLOG_INCLUDE_DIR}, library: ${GLOG_LIBRARY})")
+ mark_as_advanced(GLOG_ROOT_DIR GLOG_LIBRARY_RELEASE GLOG_LIBRARY_DEBUG
+ GLOG_LIBRARY GLOG_INCLUDE_DIR)
+endif()
diff --git a/caffe-crfrnn/cmake/Modules/FindLAPACK.cmake b/caffe-crfrnn/cmake/Modules/FindLAPACK.cmake
new file mode 100644
index 00000000..9641c45d
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindLAPACK.cmake
@@ -0,0 +1,190 @@
+# - Find LAPACK library
+# This module finds an installed fortran library that implements the LAPACK
+# linear-algebra interface (see http://www.netlib.org/lapack/).
+#
+# The approach follows that taken for the autoconf macro file, acx_lapack.m4
+# (distributed at http://ac-archive.sourceforge.net/ac-archive/acx_lapack.html).
+#
+# This module sets the following variables:
+# LAPACK_FOUND - set to true if a library implementing the LAPACK interface is found
+# LAPACK_LIBRARIES - list of libraries (using full path name) for LAPACK
+
+# Note: I do not think it is a good idea to mix up different BLAS/LAPACK versions
+# Hence, this script wants to find a LAPACK library matching your BLAS library
+
+# Do nothing if LAPACK was found before
+IF(NOT LAPACK_FOUND)
+
+SET(LAPACK_LIBRARIES)
+SET(LAPACK_INFO)
+
+IF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
+ FIND_PACKAGE(BLAS)
+ELSE(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
+ FIND_PACKAGE(BLAS REQUIRED)
+ENDIF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
+
+# Old search lapack script
+include(CheckFortranFunctionExists)
+
+macro(Check_Lapack_Libraries LIBRARIES _prefix _name _flags _list _blas)
+ # This macro checks for the existence of the combination of fortran libraries
+ # given by _list. If the combination is found, this macro checks (using the
+ # Check_Fortran_Function_Exists macro) whether we can link against that library
+ # combination using the name of a routine given by _name using the linker
+ # flags given by _flags. If the combination of libraries is found and passes
+ # the link test, LIBRARIES is set to the list of complete library paths that
+ # have been found. Otherwise, LIBRARIES is set to FALSE.
+ # N.B. _prefix is the prefix applied to the names of all cached variables that
+ # are generated internally and marked advanced by this macro.
+ set(_libraries_work TRUE)
+ set(${LIBRARIES})
+ set(_combined_name)
+ foreach(_library ${_list})
+ set(_combined_name ${_combined_name}_${_library})
+ if(_libraries_work)
+ if (WIN32)
+ find_library(${_prefix}_${_library}_LIBRARY
+ NAMES ${_library} PATHS ENV LIB PATHS ENV PATH)
+ else (WIN32)
+ if(APPLE)
+ find_library(${_prefix}_${_library}_LIBRARY
+ NAMES ${_library}
+ PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64
+ ENV DYLD_LIBRARY_PATH)
+ else(APPLE)
+ find_library(${_prefix}_${_library}_LIBRARY
+ NAMES ${_library}
+ PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64
+ ENV LD_LIBRARY_PATH)
+ endif(APPLE)
+ endif(WIN32)
+ mark_as_advanced(${_prefix}_${_library}_LIBRARY)
+ set(${LIBRARIES} ${${LIBRARIES}} ${${_prefix}_${_library}_LIBRARY})
+ set(_libraries_work ${${_prefix}_${_library}_LIBRARY})
+ endif(_libraries_work)
+ endforeach(_library ${_list})
+ if(_libraries_work)
+ # Test this combination of libraries.
+ set(CMAKE_REQUIRED_LIBRARIES ${_flags} ${${LIBRARIES}} ${_blas})
+ if (CMAKE_Fortran_COMPILER_WORKS)
+ check_fortran_function_exists(${_name} ${_prefix}${_combined_name}_WORKS)
+ else (CMAKE_Fortran_COMPILER_WORKS)
+ check_function_exists("${_name}_" ${_prefix}${_combined_name}_WORKS)
+ endif (CMAKE_Fortran_COMPILER_WORKS)
+ set(CMAKE_REQUIRED_LIBRARIES)
+ mark_as_advanced(${_prefix}${_combined_name}_WORKS)
+ set(_libraries_work ${${_prefix}${_combined_name}_WORKS})
+ endif(_libraries_work)
+ if(NOT _libraries_work)
+ set(${LIBRARIES} FALSE)
+ endif(NOT _libraries_work)
+endmacro(Check_Lapack_Libraries)
+
+
+if(BLAS_FOUND)
+
+ # Intel MKL
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "mkl"))
+ IF(MKL_LAPACK_LIBRARIES)
+ SET(LAPACK_LIBRARIES ${MKL_LAPACK_LIBRARIES} ${MKL_LIBRARIES})
+ ELSE(MKL_LAPACK_LIBRARIES)
+ SET(LAPACK_LIBRARIES ${MKL_LIBRARIES})
+ ENDIF(MKL_LAPACK_LIBRARIES)
+ SET(LAPACK_INCLUDE_DIR ${MKL_INCLUDE_DIR})
+ SET(LAPACK_INFO "mkl")
+ ENDIF()
+
+ # OpenBlas
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "open"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" OPEN_LAPACK_WORKS)
+ if(OPEN_LAPACK_WORKS)
+ SET(LAPACK_INFO "open")
+ else()
+ message(STATUS "It seems OpenBlas has not been compiled with Lapack support")
+ endif()
+ endif()
+
+ # GotoBlas
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "goto"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" GOTO_LAPACK_WORKS)
+ if(GOTO_LAPACK_WORKS)
+ SET(LAPACK_INFO "goto")
+ else()
+ message(STATUS "It seems GotoBlas has not been compiled with Lapack support")
+ endif()
+ endif()
+
+ # ACML
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "acml"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" ACML_LAPACK_WORKS)
+ if(ACML_LAPACK_WORKS)
+ SET(LAPACK_INFO "acml")
+ else()
+ message(STATUS "Strangely, this ACML library does not support Lapack?!")
+ endif()
+ endif()
+
+ # Accelerate
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "accelerate"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" ACCELERATE_LAPACK_WORKS)
+ if(ACCELERATE_LAPACK_WORKS)
+ SET(LAPACK_INFO "accelerate")
+ else()
+ message(STATUS "Strangely, this Accelerate library does not support Lapack?!")
+ endif()
+ endif()
+
+ # vecLib
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "veclib"))
+ SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
+ check_function_exists("cheev_" VECLIB_LAPACK_WORKS)
+ if(VECLIB_LAPACK_WORKS)
+ SET(LAPACK_INFO "veclib")
+ else()
+ message(STATUS "Strangely, this vecLib library does not support Lapack?!")
+ endif()
+ endif()
+
+ # Generic LAPACK library?
+ IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "generic"))
+ check_lapack_libraries(
+ LAPACK_LIBRARIES
+ LAPACK
+ cheev
+ ""
+ "lapack"
+ "${BLAS_LIBRARIES}"
+ )
+ if(LAPACK_LIBRARIES)
+ SET(LAPACK_INFO "generic")
+ endif(LAPACK_LIBRARIES)
+ endif()
+
+else(BLAS_FOUND)
+ message(STATUS "LAPACK requires BLAS")
+endif(BLAS_FOUND)
+
+if(LAPACK_INFO)
+ set(LAPACK_FOUND TRUE)
+else(LAPACK_INFO)
+ set(LAPACK_FOUND FALSE)
+endif(LAPACK_INFO)
+
+IF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)
+ message(FATAL_ERROR "Cannot find a library with LAPACK API. Please specify library location.")
+ENDIF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)
+IF(NOT LAPACK_FIND_QUIETLY)
+ IF(LAPACK_FOUND)
+ MESSAGE(STATUS "Found a library with LAPACK API. (${LAPACK_INFO})")
+ ELSE(LAPACK_FOUND)
+ MESSAGE(STATUS "Cannot find a library with LAPACK API. Not using LAPACK.")
+ ENDIF(LAPACK_FOUND)
+ENDIF(NOT LAPACK_FIND_QUIETLY)
+
+# Do nothing if LAPACK was found before
+ENDIF(NOT LAPACK_FOUND)
diff --git a/caffe-crfrnn/cmake/Modules/FindLMDB.cmake b/caffe-crfrnn/cmake/Modules/FindLMDB.cmake
new file mode 100644
index 00000000..8a817fd6
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindLMDB.cmake
@@ -0,0 +1,28 @@
+# Try to find the LMDB libraries and headers
+# LMDB_FOUND - system has LMDB lib
+# LMDB_INCLUDE_DIR - the LMDB include directory
+# LMDB_LIBRARIES - Libraries needed to use LMDB
+
+# FindCWD based on FindGMP by:
+# Copyright (c) 2006, Laurent Montel,
+#
+# Redistribution and use is allowed according to the terms of the BSD license.
+
+# Adapted from FindCWD by:
+# Copyright 2013 Conrad Steenberg
+# Aug 31, 2013
+
+find_path(LMDB_INCLUDE_DIR NAMES lmdb.h PATHS "$ENV{LMDB_DIR}/include")
+find_library(LMDB_LIBRARIES NAMES lmdb PATHS "$ENV{LMDB_DIR}/lib" )
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(LMDB DEFAULT_MSG LMDB_INCLUDE_DIR LMDB_LIBRARIES)
+
+if(LMDB_FOUND)
+ message(STATUS "Found lmdb (include: ${LMDB_INCLUDE_DIR}, library: ${LMDB_LIBRARIES})")
+ mark_as_advanced(LMDB_INCLUDE_DIR LMDB_LIBRARIES)
+
+ caffe_parse_header(${LMDB_INCLUDE_DIR}/lmdb.h
+ LMDB_VERSION_LINES MDB_VERSION_MAJOR MDB_VERSION_MINOR MDB_VERSION_PATCH)
+ set(LMDB_VERSION "${MDB_VERSION_MAJOR}.${MDB_VERSION_MINOR}.${MDB_VERSION_PATCH}")
+endif()
diff --git a/caffe-crfrnn/cmake/Modules/FindLevelDB.cmake b/caffe-crfrnn/cmake/Modules/FindLevelDB.cmake
new file mode 100644
index 00000000..97f08ac9
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindLevelDB.cmake
@@ -0,0 +1,44 @@
+# - Find LevelDB
+#
+# LevelDB_INCLUDES - List of LevelDB includes
+# LevelDB_LIBRARIES - List of libraries when using LevelDB.
+# LevelDB_FOUND - True if LevelDB found.
+
+# Look for the header file.
+find_path(LevelDB_INCLUDE NAMES leveldb/db.h
+ PATHS $ENV{LEVELDB_ROOT}/include /opt/local/include /usr/local/include /usr/include
+ DOC "Path in which the file leveldb/db.h is located." )
+
+# Look for the library.
+find_library(LevelDB_LIBRARY NAMES leveldb
+ PATHS /usr/lib $ENV{LEVELDB_ROOT}/lib
+ DOC "Path to leveldb library." )
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(LevelDB DEFAULT_MSG LevelDB_INCLUDE LevelDB_LIBRARY)
+
+if(LEVELDB_FOUND)
+ message(STATUS "Found LevelDB (include: ${LevelDB_INCLUDE}, library: ${LevelDB_LIBRARY})")
+ set(LevelDB_INCLUDES ${LevelDB_INCLUDE})
+ set(LevelDB_LIBRARIES ${LevelDB_LIBRARY})
+ mark_as_advanced(LevelDB_INCLUDE LevelDB_LIBRARY)
+
+ if(EXISTS "${LevelDB_INCLUDE}/leveldb/db.h")
+ file(STRINGS "${LevelDB_INCLUDE}/leveldb/db.h" __version_lines
+ REGEX "static const int k[^V]+Version[ \t]+=[ \t]+[0-9]+;")
+
+ foreach(__line ${__version_lines})
+ if(__line MATCHES "[^k]+kMajorVersion[ \t]+=[ \t]+([0-9]+);")
+ set(LEVELDB_VERSION_MAJOR ${CMAKE_MATCH_1})
+ elseif(__line MATCHES "[^k]+kMinorVersion[ \t]+=[ \t]+([0-9]+);")
+ set(LEVELDB_VERSION_MINOR ${CMAKE_MATCH_1})
+ endif()
+ endforeach()
+
+ if(LEVELDB_VERSION_MAJOR AND LEVELDB_VERSION_MINOR)
+ set(LEVELDB_VERSION "${LEVELDB_VERSION_MAJOR}.${LEVELDB_VERSION_MINOR}")
+ endif()
+
+ caffe_clear_vars(__line __version_lines)
+ endif()
+endif()
diff --git a/caffe-crfrnn/cmake/Modules/FindMKL.cmake b/caffe-crfrnn/cmake/Modules/FindMKL.cmake
new file mode 100644
index 00000000..d2012db5
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindMKL.cmake
@@ -0,0 +1,110 @@
+# Find the MKL libraries
+#
+# Options:
+#
+# MKL_USE_SINGLE_DYNAMIC_LIBRARY : use single dynamic library interface
+# MKL_USE_STATIC_LIBS : use static libraries
+# MKL_MULTI_THREADED : use multi-threading
+#
+# This module defines the following variables:
+#
+# MKL_FOUND : True if MKL is found
+# MKL_INCLUDE_DIR : include directory
+# MKL_LIBRARIES : the libraries to link against.
+
+
+# ---[ Options
+caffe_option(MKL_USE_SINGLE_DYNAMIC_LIBRARY "Use single dynamic library interface" ON)
+caffe_option(MKL_USE_STATIC_LIBS "Use static libraries" OFF IF NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)
+caffe_option(MKL_MULTI_THREADED "Use multi-threading" ON IF NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)
+
+# ---[ Root folders
+set(INTEL_ROOT "/opt/intel" CACHE PATH "Folder containing Intel libraries")
+find_path(MKL_ROOT include/mkl.h PATHS $ENV{MKL_ROOT} ${INTEL_ROOT}/mkl
+ DOC "Folder containing MKL")
+
+# ---[ Find include dir
+find_path(MKL_INCLUDE_DIR mkl.h PATHS ${MKL_ROOT} PATH_SUFFIXES include)
+set(__looked_for MKL_INCLUDE_DIR)
+
+# ---[ Find libraries
+if(CMAKE_SIZEOF_VOID_P EQUAL 4)
+ set(__path_suffixes lib lib/ia32)
+else()
+ set(__path_suffixes lib lib/intel64)
+endif()
+
+set(__mkl_libs "")
+if(MKL_USE_SINGLE_DYNAMIC_LIBRARY)
+ list(APPEND __mkl_libs rt)
+else()
+ if(CMAKE_SIZEOF_VOID_P EQUAL 4)
+ if(WIN32)
+ list(APPEND __mkl_libs intel_c)
+ else()
+ list(APPEND __mkl_libs intel gf)
+ endif()
+ else()
+ list(APPEND __mkl_libs intel_lp64 gf_lp64)
+ endif()
+
+ if(MKL_MULTI_THREADED)
+ list(APPEND __mkl_libs intel_thread)
+ else()
+ list(APPEND __mkl_libs sequential)
+ endif()
+
+ list(APPEND __mkl_libs core cdft_core)
+endif()
+
+
+foreach (__lib ${__mkl_libs})
+ set(__mkl_lib "mkl_${__lib}")
+ string(TOUPPER ${__mkl_lib} __mkl_lib_upper)
+
+ if(MKL_USE_STATIC_LIBS)
+ set(__mkl_lib "lib${__mkl_lib}.a")
+ endif()
+
+ find_library(${__mkl_lib_upper}_LIBRARY
+ NAMES ${__mkl_lib}
+ PATHS ${MKL_ROOT} "${MKL_INCLUDE_DIR}/.."
+ PATH_SUFFIXES ${__path_suffixes}
+ DOC "The path to Intel(R) MKL ${__mkl_lib} library")
+ mark_as_advanced(${__mkl_lib_upper}_LIBRARY)
+
+ list(APPEND __looked_for ${__mkl_lib_upper}_LIBRARY)
+ list(APPEND MKL_LIBRARIES ${${__mkl_lib_upper}_LIBRARY})
+endforeach()
+
+
+if(NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)
+ if (MKL_USE_STATIC_LIBS)
+ set(__iomp5_libs iomp5 libiomp5mt.lib)
+ else()
+ set(__iomp5_libs iomp5 libiomp5md.lib)
+ endif()
+
+ if(WIN32)
+ find_path(INTEL_INCLUDE_DIR omp.h PATHS ${INTEL_ROOT} PATH_SUFFIXES include)
+ list(APPEND __looked_for INTEL_INCLUDE_DIR)
+ endif()
+
+ find_library(MKL_RTL_LIBRARY ${__iomp5_libs}
+ PATHS ${INTEL_RTL_ROOT} ${INTEL_ROOT}/compiler ${MKL_ROOT}/.. ${MKL_ROOT}/../compiler
+ PATH_SUFFIXES ${__path_suffixes}
+ DOC "Path to Path to OpenMP runtime library")
+
+ list(APPEND __looked_for MKL_RTL_LIBRARY)
+ list(APPEND MKL_LIBRARIES ${MKL_RTL_LIBRARY})
+endif()
+
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(MKL DEFAULT_MSG ${__looked_for})
+
+if(MKL_FOUND)
+ message(STATUS "Found MKL (include: ${MKL_INCLUDE_DIR}, lib: ${MKL_LIBRARIES}")
+endif()
+
+caffe_clear_vars(__looked_for __mkl_libs __path_suffixes __lib_suffix __iomp5_libs)
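Because the module's library list is driven by the three `caffe_option()` switches above, `cmake/Utils.cmake` has to be included before the module runs. A sketch under that assumption (the `my_tool` target is hypothetical):

```cmake
include(cmake/Utils.cmake)                    # provides caffe_option()
list(APPEND CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake/Modules")

# Opt out of the single-dynamic-library interface to get the threaded split libs.
set(MKL_USE_SINGLE_DYNAMIC_LIBRARY OFF CACHE BOOL "" FORCE)

find_package(MKL)
if(MKL_FOUND)
  include_directories(SYSTEM ${MKL_INCLUDE_DIR})
  target_link_libraries(my_tool ${MKL_LIBRARIES})  # placeholder target
endif()
```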
diff --git a/caffe-crfrnn/cmake/Modules/FindMatlabMex.cmake b/caffe-crfrnn/cmake/Modules/FindMatlabMex.cmake
new file mode 100644
index 00000000..28ae65e7
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindMatlabMex.cmake
@@ -0,0 +1,48 @@
+# This module looks for MatlabMex compiler
+# Defines variables:
+# Matlab_DIR - Matlab root dir
+# Matlab_mex - path to mex compiler
+# Matlab_mexext - path to mexext
+
+if(MSVC)
+ foreach(__ver "9.30" "7.14" "7.11" "7.10" "7.9" "7.8" "7.7")
+ get_filename_component(__matlab_root "[HKEY_LOCAL_MACHINE\\SOFTWARE\\MathWorks\\MATLAB\\${__ver};MATLABROOT]" ABSOLUTE)
+ if(__matlab_root)
+ break()
+ endif()
+ endforeach()
+endif()
+
+if(APPLE)
+ foreach(__ver "R2014b" "R2014a" "R2013b" "R2013a" "R2012b" "R2012a" "R2011b" "R2011a" "R2010b" "R2010a")
+ if(EXISTS /Applications/MATLAB_${__ver}.app)
+ set(__matlab_root /Applications/MATLAB_${__ver}.app)
+ break()
+ endif()
+ endforeach()
+endif()
+
+if(UNIX)
+ execute_process(COMMAND which matlab OUTPUT_STRIP_TRAILING_WHITESPACE
+ OUTPUT_VARIABLE __out RESULT_VARIABLE __res)
+
+ if(__res MATCHES 0) # Suppress `readlink` warning if `which` returned nothing
+ execute_process(COMMAND which matlab COMMAND xargs readlink
+ COMMAND xargs dirname COMMAND xargs dirname COMMAND xargs echo -n
+ OUTPUT_VARIABLE __matlab_root OUTPUT_STRIP_TRAILING_WHITESPACE)
+ endif()
+endif()
+
+
+find_path(Matlab_DIR NAMES bin/mex bin/mexext PATHS ${__matlab_root}
+ DOC "Matlab directory" NO_DEFAULT_PATH)
+
+find_program(Matlab_mex NAMES mex mex.bat HINTS ${Matlab_DIR} PATH_SUFFIXES bin NO_DEFAULT_PATH)
+find_program(Matlab_mexext NAMES mexext mexext.bat HINTS ${Matlab_DIR} PATH_SUFFIXES bin NO_DEFAULT_PATH)
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(MatlabMex DEFAULT_MSG Matlab_mex Matlab_mexext)
+
+if(MATLABMEX_FOUND)
+ mark_as_advanced(Matlab_mex Matlab_mexext)
+endif()
diff --git a/caffe-crfrnn/cmake/Modules/FindNumPy.cmake b/caffe-crfrnn/cmake/Modules/FindNumPy.cmake
new file mode 100644
index 00000000..a671494c
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindNumPy.cmake
@@ -0,0 +1,58 @@
+# - Find the NumPy libraries
+# This module finds if NumPy is installed, and sets the following variables
+# indicating where it is.
+#
+# TODO: Update to provide the libraries and paths for linking npymath lib.
+#
+# NUMPY_FOUND - was NumPy found
+# NUMPY_VERSION - the version of NumPy found as a string
+# NUMPY_VERSION_MAJOR - the major version number of NumPy
+# NUMPY_VERSION_MINOR - the minor version number of NumPy
+# NUMPY_VERSION_PATCH - the patch version number of NumPy
+# NUMPY_VERSION_DECIMAL - e.g. version 1.6.1 is 10601
+# NUMPY_INCLUDE_DIR - path to the NumPy include files
+
+unset(NUMPY_VERSION)
+unset(NUMPY_INCLUDE_DIR)
+
+if(PYTHONINTERP_FOUND)
+ execute_process(COMMAND "${PYTHON_EXECUTABLE}" "-c"
+ "import numpy as n; print(n.__version__); print(n.get_include());"
+ RESULT_VARIABLE __result
+ OUTPUT_VARIABLE __output
+ OUTPUT_STRIP_TRAILING_WHITESPACE)
+
+ if(__result MATCHES 0)
+ string(REGEX REPLACE ";" "\\\\;" __values ${__output})
+ string(REGEX REPLACE "\r?\n" ";" __values ${__values})
+ list(GET __values 0 NUMPY_VERSION)
+ list(GET __values 1 NUMPY_INCLUDE_DIR)
+
+ string(REGEX MATCH "^([0-9])+\\.([0-9])+\\.([0-9])+" __ver_check "${NUMPY_VERSION}")
+ if(NOT "${__ver_check}" STREQUAL "")
+ set(NUMPY_VERSION_MAJOR ${CMAKE_MATCH_1})
+ set(NUMPY_VERSION_MINOR ${CMAKE_MATCH_2})
+ set(NUMPY_VERSION_PATCH ${CMAKE_MATCH_3})
+ math(EXPR NUMPY_VERSION_DECIMAL
+ "(${NUMPY_VERSION_MAJOR} * 10000) + (${NUMPY_VERSION_MINOR} * 100) + ${NUMPY_VERSION_PATCH}")
+ string(REGEX REPLACE "\\\\" "/" NUMPY_INCLUDE_DIR ${NUMPY_INCLUDE_DIR})
+ else()
+ unset(NUMPY_VERSION)
+ unset(NUMPY_INCLUDE_DIR)
+ message(STATUS "Requested NumPy version and include path, but got instead:\n${__output}\n")
+ endif()
+ endif()
+else()
+ message(STATUS "To find NumPy Python interpretator is required to be found.")
+endif()
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(NumPy REQUIRED_VARS NUMPY_INCLUDE_DIR NUMPY_VERSION
+ VERSION_VAR NUMPY_VERSION)
+
+if(NUMPY_FOUND)
+ message(STATUS "NumPy ver. ${NUMPY_VERSION} found (include: ${NUMPY_INCLUDE_DIR})")
+endif()
+
+caffe_clear_vars(__result __output __error_value __values __ver_check)
+
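Since the module shells out to `${PYTHON_EXECUTABLE}`, the Python interpreter must be located first. A minimal sketch (the 1.7 version floor is illustrative, not something this patch requires):

```cmake
find_package(PythonInterp REQUIRED)
find_package(NumPy 1.7 REQUIRED)   # version checked against NUMPY_VERSION via VERSION_VAR
include_directories(SYSTEM ${NUMPY_INCLUDE_DIR})
```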
diff --git a/caffe-crfrnn/cmake/Modules/FindOpenBLAS.cmake b/caffe-crfrnn/cmake/Modules/FindOpenBLAS.cmake
new file mode 100644
index 00000000..a6512ae7
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindOpenBLAS.cmake
@@ -0,0 +1,64 @@
+
+
+SET(Open_BLAS_INCLUDE_SEARCH_PATHS
+ /usr/include
+ /usr/include/openblas
+ /usr/include/openblas-base
+ /usr/local/include
+ /usr/local/include/openblas
+ /usr/local/include/openblas-base
+ /opt/OpenBLAS/include
+ $ENV{OpenBLAS_HOME}
+ $ENV{OpenBLAS_HOME}/include
+)
+
+SET(Open_BLAS_LIB_SEARCH_PATHS
+ /lib/
+ /lib/openblas-base
+ /lib64/
+ /usr/lib
+ /usr/lib/openblas-base
+ /usr/lib64
+ /usr/local/lib
+ /usr/local/lib64
+ /opt/OpenBLAS/lib
+ $ENV{OpenBLAS}
+ $ENV{OpenBLAS}/lib
+ $ENV{OpenBLAS_HOME}
+ $ENV{OpenBLAS_HOME}/lib
+ )
+
+FIND_PATH(OpenBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Open_BLAS_INCLUDE_SEARCH_PATHS})
+FIND_LIBRARY(OpenBLAS_LIB NAMES openblas PATHS ${Open_BLAS_LIB_SEARCH_PATHS})
+
+SET(OpenBLAS_FOUND ON)
+
+# Check include files
+IF(NOT OpenBLAS_INCLUDE_DIR)
+ SET(OpenBLAS_FOUND OFF)
+ MESSAGE(STATUS "Could not find OpenBLAS include. Turning OpenBLAS_FOUND off")
+ENDIF()
+
+# Check libraries
+IF(NOT OpenBLAS_LIB)
+ SET(OpenBLAS_FOUND OFF)
+ MESSAGE(STATUS "Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off")
+ENDIF()
+
+IF (OpenBLAS_FOUND)
+ IF (NOT OpenBLAS_FIND_QUIETLY)
+ MESSAGE(STATUS "Found OpenBLAS libraries: ${OpenBLAS_LIB}")
+ MESSAGE(STATUS "Found OpenBLAS include: ${OpenBLAS_INCLUDE_DIR}")
+ ENDIF (NOT OpenBLAS_FIND_QUIETLY)
+ELSE (OpenBLAS_FOUND)
+ IF (OpenBLAS_FIND_REQUIRED)
+ MESSAGE(FATAL_ERROR "Could not find OpenBLAS")
+ ENDIF (OpenBLAS_FIND_REQUIRED)
+ENDIF (OpenBLAS_FOUND)
+
+MARK_AS_ADVANCED(
+ OpenBLAS_INCLUDE_DIR
+ OpenBLAS_LIB
+ OpenBLAS
+)
+
diff --git a/caffe-crfrnn/cmake/Modules/FindSnappy.cmake b/caffe-crfrnn/cmake/Modules/FindSnappy.cmake
new file mode 100644
index 00000000..eff2a864
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindSnappy.cmake
@@ -0,0 +1,28 @@
+# Find the Snappy libraries
+#
+# The following variables are optionally searched for defaults
+# Snappy_ROOT_DIR: Base directory where all Snappy components are found
+#
+# The following are set after configuration is done:
+# SNAPPY_FOUND
+# Snappy_INCLUDE_DIR
+# Snappy_LIBRARIES
+
+find_path(Snappy_INCLUDE_DIR NAMES snappy.h
+ PATHS ${SNAPPY_ROOT_DIR} ${SNAPPY_ROOT_DIR}/include)
+
+find_library(Snappy_LIBRARIES NAMES snappy
+ PATHS ${SNAPPY_ROOT_DIR} ${SNAPPY_ROOT_DIR}/lib)
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(Snappy DEFAULT_MSG Snappy_INCLUDE_DIR Snappy_LIBRARIES)
+
+if(SNAPPY_FOUND)
+ message(STATUS "Found Snappy (include: ${Snappy_INCLUDE_DIR}, library: ${Snappy_LIBRARIES})")
+ mark_as_advanced(Snappy_INCLUDE_DIR Snappy_LIBRARIES)
+
+ caffe_parse_header(${Snappy_INCLUDE_DIR}/snappy-stubs-public.h
+ SNAPPY_VERSION_LINES SNAPPY_MAJOR SNAPPY_MINOR SNAPPY_PATCHLEVEL)
+ set(Snappy_VERSION "${SNAPPY_MAJOR}.${SNAPPY_MINOR}.${SNAPPY_PATCHLEVEL}")
+endif()
+
diff --git a/caffe-crfrnn/cmake/Modules/FindvecLib.cmake b/caffe-crfrnn/cmake/Modules/FindvecLib.cmake
new file mode 100644
index 00000000..9600da43
--- /dev/null
+++ b/caffe-crfrnn/cmake/Modules/FindvecLib.cmake
@@ -0,0 +1,34 @@
+# Find the vecLib libraries as part of Accelerate.framework or as a standalone framework
+#
+# The following are set after configuration is done:
+# VECLIB_FOUND
+# vecLib_INCLUDE_DIR
+# vecLib_LINKER_LIBS
+
+
+if(NOT APPLE)
+ return()
+endif()
+
+set(__veclib_include_suffix "Frameworks/vecLib.framework/Versions/Current/Headers")
+
+find_path(vecLib_INCLUDE_DIR vecLib.h
+ DOC "vecLib include directory"
+ PATHS /System/Library/${__veclib_include_suffix}
+ /System/Library/Frameworks/Accelerate.framework/Versions/Current/${__veclib_include_suffix}
+ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/)
+
+include(FindPackageHandleStandardArgs)
+find_package_handle_standard_args(vecLib DEFAULT_MSG vecLib_INCLUDE_DIR)
+
+if(VECLIB_FOUND)
+ if(vecLib_INCLUDE_DIR MATCHES "^/System/Library/Frameworks/vecLib.framework.*")
+ set(vecLib_LINKER_LIBS -lcblas "-framework vecLib")
+ message(STATUS "Found standalone vecLib.framework")
+ else()
+ set(vecLib_LINKER_LIBS -lcblas "-framework Accelerate")
+ message(STATUS "Found vecLib as part of Accelerate.framework")
+ endif()
+
+ mark_as_advanced(vecLib_INCLUDE_DIR)
+endif()
diff --git a/caffe-crfrnn/cmake/ProtoBuf.cmake b/caffe-crfrnn/cmake/ProtoBuf.cmake
new file mode 100644
index 00000000..fc799bd3
--- /dev/null
+++ b/caffe-crfrnn/cmake/ProtoBuf.cmake
@@ -0,0 +1,90 @@
+# Finds Google Protocol Buffers library and compilers and extends
+# the standard cmake script with version and python generation support
+
+find_package( Protobuf REQUIRED )
+include_directories(SYSTEM ${PROTOBUF_INCLUDE_DIR})
+list(APPEND Caffe_LINKER_LIBS ${PROTOBUF_LIBRARIES})
+
+# As of Ubuntu 14.04 protoc is no longer a part of libprotobuf-dev package
+# and should be installed separately as in: sudo apt-get install protobuf-compiler
+if(EXISTS ${PROTOBUF_PROTOC_EXECUTABLE})
+ message(STATUS "Found PROTOBUF Compiler: ${PROTOBUF_PROTOC_EXECUTABLE}")
+else()
+ message(FATAL_ERROR "Could not find PROTOBUF Compiler")
+endif()
+
+if(PROTOBUF_FOUND)
+ # fetches protobuf version
+ caffe_parse_header(${PROTOBUF_INCLUDE_DIR}/google/protobuf/stubs/common.h VERSION_LINE GOOGLE_PROTOBUF_VERSION)
+ string(REGEX MATCH "([0-9])00([0-9])00([0-9])" PROTOBUF_VERSION ${GOOGLE_PROTOBUF_VERSION})
+ set(PROTOBUF_VERSION "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}.${CMAKE_MATCH_3}")
+ unset(GOOGLE_PROTOBUF_VERSION)
+endif()
+
+# place where to generate protobuf sources
+set(proto_gen_folder "${PROJECT_BINARY_DIR}/include/caffe/proto")
+include_directories(SYSTEM "${PROJECT_BINARY_DIR}/include")
+
+set(PROTOBUF_GENERATE_CPP_APPEND_PATH TRUE)
+
+################################################################################################
+# Modification of standard 'protobuf_generate_cpp()' with output dir parameter and python support
+# Usage:
+# caffe_protobuf_generate_cpp_py(<output_dir> <srcs_var> <hdrs_var> <python_var> <proto_files>)
+function(caffe_protobuf_generate_cpp_py output_dir srcs_var hdrs_var python_var)
+ if(NOT ARGN)
+ message(SEND_ERROR "Error: caffe_protobuf_generate_cpp_py() called without any proto files")
+ return()
+ endif()
+
+ if(PROTOBUF_GENERATE_CPP_APPEND_PATH)
+ # Create an include path for each file specified
+ foreach(fil ${ARGN})
+ get_filename_component(abs_fil ${fil} ABSOLUTE)
+ get_filename_component(abs_path ${abs_fil} PATH)
+ list(FIND _protoc_include ${abs_path} _contains_already)
+ if(${_contains_already} EQUAL -1)
+ list(APPEND _protoc_include -I ${abs_path})
+ endif()
+ endforeach()
+ else()
+ set(_protoc_include -I ${CMAKE_CURRENT_SOURCE_DIR})
+ endif()
+
+ if(DEFINED PROTOBUF_IMPORT_DIRS)
+ foreach(dir ${PROTOBUF_IMPORT_DIRS})
+ get_filename_component(abs_path ${dir} ABSOLUTE)
+ list(FIND _protoc_include ${abs_path} _contains_already)
+ if(${_contains_already} EQUAL -1)
+ list(APPEND _protoc_include -I ${abs_path})
+ endif()
+ endforeach()
+ endif()
+
+ set(${srcs_var})
+ set(${hdrs_var})
+ set(${python_var})
+ foreach(fil ${ARGN})
+ get_filename_component(abs_fil ${fil} ABSOLUTE)
+ get_filename_component(fil_we ${fil} NAME_WE)
+
+ list(APPEND ${srcs_var} "${output_dir}/${fil_we}.pb.cc")
+ list(APPEND ${hdrs_var} "${output_dir}/${fil_we}.pb.h")
+ list(APPEND ${python_var} "${output_dir}/${fil_we}_pb2.py")
+
+ add_custom_command(
+ OUTPUT "${output_dir}/${fil_we}.pb.cc"
+ "${output_dir}/${fil_we}.pb.h"
+ "${output_dir}/${fil_we}_pb2.py"
+ COMMAND ${CMAKE_COMMAND} -E make_directory "${output_dir}"
+ COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --cpp_out ${output_dir} ${_protoc_include} ${abs_fil}
+ COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --python_out ${output_dir} ${_protoc_include} ${abs_fil}
+ DEPENDS ${abs_fil}
+ COMMENT "Running C++/Python protocol buffer compiler on ${fil}" VERBATIM )
+ endforeach()
+
+ set_source_files_properties(${${srcs_var}} ${${hdrs_var}} ${${python_var}} PROPERTIES GENERATED TRUE)
+ set(${srcs_var} ${${srcs_var}} PARENT_SCOPE)
+ set(${hdrs_var} ${${hdrs_var}} PARENT_SCOPE)
+ set(${python_var} ${${python_var}} PARENT_SCOPE)
+endfunction()
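The helper is typically invoked once on the proto sources and the results fed into a static library, roughly as sketched here; the exact call site lives elsewhere in the tree, so treat this fragment as an example under assumptions (the `proto` target name is illustrative):

```cmake
# Generate C++ and Python bindings for every .proto under src/caffe/proto.
file(GLOB proto_files ${PROJECT_SOURCE_DIR}/src/caffe/proto/*.proto)
caffe_protobuf_generate_cpp_py(${proto_gen_folder}
                               proto_srcs proto_hdrs proto_python ${proto_files})

add_library(proto STATIC ${proto_hdrs} ${proto_srcs})
target_link_libraries(proto ${PROTOBUF_LIBRARIES})
```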
diff --git a/caffe-crfrnn/cmake/Summary.cmake b/caffe-crfrnn/cmake/Summary.cmake
new file mode 100644
index 00000000..e094ac00
--- /dev/null
+++ b/caffe-crfrnn/cmake/Summary.cmake
@@ -0,0 +1,168 @@
+################################################################################################
+# Caffe status report function.
+# Automatically aligns the right column and selects text based on a condition.
+# Usage:
+# caffe_status(<text>)
+# caffe_status(<heading> <value1> [<value2> ...])
+# caffe_status(<heading> <condition> THEN <text for TRUE> ELSE <text for FALSE>)
+function(caffe_status text)
+ set(status_cond)
+ set(status_then)
+ set(status_else)
+
+ set(status_current_name "cond")
+ foreach(arg ${ARGN})
+ if(arg STREQUAL "THEN")
+ set(status_current_name "then")
+ elseif(arg STREQUAL "ELSE")
+ set(status_current_name "else")
+ else()
+ list(APPEND status_${status_current_name} ${arg})
+ endif()
+ endforeach()
+
+ if(DEFINED status_cond)
+ set(status_placeholder_length 23)
+ string(RANDOM LENGTH ${status_placeholder_length} ALPHABET " " status_placeholder)
+ string(LENGTH "${text}" status_text_length)
+ if(status_text_length LESS status_placeholder_length)
+ string(SUBSTRING "${text}${status_placeholder}" 0 ${status_placeholder_length} status_text)
+ elseif(DEFINED status_then OR DEFINED status_else)
+ message(STATUS "${text}")
+ set(status_text "${status_placeholder}")
+ else()
+ set(status_text "${text}")
+ endif()
+
+ if(DEFINED status_then OR DEFINED status_else)
+ if(${status_cond})
+ string(REPLACE ";" " " status_then "${status_then}")
+ string(REGEX REPLACE "^[ \t]+" "" status_then "${status_then}")
+ message(STATUS "${status_text} ${status_then}")
+ else()
+ string(REPLACE ";" " " status_else "${status_else}")
+ string(REGEX REPLACE "^[ \t]+" "" status_else "${status_else}")
+ message(STATUS "${status_text} ${status_else}")
+ endif()
+ else()
+ string(REPLACE ";" " " status_cond "${status_cond}")
+ string(REGEX REPLACE "^[ \t]+" "" status_cond "${status_cond}")
+ message(STATUS "${status_text} ${status_cond}")
+ endif()
+ else()
+ message(STATUS "${text}")
+ endif()
+endfunction()
+
+
+################################################################################################
+# Function for fetching Caffe version from git and headers
+# Usage:
+# caffe_extract_caffe_version()
+function(caffe_extract_caffe_version)
+ set(Caffe_GIT_VERSION "unknown")
+ find_package(Git)
+ if(GIT_FOUND)
+ execute_process(COMMAND ${GIT_EXECUTABLE} describe --tags --always --dirty
+ ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE
+ WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}"
+ OUTPUT_VARIABLE Caffe_GIT_VERSION
+ RESULT_VARIABLE __git_result)
+ if(NOT ${__git_result} EQUAL 0)
+ set(Caffe_GIT_VERSION "unknown")
+ endif()
+ endif()
+
+ set(Caffe_GIT_VERSION ${Caffe_GIT_VERSION} PARENT_SCOPE)
+ set(Caffe_VERSION " (Caffe doesn't declare its version in headers)" PARENT_SCOPE)
+
+ # caffe_parse_header(${Caffe_INCLUDE_DIR}/caffe/version.hpp Caffe_VERSION_LINES CAFFE_MAJOR CAFFE_MINOR CAFFE_PATCH)
+ # set(Caffe_VERSION "${CAFFE_MAJOR}.${CAFFE_MINOR}.${CAFFE_PATCH}" PARENT_SCOPE)
+
+ # or for #define Caffe_VERSION "x.x.x"
+ # caffe_parse_header_single_define(Caffe ${Caffe_INCLUDE_DIR}/caffe/version.hpp Caffe_VERSION)
+ # set(Caffe_VERSION ${Caffe_VERSION_STRING} PARENT_SCOPE)
+
+endfunction()
+
+
+################################################################################################
+# Prints accumulated caffe configuration summary
+# Usage:
+# caffe_print_configuration_summary()
+
+function(caffe_print_configuration_summary)
+ caffe_extract_caffe_version()
+ set(Caffe_VERSION ${Caffe_VERSION} PARENT_SCOPE)
+
+ caffe_merge_flag_lists(__flags_rel CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS)
+ caffe_merge_flag_lists(__flags_deb CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS)
+
+ caffe_status("")
+ caffe_status("******************* Caffe Configuration Summary *******************")
+ caffe_status("General:")
+ caffe_status(" Version : ${Caffe_VERSION}")
+ caffe_status(" Git : ${Caffe_GIT_VERSION}")
+ caffe_status(" System : ${CMAKE_SYSTEM_NAME}")
+ caffe_status(" C++ compiler : ${CMAKE_CXX_COMPILER}")
+ caffe_status(" Release CXX flags : ${__flags_rel}")
+ caffe_status(" Debug CXX flags : ${__flags_deb}")
+ caffe_status(" Build type : ${CMAKE_BUILD_TYPE}")
+ caffe_status("")
+ caffe_status(" BUILD_SHARED_LIBS : ${BUILD_SHARED_LIBS}")
+ caffe_status(" BUILD_python : ${BUILD_python}")
+ caffe_status(" BUILD_matlab : ${BUILD_matlab}")
+ caffe_status(" BUILD_docs : ${BUILD_docs}")
+ caffe_status(" CPU_ONLY : ${CPU_ONLY}")
+ caffe_status("")
+ caffe_status("Dependencies:")
+ caffe_status(" BLAS : " APPLE THEN "Yes (vecLib)" ELSE "Yes (${BLAS})")
+ caffe_status(" Boost : Yes (ver. ${Boost_MAJOR_VERSION}.${Boost_MINOR_VERSION})")
+ caffe_status(" glog : Yes")
+ caffe_status(" gflags : Yes")
+ caffe_status(" protobuf : " PROTOBUF_FOUND THEN "Yes (ver. ${PROTOBUF_VERSION})" ELSE "No" )
+ caffe_status(" lmdb : " LMDB_FOUND THEN "Yes (ver. ${LMDB_VERSION})" ELSE "No")
+ caffe_status(" Snappy : " SNAPPY_FOUND THEN "Yes (ver. ${Snappy_VERSION})" ELSE "No" )
+ caffe_status(" LevelDB : " LEVELDB_FOUND THEN "Yes (ver. ${LEVELDB_VERSION})" ELSE "No")
+ caffe_status(" OpenCV : Yes (ver. ${OpenCV_VERSION})")
+ caffe_status(" CUDA : " HAVE_CUDA THEN "Yes (ver. ${CUDA_VERSION})" ELSE "No" )
+ caffe_status("")
+ if(HAVE_CUDA)
+ caffe_status("NVIDIA CUDA:")
+ caffe_status(" Target GPU(s) : ${CUDA_ARCH_NAME}" )
+ caffe_status(" GPU arch(s) : ${NVCC_FLAGS_EXTRA_readable}")
+ if(USE_CUDNN)
+ caffe_status(" cuDNN : " HAVE_CUDNN THEN "Yes" ELSE "Not found")
+ else()
+ caffe_status(" cuDNN : Disabled")
+ endif()
+ caffe_status("")
+ endif()
+ if(HAVE_PYTHON)
+ caffe_status("Python:")
+ caffe_status(" Interpreter :" PYTHON_EXECUTABLE THEN "${PYTHON_EXECUTABLE} (ver. ${PYTHON_VERSION_STRING})" ELSE "No")
+ caffe_status(" Libraries :" PYTHONLIBS_FOUND THEN "${PYTHON_LIBRARIES} (ver ${PYTHONLIBS_VERSION_STRING})" ELSE "No")
+ caffe_status(" NumPy :" NUMPY_FOUND THEN "${NUMPY_INCLUDE_DIR} (ver ${NUMPY_VERSION})" ELSE "No")
+ caffe_status("")
+ endif()
+ if(BUILD_matlab)
+ caffe_status("Matlab:")
+ caffe_status(" Matlab :" HAVE_MATLAB THEN "Yes (${Matlab_mex}, ${Matlab_mexext}" ELSE "No")
+ caffe_status(" Octave :" Octave_compiler THEN "Yes (${Octave_compiler})" ELSE "No")
+ if(HAVE_MATLAB AND Octave_compiler)
+ caffe_status(" Build mex using : ${Matlab_build_mex_using}")
+ endif()
+ caffe_status("")
+ endif()
+ if(BUILD_docs)
+ caffe_status("Documentaion:")
+ caffe_status(" Doxygen :" DOXYGEN_FOUND THEN "${DOXYGEN_EXECUTABLE} (${DOXYGEN_VERSION})" ELSE "No")
+ caffe_status(" config_file : ${DOXYGEN_config_file}")
+
+ caffe_status("")
+ endif()
+ caffe_status("Install:")
+ caffe_status(" Install path : ${CMAKE_INSTALL_PREFIX}")
+ caffe_status("")
+endfunction()
+
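The three call forms of `caffe_status()` are easiest to see side by side; a minimal sketch:

```cmake
caffe_status("Flags:")                                            # plain text
caffe_status("  C++ compiler :" ${CMAKE_CXX_COMPILER})            # aligned key/value
caffe_status("  CUDA         :" HAVE_CUDA THEN "Yes" ELSE "No")   # condition-selected text
```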
diff --git a/caffe-crfrnn/cmake/Targets.cmake b/caffe-crfrnn/cmake/Targets.cmake
new file mode 100644
index 00000000..4fc9456e
--- /dev/null
+++ b/caffe-crfrnn/cmake/Targets.cmake
@@ -0,0 +1,173 @@
+################################################################################################
+# Defines the global Caffe_LINK flag. This flag is required to prevent the linker from excluding
+# objects that are not addressed directly but are registered via static constructors
+if(BUILD_SHARED_LIBS)
+ set(Caffe_LINK caffe)
+else()
+ if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
+ set(Caffe_LINK -Wl,-force_load caffe)
+ elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
+ set(Caffe_LINK -Wl,--whole-archive caffe -Wl,--no-whole-archive)
+ endif()
+endif()
+
+################################################################################################
+# Convenient command to setup source group for IDEs that support this feature (VS, XCode)
+# Usage:
+# caffe_source_group(<group> GLOB[_RECURSE] <globbing_expression>)
+function(caffe_source_group group)
+ cmake_parse_arguments(CAFFE_SOURCE_GROUP "" "" "GLOB;GLOB_RECURSE" ${ARGN})
+ if(CAFFE_SOURCE_GROUP_GLOB)
+ file(GLOB srcs1 ${CAFFE_SOURCE_GROUP_GLOB})
+ source_group(${group} FILES ${srcs1})
+ endif()
+
+ if(CAFFE_SOURCE_GROUP_GLOB_RECURSE)
+ file(GLOB_RECURSE srcs2 ${CAFFE_SOURCE_GROUP_GLOB_RECURSE})
+ source_group(${group} FILES ${srcs2})
+ endif()
+endfunction()
+
+################################################################################################
+# Collecting sources from globbing and appending to output list variable
+# Usage:
+# caffe_collect_sources(<output_variable> GLOB[_RECURSE] <globbing_expression>)
+function(caffe_collect_sources variable)
+ cmake_parse_arguments(CAFFE_COLLECT_SOURCES "" "" "GLOB;GLOB_RECURSE" ${ARGN})
+ if(CAFFE_COLLECT_SOURCES_GLOB)
+ file(GLOB srcs1 ${CAFFE_COLLECT_SOURCES_GLOB})
+ list(APPEND ${variable} ${srcs1})
+ endif()
+
+ if(CAFFE_COLLECT_SOURCES_GLOB_RECURSE)
+ file(GLOB_RECURSE srcs2 ${CAFFE_COLLECT_SOURCES_GLOB_RECURSE})
+ list(APPEND ${variable} ${srcs2})
+ endif()
+
+ # propagate the accumulated list to the caller's scope
+ set(${variable} ${${variable}} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Short command getting caffe sources (assuming standard Caffe code tree)
+# Usage:
+# caffe_pickup_caffe_sources(<root>)
+function(caffe_pickup_caffe_sources root)
+ # put all files in source groups (visible as subfolder in many IDEs)
+ caffe_source_group("Include" GLOB "${root}/include/caffe/*.h*")
+ caffe_source_group("Include\\Util" GLOB "${root}/include/caffe/util/*.h*")
+ caffe_source_group("Include" GLOB "${PROJECT_BINARY_DIR}/caffe_config.h*")
+ caffe_source_group("Source" GLOB "${root}/src/caffe/*.cpp")
+ caffe_source_group("Source\\Util" GLOB "${root}/src/caffe/util/*.cpp")
+ caffe_source_group("Source\\Layers" GLOB "${root}/src/caffe/layers/*.cpp")
+ caffe_source_group("Source\\Cuda" GLOB "${root}/src/caffe/layers/*.cu")
+ caffe_source_group("Source\\Cuda" GLOB "${root}/src/caffe/util/*.cu")
+ caffe_source_group("Source\\Proto" GLOB "${root}/src/caffe/proto/*.proto")
+
+ # source groups for test target
+ caffe_source_group("Include" GLOB "${root}/include/caffe/test/test_*.h*")
+ caffe_source_group("Source" GLOB "${root}/src/caffe/test/test_*.cpp")
+ caffe_source_group("Source\\Cuda" GLOB "${root}/src/caffe/test/test_*.cu")
+
+ # collect files
+ file(GLOB test_hdrs ${root}/include/caffe/test/test_*.h*)
+ file(GLOB test_srcs ${root}/src/caffe/test/test_*.cpp)
+ file(GLOB_RECURSE hdrs ${root}/include/caffe/*.h*)
+ file(GLOB_RECURSE srcs ${root}/src/caffe/*.cpp)
+ list(REMOVE_ITEM hdrs ${test_hdrs})
+ list(REMOVE_ITEM srcs ${test_srcs})
+
+ # adding headers to make them visible in some IDEs (Qt, VS, Xcode)
+ list(APPEND srcs ${hdrs} ${PROJECT_BINARY_DIR}/caffe_config.h)
+ list(APPEND test_srcs ${test_hdrs})
+
+ # collect cuda files
+ file(GLOB test_cuda ${root}/src/caffe/test/test_*.cu)
+ file(GLOB_RECURSE cuda ${root}/src/caffe/*.cu)
+ list(REMOVE_ITEM cuda ${test_cuda})
+
+ # add proto to make them editable in IDEs too
+ file(GLOB_RECURSE proto_files ${root}/src/caffe/*.proto)
+ list(APPEND srcs ${proto_files})
+
+ # convert to absolute paths
+ caffe_convert_absolute_paths(srcs)
+ caffe_convert_absolute_paths(cuda)
+ caffe_convert_absolute_paths(test_srcs)
+ caffe_convert_absolute_paths(test_cuda)
+
+ # propagate to parent scope
+ set(srcs ${srcs} PARENT_SCOPE)
+ set(cuda ${cuda} PARENT_SCOPE)
+ set(test_srcs ${test_srcs} PARENT_SCOPE)
+ set(test_cuda ${test_cuda} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Short command for setting default target properties
+# Usage:
+# caffe_default_properties(<target>)
+function(caffe_default_properties target)
+ set_target_properties(${target} PROPERTIES
+ DEBUG_POSTFIX ${Caffe_DEBUG_POSTFIX}
+ ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/lib"
+ LIBRARY_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/lib"
+ RUNTIME_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/bin")
+ # make sure we build all external dependencies first
+ if (DEFINED external_project_dependencies)
+ add_dependencies(${target} ${external_project_dependencies})
+ endif()
+endfunction()
+
+################################################################################################
+# Short command for setting runtime directory for build target
+# Usage:
+# caffe_set_runtime_directory(<target> <output_directory>)
+function(caffe_set_runtime_directory target dir)
+ set_target_properties(${target} PROPERTIES
+ RUNTIME_OUTPUT_DIRECTORY "${dir}")
+endfunction()
+
+################################################################################################
+# Short command for setting solution folder property for target
+# Usage:
+# caffe_set_solution_folder(<target> <folder>)
+function(caffe_set_solution_folder target folder)
+ if(USE_PROJECT_FOLDERS)
+ set_target_properties(${target} PROPERTIES FOLDER "${folder}")
+ endif()
+endfunction()
+
+################################################################################################
+# Reads lines from input file, prepends source directory to each line and writes to output file
+# Usage:
+# caffe_configure_testdatafile(<testdata_file>)
+function(caffe_configure_testdatafile file)
+ file(STRINGS ${file} __lines)
+ set(result "")
+ foreach(line ${__lines})
+ set(result "${result}${PROJECT_SOURCE_DIR}/${line}\n")
+ endforeach()
+ file(WRITE ${file}.gen.cmake ${result})
+endfunction()
+
+################################################################################################
+# Filter out all files that are not included in selected list
+# Usage:
+# caffe_leave_only_selected_tests(<filelist_variable> <selected_tests_list>)
+function(caffe_leave_only_selected_tests file_list)
+ if(NOT ARGN)
+ return() # blank list means leave all
+ endif()
+ string(REPLACE "," ";" __selected ${ARGN})
+ list(APPEND __selected caffe_main)
+
+ set(result "")
+ foreach(f ${${file_list}})
+ get_filename_component(name ${f} NAME_WE)
+ string(REGEX REPLACE "^test_" "" name ${name})
+ list(FIND __selected ${name} __index)
+ if(NOT __index EQUAL -1)
+ list(APPEND result ${f})
+ endif()
+ endforeach()
+ set(${file_list} ${result} PARENT_SCOPE)
+endfunction()
+
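As a usage sketch for `caffe_leave_only_selected_tests()`: given a glob of test sources, a comma-separated selection keeps only the named tests (plus `caffe_main`, which the function always appends). The selection string here is illustrative:

```cmake
file(GLOB test_srcs ${PROJECT_SOURCE_DIR}/src/caffe/test/test_*.cpp)
caffe_leave_only_selected_tests(test_srcs "convolution_layer,pooling_layer")
# test_srcs now holds only test_convolution_layer.cpp, test_pooling_layer.cpp,
# and test_caffe_main.cpp (if present).
```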
diff --git a/caffe-crfrnn/cmake/Templates/CaffeConfig.cmake.in b/caffe-crfrnn/cmake/Templates/CaffeConfig.cmake.in
new file mode 100644
index 00000000..8f23742e
--- /dev/null
+++ b/caffe-crfrnn/cmake/Templates/CaffeConfig.cmake.in
@@ -0,0 +1,58 @@
+# Config file for the Caffe package.
+#
+# Note:
+# Caffe and this config file depend on OpenCV,
+# so put `find_package(OpenCV)` before searching for Caffe
+# via `find_package(Caffe)`. All other lib/include
+# dependencies are hard-coded in this file
+#
+# After successful configuration the following variables
+# will be defined:
+#
+# Caffe_INCLUDE_DIRS - Caffe include directories
+# Caffe_LIBRARIES - libraries to link against
+# Caffe_DEFINITIONS - a list of definitions to pass to compiler
+#
+# Caffe_HAVE_CUDA - signals about CUDA support
+# Caffe_HAVE_CUDNN - signals about cuDNN support
+
+
+# OpenCV dependency
+
+if(NOT OpenCV_FOUND)
+ set(Caffe_OpenCV_CONFIG_PATH "@OpenCV_CONFIG_PATH@")
+ if(Caffe_OpenCV_CONFIG_PATH)
+ get_filename_component(Caffe_OpenCV_CONFIG_PATH ${Caffe_OpenCV_CONFIG_PATH} ABSOLUTE)
+
+ if(EXISTS ${Caffe_OpenCV_CONFIG_PATH} AND NOT TARGET opencv_core)
+ message(STATUS "Caffe: using OpenCV config from ${Caffe_OpenCV_CONFIG_PATH}")
+ include(${Caffe_OpenCV_CONFIG_PATH}/OpenCVModules.cmake)
+ endif()
+
+ else()
+ find_package(OpenCV REQUIRED)
+ endif()
+ unset(Caffe_OpenCV_CONFIG_PATH)
+endif()
+
+# Compute paths
+get_filename_component(Caffe_CMAKE_DIR "${CMAKE_CURRENT_LIST_FILE}" PATH)
+set(Caffe_INCLUDE_DIRS "@Caffe_INCLUDE_DIRS@")
+
+@Caffe_INSTALL_INCLUDE_DIR_APPEND_COMMAND@
+
+# Our library dependencies
+if(NOT TARGET caffe AND NOT caffe_BINARY_DIR)
+ include("${Caffe_CMAKE_DIR}/CaffeTargets.cmake")
+endif()
+
+# List of IMPORTED libs created by CaffeTargets.cmake
+set(Caffe_LIBRARIES caffe)
+
+# Definitions
+set(Caffe_DEFINITIONS "@Caffe_DEFINITIONS@")
+
+# Cuda support variables
+set(Caffe_CPU_ONLY @CPU_ONLY@)
+set(Caffe_HAVE_CUDA @HAVE_CUDA@)
+set(Caffe_HAVE_CUDNN @HAVE_CUDNN@)
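A downstream project would consume this config roughly as follows; per the note at the top of the file, OpenCV must be found first. The `classifier` target and source are placeholders:

```cmake
find_package(OpenCV REQUIRED)   # before find_package(Caffe), as noted above
find_package(Caffe REQUIRED)

include_directories(${Caffe_INCLUDE_DIRS})
add_definitions(${Caffe_DEFINITIONS})

add_executable(classifier classifier.cpp)
target_link_libraries(classifier ${Caffe_LIBRARIES})
```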
diff --git a/caffe-crfrnn/cmake/Templates/CaffeConfigVersion.cmake.in b/caffe-crfrnn/cmake/Templates/CaffeConfigVersion.cmake.in
new file mode 100644
index 00000000..19f85309
--- /dev/null
+++ b/caffe-crfrnn/cmake/Templates/CaffeConfigVersion.cmake.in
@@ -0,0 +1,11 @@
+set(PACKAGE_VERSION "@Caffe_VERSION@")
+
+# Check whether the requested PACKAGE_FIND_VERSION is compatible
+if("${PACKAGE_VERSION}" VERSION_LESS "${PACKAGE_FIND_VERSION}")
+ set(PACKAGE_VERSION_COMPATIBLE FALSE)
+else()
+ set(PACKAGE_VERSION_COMPATIBLE TRUE)
+ if ("${PACKAGE_VERSION}" VERSION_EQUAL "${PACKAGE_FIND_VERSION}")
+ set(PACKAGE_VERSION_EXACT TRUE)
+ endif()
+endif()
diff --git a/caffe-crfrnn/cmake/Templates/caffe_config.h.in b/caffe-crfrnn/cmake/Templates/caffe_config.h.in
new file mode 100644
index 00000000..6039e8f6
--- /dev/null
+++ b/caffe-crfrnn/cmake/Templates/caffe_config.h.in
@@ -0,0 +1,32 @@
+/* Sources directory */
+#define SOURCE_FOLDER "${PROJECT_SOURCE_DIR}"
+
+/* Binaries directory */
+#define BINARY_FOLDER "${PROJECT_BINARY_DIR}"
+
+/* NVIDIA CUDA */
+#cmakedefine HAVE_CUDA
+
+/* NVIDIA cuDNN */
+#cmakedefine HAVE_CUDNN
+#cmakedefine USE_CUDNN
+
+/* CPU-only build (no GPU support) */
+#cmakedefine CPU_ONLY
+
+/* Test device */
+#define CUDA_TEST_DEVICE ${CUDA_TEST_DEVICE}
+
+/* Temporary (TODO: remove) */
+#if 1
+ #define CMAKE_SOURCE_DIR SOURCE_FOLDER "/src/"
+ #define EXAMPLES_SOURCE_DIR BINARY_FOLDER "/examples/"
+ #define CMAKE_EXT ".gen.cmake"
+#else
+ #define CMAKE_SOURCE_DIR "src/"
+ #define EXAMPLES_SOURCE_DIR "examples/"
+ #define CMAKE_EXT ""
+#endif
+
+/* Matlab */
+#cmakedefine HAVE_MATLAB
diff --git a/caffe-crfrnn/cmake/Utils.cmake b/caffe-crfrnn/cmake/Utils.cmake
new file mode 100644
index 00000000..a1bde1ae
--- /dev/null
+++ b/caffe-crfrnn/cmake/Utils.cmake
@@ -0,0 +1,381 @@
+################################################################################################
+# Command alias for debugging messages
+# Usage:
+# dmsg(<message>)
+function(dmsg)
+ message(STATUS ${ARGN})
+endfunction()
+
+################################################################################################
+# Removes duplicates from list(s)
+# Usage:
+# caffe_list_unique(<list_variable> [<list_variable>] [...])
+macro(caffe_list_unique)
+ foreach(__lst ${ARGN})
+ if(${__lst})
+ list(REMOVE_DUPLICATES ${__lst})
+ endif()
+ endforeach()
+endmacro()
+
+################################################################################################
+# Clears variables from list
+# Usage:
+# caffe_clear_vars(<variables_list>)
+macro(caffe_clear_vars)
+ foreach(_var ${ARGN})
+ unset(${_var})
+ endforeach()
+endmacro()
+
+################################################################################################
+# Removes duplicates from string
+# Usage:
+# caffe_string_unique(<string_variable>)
+function(caffe_string_unique __string)
+ if(${__string})
+ set(__list ${${__string}})
+ separate_arguments(__list)
+ list(REMOVE_DUPLICATES __list)
+ foreach(__e ${__list})
+ set(__str "${__str} ${__e}")
+ endforeach()
+ set(${__string} ${__str} PARENT_SCOPE)
+ endif()
+endfunction()
+
+################################################################################################
+# Prints list element per line
+# Usage:
+# caffe_print_list(<list>)
+function(caffe_print_list)
+ foreach(e ${ARGN})
+ message(STATUS ${e})
+ endforeach()
+endfunction()
+
+################################################################################################
+# Function merging lists of compiler flags to single string.
+# Usage:
+# caffe_merge_flag_lists(out_variable <list1> [<list2>] ...)
+function(caffe_merge_flag_lists out_var)
+ set(__result "")
+ foreach(__list ${ARGN})
+ foreach(__flag ${${__list}})
+ string(STRIP ${__flag} __flag)
+ set(__result "${__result} ${__flag}")
+ endforeach()
+ endforeach()
+ string(STRIP ${__result} __result)
+ set(${out_var} ${__result} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Converts all paths in list to absolute
+# Usage:
+# caffe_convert_absolute_paths(<list_variable>)
+function(caffe_convert_absolute_paths variable)
+ set(__list "")
+ foreach(__s ${${variable}})
+ get_filename_component(__abspath ${__s} ABSOLUTE)
+ list(APPEND __list ${__abspath})
+ endforeach()
+ set(${variable} ${__list} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Reads set of version defines from the header file
+# Usage:
+# caffe_parse_header(<file> <lines_variable> <define1> <define2> ...)
+macro(caffe_parse_header FILENAME FILE_VAR)
+ set(vars_regex "")
+ set(__parent_scope OFF)
+ set(__add_cache OFF)
+ foreach(name ${ARGN})
+ if("${name}" STREQUAL "PARENT_SCOPE")
+ set(__parent_scope ON)
+ elseif("${name}" STREQUAL "CACHE")
+ set(__add_cache ON)
+ elseif(vars_regex)
+ set(vars_regex "${vars_regex}|${name}")
+ else()
+ set(vars_regex "${name}")
+ endif()
+ endforeach()
+ if(EXISTS "${FILENAME}")
+ file(STRINGS "${FILENAME}" ${FILE_VAR} REGEX "#define[ \t]+(${vars_regex})[ \t]+[0-9]+" )
+ else()
+ unset(${FILE_VAR})
+ endif()
+ foreach(name ${ARGN})
+ if(NOT "${name}" STREQUAL "PARENT_SCOPE" AND NOT "${name}" STREQUAL "CACHE")
+ if(${FILE_VAR})
+ if(${FILE_VAR} MATCHES ".+[ \t]${name}[ \t]+([0-9]+).*")
+ string(REGEX REPLACE ".+[ \t]${name}[ \t]+([0-9]+).*" "\\1" ${name} "${${FILE_VAR}}")
+ else()
+ set(${name} "")
+ endif()
+ if(__add_cache)
+ set(${name} ${${name}} CACHE INTERNAL "${name} parsed from ${FILENAME}" FORCE)
+ elseif(__parent_scope)
+ set(${name} "${${name}}" PARENT_SCOPE)
+ endif()
+ else()
+ unset(${name} CACHE)
+ endif()
+ endif()
+ endforeach()
+endmacro()
+
+################################################################################################
+# Reads single version define from the header file and parses it
+# Usage:
+# caffe_parse_header_single_define(<library_name> <file> <define_name>)
+function(caffe_parse_header_single_define LIBNAME HDR_PATH VARNAME)
+ set(${LIBNAME}_H "")
+ if(EXISTS "${HDR_PATH}")
+ file(STRINGS "${HDR_PATH}" ${LIBNAME}_H REGEX "^#define[ \t]+${VARNAME}[ \t]+\"[^\"]*\".*$" LIMIT_COUNT 1)
+ endif()
+
+ if(${LIBNAME}_H)
+ string(REGEX REPLACE "^.*[ \t]${VARNAME}[ \t]+\"([0-9]+).*$" "\\1" ${LIBNAME}_VERSION_MAJOR "${${LIBNAME}_H}")
+ string(REGEX REPLACE "^.*[ \t]${VARNAME}[ \t]+\"[0-9]+\\.([0-9]+).*$" "\\1" ${LIBNAME}_VERSION_MINOR "${${LIBNAME}_H}")
+ string(REGEX REPLACE "^.*[ \t]${VARNAME}[ \t]+\"[0-9]+\\.[0-9]+\\.([0-9]+).*$" "\\1" ${LIBNAME}_VERSION_PATCH "${${LIBNAME}_H}")
+ set(${LIBNAME}_VERSION_MAJOR ${${LIBNAME}_VERSION_MAJOR} ${ARGN} PARENT_SCOPE)
+ set(${LIBNAME}_VERSION_MINOR ${${LIBNAME}_VERSION_MINOR} ${ARGN} PARENT_SCOPE)
+ set(${LIBNAME}_VERSION_PATCH ${${LIBNAME}_VERSION_PATCH} ${ARGN} PARENT_SCOPE)
+ set(${LIBNAME}_VERSION_STRING "${${LIBNAME}_VERSION_MAJOR}.${${LIBNAME}_VERSION_MINOR}.${${LIBNAME}_VERSION_PATCH}" PARENT_SCOPE)
+
+ # append a TWEAK version if it exists:
+ set(${LIBNAME}_VERSION_TWEAK "")
+ if("${${LIBNAME}_H}" MATCHES "^.*[ \t]${VARNAME}[ \t]+\"[0-9]+\\.[0-9]+\\.[0-9]+\\.([0-9]+).*$")
+ set(${LIBNAME}_VERSION_TWEAK "${CMAKE_MATCH_1}" ${ARGN} PARENT_SCOPE)
+ endif()
+ if(${LIBNAME}_VERSION_TWEAK)
+ set(${LIBNAME}_VERSION_STRING "${${LIBNAME}_VERSION_STRING}.${${LIBNAME}_VERSION_TWEAK}" ${ARGN} PARENT_SCOPE)
+ else()
+ set(${LIBNAME}_VERSION_STRING "${${LIBNAME}_VERSION_STRING}" ${ARGN} PARENT_SCOPE)
+ endif()
+ endif()
+endfunction()
+
+########################################################################################################
+# An option that the user can select. Can accept condition to control when option is available for user.
+# Usage:
+# caffe_option(<option_variable> "doc string" <initial value or boolean expression> [IF <condition>])
+function(caffe_option variable description value)
+ set(__value ${value})
+ set(__condition "")
+ set(__varname "__value")
+ foreach(arg ${ARGN})
+ if(arg STREQUAL "IF" OR arg STREQUAL "if")
+ set(__varname "__condition")
+ else()
+ list(APPEND ${__varname} ${arg})
+ endif()
+ endforeach()
+ unset(__varname)
+ if("${__condition}" STREQUAL "")
+ set(__condition 2 GREATER 1)
+ endif()
+
+ if(${__condition})
+ if("${__value}" MATCHES ";")
+ if(${__value})
+ option(${variable} "${description}" ON)
+ else()
+ option(${variable} "${description}" OFF)
+ endif()
+ elseif(DEFINED ${__value})
+ if(${__value})
+ option(${variable} "${description}" ON)
+ else()
+ option(${variable} "${description}" OFF)
+ endif()
+ else()
+ option(${variable} "${description}" ${__value})
+ endif()
+ else()
+ unset(${variable} CACHE)
+ endif()
+endfunction()
+
+################################################################################################
+# Utility function for comparing two lists. Used for CMake debugging purposes
+# Usage:
+# caffe_compare_lists(<list1_variable> <list2_variable> [description])
+function(caffe_compare_lists list1 list2 desc)
+ set(__list1 ${${list1}})
+ set(__list2 ${${list2}})
+ list(SORT __list1)
+ list(SORT __list2)
+ list(LENGTH __list1 __len1)
+ list(LENGTH __list2 __len2)
+
+ if(NOT ${__len1} EQUAL ${__len2})
+ message(FATAL_ERROR "Lists are not equal. ${__len1} != ${__len2}. ${desc}")
+ endif()
+
+ foreach(__i RANGE 1 ${__len1})
+ math(EXPR __index "${__i}- 1")
+ list(GET __list1 ${__index} __item1)
+ list(GET __list2 ${__index} __item2)
+ if(NOT ${__item1} STREQUAL ${__item2})
+ message(FATAL_ERROR "Lists are not equal. Differ at element ${__index}. ${desc}")
+ endif()
+ endforeach()
+endfunction()
+
+################################################################################################
+# Command for disabling warnings for different platforms (see below for gcc and VisualStudio)
+# Usage:
+# caffe_warnings_disable(<CMAKE_[C|CXX]_FLAGS[_CONFIGURATION]> -Wshadow /wd4996 ...)
+macro(caffe_warnings_disable)
+ set(_flag_vars "")
+ set(_msvc_warnings "")
+ set(_gxx_warnings "")
+
+ foreach(arg ${ARGN})
+ if(arg MATCHES "^CMAKE_")
+ list(APPEND _flag_vars ${arg})
+ elseif(arg MATCHES "^/wd")
+ list(APPEND _msvc_warnings ${arg})
+ elseif(arg MATCHES "^-W")
+ list(APPEND _gxx_warnings ${arg})
+ endif()
+ endforeach()
+
+ if(NOT _flag_vars)
+ set(_flag_vars CMAKE_C_FLAGS CMAKE_CXX_FLAGS)
+ endif()
+
+ if(MSVC AND _msvc_warnings)
+ foreach(var ${_flag_vars})
+ foreach(warning ${_msvc_warnings})
+ set(${var} "${${var}} ${warning}")
+ endforeach()
+ endforeach()
+ elseif((CMAKE_COMPILER_IS_GNUCXX OR CMAKE_COMPILER_IS_CLANGXX) AND _gxx_warnings)
+ foreach(var ${_flag_vars})
+ foreach(warning ${_gxx_warnings})
+ if(NOT warning MATCHES "^-Wno-")
+ string(REPLACE "${warning}" "" ${var} "${${var}}")
+ string(REPLACE "-W" "-Wno-" warning "${warning}")
+ endif()
+ set(${var} "${${var}} ${warning}")
+ endforeach()
+ endforeach()
+ endif()
+ caffe_clear_vars(_flag_vars _msvc_warnings _gxx_warnings)
+endmacro()
+
+################################################################################################
+# Helper function to get current definitions
+# Usage:
+# caffe_get_current_definitions(<definitions_variable>)
+function(caffe_get_current_definitions definitions_var)
+ get_property(current_definitions DIRECTORY PROPERTY COMPILE_DEFINITIONS)
+ set(result "")
+
+ foreach(d ${current_definitions})
+ list(APPEND result -D${d})
+ endforeach()
+
+ caffe_list_unique(result)
+ set(${definitions_var} ${result} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Helper function to get current includes/definitions
+# Usage:
+# caffe_get_current_cflags(<cflags_variable>)
+function(caffe_get_current_cflags cflags_var)
+ get_property(current_includes DIRECTORY PROPERTY INCLUDE_DIRECTORIES)
+ caffe_convert_absolute_paths(current_includes)
+ caffe_get_current_definitions(cflags)
+
+ foreach(i ${current_includes})
+ list(APPEND cflags "-I${i}")
+ endforeach()
+
+ caffe_list_unique(cflags)
+ set(${cflags_var} ${cflags} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Helper function to parse current linker libs into link directories, libflags and osx frameworks
+# Usage:
+# caffe_parse_linker_libs(<Caffe_LINKER_LIBS_variable> <directories_variable> <flags_variable> <frameworks_variable>)
+function(caffe_parse_linker_libs Caffe_LINKER_LIBS_variable folders_var flags_var frameworks_var)
+
+ set(__unspec "")
+ set(__debug "")
+ set(__optimized "")
+ set(__framework "")
+ set(__varname "__unspec")
+
+ # split libs into debug, optimized, unspecified and frameworks
+ foreach(list_elem ${${Caffe_LINKER_LIBS_variable}})
+ if(list_elem STREQUAL "debug")
+ set(__varname "__debug")
+ elseif(list_elem STREQUAL "optimized")
+ set(__varname "__optimized")
+ elseif(list_elem MATCHES "^-framework[ \t]+([^ \t].*)")
+ list(APPEND __framework -framework ${CMAKE_MATCH_1})
+ else()
+ list(APPEND ${__varname} ${list_elem})
+ set(__varname "__unspec")
+ endif()
+ endforeach()
+
+ # attach debug or optimized libs to unspecified according to current configuration
+ if(CMAKE_BUILD_TYPE MATCHES "Debug")
+ set(__libs ${__unspec} ${__debug})
+ else()
+ set(__libs ${__unspec} ${__optimized})
+ endif()
+
+ set(libflags "")
+ set(folders "")
+
+ # convert linker libraries list to link flags
+ foreach(lib ${__libs})
+ if(TARGET ${lib})
+ list(APPEND folders $<TARGET_LINKER_FILE_DIR:${lib}>)
+ list(APPEND libflags -l${lib})
+ elseif(lib MATCHES "^-l.*")
+ list(APPEND libflags ${lib})
+ elseif(IS_ABSOLUTE ${lib})
+ get_filename_component(name_we ${lib} NAME_WE)
+ get_filename_component(folder ${lib} PATH)
+
+ string(REGEX MATCH "^lib(.*)" __match ${name_we})
+ list(APPEND libflags -l${CMAKE_MATCH_1})
+ list(APPEND folders ${folder})
+ else()
+ message(FATAL_ERROR "Logic error. Need to update cmake script")
+ endif()
+ endforeach()
+
+ caffe_list_unique(libflags folders)
+
+ set(${folders_var} ${folders} PARENT_SCOPE)
+ set(${flags_var} ${libflags} PARENT_SCOPE)
+ set(${frameworks_var} ${__framework} PARENT_SCOPE)
+endfunction()
+
+################################################################################################
+# Helper function to detect Darwin version, i.e. 10.8, 10.9, 10.10, ....
+# Usage:
+# caffe_detect_darwin_version(<version_variable>)
+function(caffe_detect_darwin_version output_var)
+ if(APPLE)
+ execute_process(COMMAND /usr/bin/sw_vers -productVersion
+ RESULT_VARIABLE __sw_vers OUTPUT_VARIABLE __sw_vers_out
+ ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
+
+ set(${output_var} ${__sw_vers_out} PARENT_SCOPE)
+ else()
+ set(${output_var} "" PARENT_SCOPE)
+ endif()
+endfunction()
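Two of these helpers carry most of the build logic elsewhere in the tree; a short sketch of their call patterns (the header path and `SOMELIB_*` names are illustrative, not taken from this patch):

```cmake
# A gated option: offered only while the condition holds, otherwise dropped from the cache.
caffe_option(USE_CUDNN "Build with cuDNN library support" ON IF NOT CPU_ONLY)

# Pull numeric #define values out of a header into like-named CMake variables.
caffe_parse_header("${PROJECT_SOURCE_DIR}/include/somelib/version.h"
                   SOMELIB_VERSION_LINES SOMELIB_MAJOR SOMELIB_MINOR SOMELIB_PATCH)
message(STATUS "somelib ${SOMELIB_MAJOR}.${SOMELIB_MINOR}.${SOMELIB_PATCH}")
```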
diff --git a/caffe-crfrnn/cmake/lint.cmake b/caffe-crfrnn/cmake/lint.cmake
new file mode 100644
index 00000000..70a00657
--- /dev/null
+++ b/caffe-crfrnn/cmake/lint.cmake
@@ -0,0 +1,50 @@
+
+set(CMAKE_SOURCE_DIR ..)
+set(LINT_COMMAND ${CMAKE_SOURCE_DIR}/scripts/cpp_lint.py)
+set(SRC_FILE_EXTENSIONS h hpp hu c cpp cu cc)
+set(EXCLUDE_FILE_EXTENSIONS pb.h pb.cc)
+set(LINT_DIRS include src/caffe examples tools python matlab)
+
+cmake_policy(SET CMP0009 NEW) # suppress cmake warning
+
+# find all files of interest
+foreach(ext ${SRC_FILE_EXTENSIONS})
+ foreach(dir ${LINT_DIRS})
+ file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/${dir}/*.${ext})
+ set(LINT_SOURCES ${LINT_SOURCES} ${FOUND_FILES})
+ endforeach()
+endforeach()
+
+# find all files that should be excluded
+foreach(ext ${EXCLUDE_FILE_EXTENSIONS})
+ file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/*.${ext})
+ set(EXCLUDED_FILES ${EXCLUDED_FILES} ${FOUND_FILES})
+endforeach()
+
+# exclude generated pb files
+list(REMOVE_ITEM LINT_SOURCES ${EXCLUDED_FILES})
+
+execute_process(
+ COMMAND ${LINT_COMMAND} ${LINT_SOURCES}
+ ERROR_VARIABLE LINT_OUTPUT
+ ERROR_STRIP_TRAILING_WHITESPACE
+)
+
+string(REPLACE "\n" ";" LINT_OUTPUT ${LINT_OUTPUT})
+
+list(GET LINT_OUTPUT -1 LINT_RESULT)
+list(REMOVE_AT LINT_OUTPUT -1)
+string(REPLACE " " ";" LINT_RESULT ${LINT_RESULT})
+list(GET LINT_RESULT -1 NUM_ERRORS)
+if(NUM_ERRORS GREATER 0)
+ foreach(msg ${LINT_OUTPUT})
+ string(FIND ${msg} "Done" result)
+ if(result LESS 0)
+ message(STATUS ${msg})
+ endif()
+ endforeach()
+ message(FATAL_ERROR "Lint found ${NUM_ERRORS} errors!")
+else()
+ message(STATUS "Lint did not find any errors!")
+endif()
+
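The script is written for CMake's script mode; the usual wiring is a custom target along these lines (the `lint` target name is an assumption, mirroring common Caffe practice):

```cmake
add_custom_target(lint
  COMMAND ${CMAKE_COMMAND} -P ${PROJECT_SOURCE_DIR}/cmake/lint.cmake)
```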
diff --git a/caffe-crfrnn/data/cifar10/get_cifar10.sh b/caffe-crfrnn/data/cifar10/get_cifar10.sh
new file mode 100755
index 00000000..623c8485
--- /dev/null
+++ b/caffe-crfrnn/data/cifar10/get_cifar10.sh
@@ -0,0 +1,19 @@
+#!/usr/bin/env sh
+# This script downloads the CIFAR10 (binary version) data and unzips it.
+
+DIR="$( cd "$(dirname "$0")" ; pwd -P )"
+cd $DIR
+
+echo "Downloading..."
+
+wget --no-check-certificate http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
+
+echo "Unzipping..."
+
+tar -xf cifar-10-binary.tar.gz && rm -f cifar-10-binary.tar.gz
+mv cifar-10-batches-bin/* . && rm -rf cifar-10-batches-bin
+
+# Creation is split out because leveldb sometimes causes segfault
+# and needs to be re-created.
+
+echo "Done."
diff --git a/caffe-crfrnn/data/ilsvrc12/get_ilsvrc_aux.sh b/caffe-crfrnn/data/ilsvrc12/get_ilsvrc_aux.sh
new file mode 100755
index 00000000..b9b85d21
--- /dev/null
+++ b/caffe-crfrnn/data/ilsvrc12/get_ilsvrc_aux.sh
@@ -0,0 +1,21 @@
+#!/usr/bin/env sh
+#
+# N.B. This does not download the ILSVRC12 data set, as it is gargantuan.
+# This script downloads the imagenet example auxiliary files including:
+# - the ilsvrc12 image mean, binaryproto
+# - synset ids and words
+# - Python pickle-format data of ImageNet graph structure and relative infogain
+# - the training splits with labels
+
+DIR="$( cd "$(dirname "$0")" ; pwd -P )"
+cd $DIR
+
+echo "Downloading..."
+
+wget http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
+
+echo "Unzipping..."
+
+tar -xf caffe_ilsvrc12.tar.gz && rm -f caffe_ilsvrc12.tar.gz
+
+echo "Done."
diff --git a/caffe-crfrnn/data/mnist/get_mnist.sh b/caffe-crfrnn/data/mnist/get_mnist.sh
new file mode 100755
index 00000000..8eb6aeed
--- /dev/null
+++ b/caffe-crfrnn/data/mnist/get_mnist.sh
@@ -0,0 +1,24 @@
+#!/usr/bin/env sh
+# This script downloads the MNIST data and unzips it.
+
+DIR="$( cd "$(dirname "$0")" ; pwd -P )"
+cd $DIR
+
+echo "Downloading..."
+
+wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
+wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
+wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
+wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
+
+echo "Unzipping..."
+
+gunzip train-images-idx3-ubyte.gz
+gunzip train-labels-idx1-ubyte.gz
+gunzip t10k-images-idx3-ubyte.gz
+gunzip t10k-labels-idx1-ubyte.gz
+
+# Creation is split out because leveldb sometimes causes segfault
+# and needs to be re-created.
+
+echo "Done."
diff --git a/caffe-crfrnn/docs/CMakeLists.txt b/caffe-crfrnn/docs/CMakeLists.txt
new file mode 100644
index 00000000..ae47e461
--- /dev/null
+++ b/caffe-crfrnn/docs/CMakeLists.txt
@@ -0,0 +1,106 @@
+# Building docs script
+# Requirements:
+# sudo apt-get install doxygen texlive ruby-dev
+# sudo gem install jekyll execjs therubyracer
+
+if(NOT BUILD_docs OR NOT DOXYGEN_FOUND)
+ return()
+endif()
+
+#################################################################################################
+# Gather docs from <root>/examples/**/readme.md
+function(gather_readmes_as_prebuild_cmd target gathered_dir root)
+ set(full_gathered_dir ${root}/${gathered_dir})
+
+ file(GLOB_RECURSE readmes ${root}/examples/readme.md ${root}/examples/README.md)
+ foreach(file ${readmes})
+ # Only use file if it is to be included in docs.
+ file(STRINGS ${file} file_lines REGEX "include_in_docs: true")
+
+ if(file_lines)
+ # Since everything is called readme.md, rename it by its dirname.
+ file(RELATIVE_PATH file ${root} ${file})
+ get_filename_component(folder ${file} PATH)
+ set(new_filename ${full_gathered_dir}/${folder}.md)
+
+ # folder value might be like <subfolder>/readme.md. That's why make directory.
+ get_filename_component(new_folder ${new_filename} PATH)
+ add_custom_command(TARGET ${target} PRE_BUILD
+ COMMAND ${CMAKE_COMMAND} -E make_directory ${new_folder}
+ COMMAND ln -sf ${root}/${file} ${new_filename}
+ COMMENT "Creating symlink ${new_filename} -> ${root}/${file}"
+ WORKING_DIRECTORY ${root} VERBATIM)
+ endif()
+ endforeach()
+endfunction()
+
+################################################################################################
+# Gather docs from examples/*.ipynb and add YAML front-matter.
+function(gather_notebooks_as_prebuild_cmd target gathered_dir root)
+ set(full_gathered_dir ${root}/${gathered_dir})
+
+ if(NOT PYTHON_EXECUTABLE)
+ message(STATUS "Python interpeter is not found. Can't include *.ipynb files in docs. Skipping...")
+ return()
+ endif()
+
+ file(GLOB_RECURSE notebooks ${root}/examples/*.ipynb)
+ foreach(file ${notebooks})
+ file(RELATIVE_PATH file ${root} ${file})
+ set(new_filename ${full_gathered_dir}/${file})
+
+ get_filename_component(new_folder ${new_filename} PATH)
+ add_custom_command(TARGET ${target} PRE_BUILD
+ COMMAND ${CMAKE_COMMAND} -E make_directory ${new_folder}
+ COMMAND ${PYTHON_EXECUTABLE} scripts/copy_notebook.py ${file} ${new_filename}
+ COMMENT "Copying notebook ${file} to ${new_filename}"
+ WORKING_DIRECTORY ${root} VERBATIM)
+ endforeach()
+
+endfunction()
+
+################################################################################################
+########################## [ Non macro part ] ##################################################
+
+# Gathering is done at each 'make doc'
+file(REMOVE_RECURSE ${PROJECT_SOURCE_DIR}/docs/gathered)
+
+# Doxygen config file path
+set(DOXYGEN_config_file ${PROJECT_SOURCE_DIR}/.Doxyfile CACHE FILEPATH "Doxygen config file")
+
+# Adding docs target
+add_custom_target(docs COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYGEN_config_file}
+ WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
+ COMMENT "Launching doxygen..." VERBATIM)
+
+# Gathering examples into docs subfolder
+gather_notebooks_as_prebuild_cmd(docs docs/gathered ${PROJECT_SOURCE_DIR})
+gather_readmes_as_prebuild_cmd(docs docs/gathered ${PROJECT_SOURCE_DIR})
+
+# Auto detect output directory
+file(STRINGS ${DOXYGEN_config_file} config_line REGEX "OUTPUT_DIRECTORY[ \t]+=[^=].*")
+if(config_line)
+ string(REGEX MATCH "OUTPUT_DIRECTORY[ \t]+=([^=].*)" __ver_check "${config_line}")
+ string(STRIP ${CMAKE_MATCH_1} output_dir)
+ message(STATUS "Detected Doxygen OUTPUT_DIRECTORY: ${output_dir}")
+else()
+ set(output_dir ./doxygen/)
+ message(STATUS "Can't find OUTPUT_DIRECTORY in doxygen config file. Try to use default: ${output_dir}")
+endif()
+
+if(NOT IS_ABSOLUTE ${output_dir})
+ set(output_dir ${PROJECT_SOURCE_DIR}/${output_dir})
+ get_filename_component(output_dir ${output_dir} ABSOLUTE)
+endif()
+
+# creates symlink in docs subfolder to code documentation built by doxygen
+add_custom_command(TARGET docs POST_BUILD VERBATIM
+ COMMAND ln -sfn "${output_dir}/html" doxygen
+ WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/docs
+ COMMENT "Creating symlink ${PROJECT_SOURCE_DIR}/docs/doxygen -> ${output_dir}/html")
+
+# for quick launch of jekyll
+add_custom_target(jekyll COMMAND jekyll serve -w -s . -d _site --port=4000
+ WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/docs
+ COMMENT "Launching jekyll..." VERBATIM)
diff --git a/caffe-crfrnn/docs/CNAME b/caffe-crfrnn/docs/CNAME
new file mode 100644
index 00000000..eee1ae26
--- /dev/null
+++ b/caffe-crfrnn/docs/CNAME
@@ -0,0 +1 @@
+caffe.berkeleyvision.org
diff --git a/caffe-crfrnn/docs/README.md b/caffe-crfrnn/docs/README.md
new file mode 100644
index 00000000..8f1781e3
--- /dev/null
+++ b/caffe-crfrnn/docs/README.md
@@ -0,0 +1,5 @@
+# Caffe Documentation
+
+To generate the documentation, run `$CAFFE_ROOT/scripts/build_docs.sh`.
+
+To push your changes to the documentation to the gh-pages branch of your or the BVLC repo, run `$CAFFE_ROOT/scripts/deploy_docs.sh <repo_name>`.
diff --git a/caffe-crfrnn/docs/_config.yml b/caffe-crfrnn/docs/_config.yml
new file mode 100644
index 00000000..95aec12b
--- /dev/null
+++ b/caffe-crfrnn/docs/_config.yml
@@ -0,0 +1,7 @@
+defaults:
+ -
+ scope:
+ path: "" # an empty string here means all files in the project
+ values:
+ layout: "default"
+
diff --git a/caffe-crfrnn/docs/_layouts/default.html b/caffe-crfrnn/docs/_layouts/default.html
new file mode 100644
index 00000000..73c6d587
--- /dev/null
+++ b/caffe-crfrnn/docs/_layouts/default.html
@@ -0,0 +1,52 @@
+<!-- layout markup lost in extraction; the recoverable content is the page title template -->
+ Caffe {% if page contains 'title' %}| {{ page.title }}{% endif %}
diff --git a/caffe-crfrnn/docs/development.md b/caffe-crfrnn/docs/development.md
new file mode 100644
index 00000000..dfed3308
--- /dev/null
+++ b/caffe-crfrnn/docs/development.md
@@ -0,0 +1,125 @@
+---
+title: Developing and Contributing
+---
+# Development
+
+Caffe is developed with active participation of the community.
+The [BVLC](http://bvlc.eecs.berkeley.edu/) maintainers welcome all contributions!
+
+The exact details of contributions are recorded by versioning and cited in our [acknowledgements](http://caffe.berkeleyvision.org/#acknowledgements).
+This method is impartial and always up-to-date.
+
+## License
+
+Caffe is licensed under the terms in [LICENSE](https://github.com/BVLC/caffe/blob/master/LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
+
+## Copyright
+
+Caffe uses a shared copyright model: each contributor holds copyright over their contributions to Caffe. The project versioning records all such contribution and copyright details.
+
+If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. Do not include copyright notices in files for this purpose.
+
+### Documentation
+
+This website, written with [Jekyll](http://jekyllrb.com/), functions as the official Caffe documentation -- simply run `scripts/build_docs.sh` and view the website at `http://0.0.0.0:4000`.
+
+We prefer tutorials and examples to be documented close to where they live, in `readme.md` files.
+The `build_docs.sh` script gathers all `examples/**/readme.md` and `examples/*.ipynb` files, and makes a table of contents.
+To be included in the docs, the readme files must be annotated with [YAML front-matter](http://jekyllrb.com/docs/frontmatter/), including the flag `include_in_docs: true`.
+Similarly for IPython notebooks: simply include `"include_in_docs": true` in the `"metadata"` JSON field.
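+
+As a rough sketch, a readme's front-matter might look like the following; `category` and `priority` match the fields queried by the site's index page, while the `title` and `description` values are illustrative:
+
+    ---
+    title: My Example
+    description: One-line summary for the table of contents.
+    category: example
+    include_in_docs: true
+    priority: 10
+    ---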
+
+Other docs, such as installation guides, are written in the `docs` directory and manually linked to from the `index.md` page.
+
+We strive to provide lots of usage examples, and to document all code in docstrings.
+We absolutely appreciate any contribution to this effort!
+
+### The release cycle
+
+- The `dev` branch receives all new development, including community contributions.
+We aim to keep it in a functional state, but large changes do occur, and things do get broken every now and then.
+Use only if you want the "bleeding edge".
+- BVLC maintainers will periodically update the `master` branch with changes from `dev`, giving it a release tag ([releases so far](https://github.com/BVLC/caffe/releases)).
+Use this if you want more stability.
+
+### Issues & Pull Request Protocol
+
+Use Github Issues to report [bugs], propose features, and ask development [questions].
+Large-scale development work is guided by [milestones], which are sets of Issues selected for concurrent release (integration from `dev` to `master`).
+
+Please note that since the core developers are largely researchers, we may work on a feature in isolation for some time before releasing it to the community, so as to claim honest academic contribution.
+We do release things as soon as a reasonable technical report may be written, and we still aim to inform the community of ongoing development through Github Issues.
+
+When you are ready to start developing your feature or fixing a bug, follow this protocol:
+
+- Develop in [feature branches] with descriptive names.
+  - For new development, branch off `dev`.
+  - For documentation and fixes for `master`, branch off `master`.
+- Bring your work up-to-date by [rebasing] onto the latest `dev` / `master`.
+(Polish your changes by [interactive rebase], if you'd like.)
+- [Pull request] your contribution to `BVLC/caffe`'s `dev` / `master` branch for discussion and review.
+ - Make PRs *as soon as development begins*, to let discussion guide development.
+ - A PR is only ready for merge review when it is a fast-forward merge, and all code is documented, linted, and tested -- that means your PR must include tests!
+- When the PR satisfies the above properties, use comments to request maintainer review.
+
+Below is a poetic presentation of the protocol in code form.
+
+#### [Shelhamer's](https://github.com/shelhamer) “life of a branch in four acts”
+
+Make the `feature` branch off of the latest `bvlc/dev`
+```
+git checkout dev
+git pull upstream dev
+git checkout -b feature
+# do your work, make commits
+```
+
+Prepare to merge by rebasing your branch on the latest `bvlc/dev`
+```
+# make sure dev is fresh
+git checkout dev
+git pull upstream dev
+# rebase your branch on the tip of dev
+git checkout feature
+git rebase dev
+```
+
+Push your branch to pull request it into `dev`
+```
+git push origin feature
+# ...make pull request to dev...
+```
+
+Now make a pull request! You can do this from the command line (`git pull-request -b dev`) if you install [hub](https://github.com/github/hub).
+
+The pull request of `feature` into `dev` will be a clean merge. Applause.
+
+[bugs]: https://github.com/BVLC/caffe/issues?labels=bug&page=1&state=open
+[questions]: https://github.com/BVLC/caffe/issues?labels=question&page=1&state=open
+[milestones]: https://github.com/BVLC/caffe/issues?milestone=1
+[Pull request]: https://help.github.com/articles/using-pull-requests
+[interactive rebase]: https://help.github.com/articles/interactive-rebase
+[rebasing]: http://git-scm.com/book/en/Git-Branching-Rebasing
+[feature branches]: https://www.atlassian.com/git/workflows#!workflow-feature-branch
+
+### Testing
+
+Run `make runtest` to check the project tests. New code requires new tests. Pull requests that fail tests will not be accepted.
+
+The `googletest` framework we use provides many additional options, which you can access by running the test binaries directly. One of the more useful options is `--gtest_filter`, which allows you to filter tests by name:
+
+ # run all tests with CPU in the name
+ build/test/test_all.testbin --gtest_filter='*CPU*'
+
+ # run all tests without GPU in the name (note the leading minus sign)
+ build/test/test_all.testbin --gtest_filter=-'*GPU*'
+
+To get a list of all options `googletest` provides, simply pass the `--help` flag:
+
+ build/test/test_all.testbin --help
+
+### Style
+
+- Follow [Google C++ style](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml) and [Google python style](http://google-styleguide.googlecode.com/svn/trunk/pyguide.html) + [PEP 8](http://legacy.python.org/dev/peps/pep-0008/).
+- Wrap lines at 80 chars.
+- Remember that “a foolish consistency is the hobgoblin of little minds,” so use your best judgement to write the clearest code for your particular case.
+- **Run `make lint` to check C++ code.**
diff --git a/caffe-crfrnn/docs/index.md b/caffe-crfrnn/docs/index.md
new file mode 100644
index 00000000..e90b06b4
--- /dev/null
+++ b/caffe-crfrnn/docs/index.md
@@ -0,0 +1,102 @@
+---
+title: Deep Learning Framework
+---
+
+# Caffe
+
+Caffe is a deep learning framework developed with cleanliness, readability, and speed in mind.
+It was created by [Yangqing Jia](http://daggerfs.com) during his PhD at UC Berkeley, and is in active development by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and by community contributors.
+Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
+
+Check out our web image classification [demo](http://demo.caffe.berkeleyvision.org)!
+
+## Why use Caffe?
+
+**Clean architecture** enables rapid deployment.
+Networks are specified in simple config files, with no hard-coded parameters in the code.
+Switching between CPU and GPU is as simple as setting a flag -- so models can be trained on a GPU machine, and then used on commodity clusters.
+
+**Readable & modifiable implementation** fosters active development.
+In its first six months, Caffe was forked by over 300 developers on Github, and many have pushed significant changes.
+
+**Speed** makes Caffe perfect for industry use.
+Caffe can process over **40M images per day** with a single NVIDIA K40 or Titan GPU\*.
+That's 5 ms/image in training, and 2 ms/image in test.
+We believe that Caffe is the fastest CNN implementation available.
+
+**Community**: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.
+There is an active discussion and support community on [Github](https://github.com/BVLC/caffe/issues).
+
+
+
+## Documentation
+
+- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)
+Caffe tutorial slides.
+- [ACM MM paper](http://ucb-icsi-vision-group.github.io/caffe-paper/caffe.pdf)
+A 4-page report for the ACM Multimedia Open Source competition.
+- [Caffe Tutorial](/tutorial)
+DIY deep learning with this hands-on tutorial to Caffe.
+- [Installation instructions](/installation.html)
+Tested on Ubuntu, Red Hat, OS X.
+- [Model Zoo](/model_zoo.html)
+BVLC suggests a standard distribution format for Caffe models, and provides trained models.
+- [Developing & Contributing](/development.html)
+Guidelines for development and contributing to Caffe.
+- [API Documentation](/doxygen/)
+Developer documentation automagically generated from code comments.
+
+### Examples
+
+{% assign examples = site.pages | where:'category','example' | sort: 'priority' %}
+{% for page in examples %}
+-
+{% endfor %}
+
+### Notebook examples
+
+{% assign notebooks = site.pages | where:'category','notebook' | sort: 'priority' %}
+{% for page in notebooks %}
+-
+{% endfor %}
+
+## Citing Caffe
+
+Please cite Caffe in your publications if it helps your research:
+
+ @misc{Jia13caffe,
+ Author = {Yangqing Jia},
+ Title = { {Caffe}: An Open Source Convolutional Architecture for Fast Feature Embedding},
+ Year = {2013},
+ Howpublished = {\url{http://caffe.berkeleyvision.org/}}
+ }
+
+If you do publish a paper where Caffe helped your research, we encourage you to update the [publications wiki](https://github.com/BVLC/caffe/wiki/Publications).
+Citations are also tracked automatically by [Google Scholar](http://scholar.google.com/scholar?oi=bibs&hl=en&cites=17333247995453974016).
+
+## Acknowledgements
+
+Yangqing would like to thank the NVIDIA Academic program for providing GPUs, [Oriol Vinyals](http://www1.icsi.berkeley.edu/~vinyals/) for discussions along the journey, and BVLC PI [Trevor Darrell](http://www.eecs.berkeley.edu/~trevor/) for guidance.
+
+A core set of BVLC members have contributed much new functionality and many fixes since the original release (alphabetical by first name):
+[Eric Tzeng](https://github.com/erictzeng), [Evan Shelhamer](http://imaginarynumber.net/), [Jeff Donahue](http://jeffdonahue.com/), [Jon Long](https://github.com/longjon), [Ross Girshick](http://www.cs.berkeley.edu/~rbg/), [Sergey Karayev](http://sergeykarayev.com/), [Sergio Guadarrama](http://www.eecs.berkeley.edu/~sguada/).
+
+Additionally, the open-source community plays a large and growing role in Caffe's development.
+Check out the Github [project pulse](https://github.com/BVLC/caffe/pulse) for recent activity, and the [contributors](https://github.com/BVLC/caffe/graphs/contributors) for a sorted list.
+
+We sincerely appreciate your interest and contributions!
+If you'd like to contribute, please read the [developing & contributing](development.html) guide.
+
+## Contacting us
+
+All questions about usage, installation, code, and applications should be searched for and asked on the [caffe-users mailing list](https://groups.google.com/forum/#!forum/caffe-users).
+
+All development discussion should be carried out at [GitHub Issues](https://github.com/BVLC/caffe/issues).
+
+If you have a proposal that may not be suited for public discussion *and an ability to act on it*, please email us [directly](mailto:caffe-dev@googlegroups.com).
+Requests for features, explanations, or personal help will be ignored; post such matters publicly as issues.
+
+The core Caffe developers may be able to provide [consulting services](mailto:caffe-coldpress@googlegroups.com) for appropriate projects.
diff --git a/caffe-crfrnn/docs/installation.md b/caffe-crfrnn/docs/installation.md
new file mode 100644
index 00000000..c667cd8c
--- /dev/null
+++ b/caffe-crfrnn/docs/installation.md
@@ -0,0 +1,302 @@
+---
+title: Installation
+---
+
+# Installation
+
+Prior to installing, it is best to read through this guide and take note of the details for your platform.
+We have installed Caffe on Ubuntu 14.04, Ubuntu 12.04, OS X 10.9, and OS X 10.8.
+
+- [Prerequisites](#prerequisites)
+- [Compilation](#compilation)
+- [Hardware questions](#hardware_questions)
+
+## Prerequisites
+
+Caffe depends on several software packages.
+
+* [CUDA](https://developer.nvidia.com/cuda-zone) library version 6.5 (recommended), 6.0, 5.5, or 5.0 and the latest driver version for CUDA 6 or 319.* for CUDA 5 (and NOT 331.*)
+* [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (provided via ATLAS, MKL, or OpenBLAS).
+* [OpenCV](http://opencv.org/).
+* [Boost](http://www.boost.org/) (>= 1.55, although only 1.55 and 1.56 are tested)
+* `glog`, `gflags`, `protobuf`, `leveldb`, `snappy`, `hdf5`, `lmdb`
+* For the Python wrapper
+ * `Python 2.7`, `numpy (>= 1.7)`, boost-provided `boost.python`
+* For the MATLAB wrapper
+ * MATLAB with the `mex` compiler.
+
+**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic.
+
+**CPU-only Caffe**: for cold-brewed CPU-only Caffe uncomment the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA. This is helpful for cloud or cluster deployment.
+
+### CUDA and BLAS
+
+Caffe requires the CUDA `nvcc` compiler to compile its GPU code and CUDA driver for GPU operation.
+To install CUDA, go to the [NVIDIA CUDA website](https://developer.nvidia.com/cuda-downloads) and follow installation instructions there. Install the library and the latest standalone driver separately; the driver bundled with the library is usually out-of-date. **Warning!** The 331.* CUDA driver series has a critical performance issue: do not use it.
+
+For best performance, Caffe can be accelerated by [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). Register for free at the cuDNN site, install it, then continue with these installation instructions. To compile with cuDNN, set the `USE_CUDNN := 1` flag in your `Makefile.config`.
+
+Caffe requires BLAS as the backend of its matrix and vector computations.
+There are several implementations of this library.
+The choice is yours:
+
+* [ATLAS](http://math-atlas.sourceforge.net/): free, open source, and so the default for Caffe.
+ + Ubuntu: `sudo apt-get install libatlas-base-dev`
+ + CentOS/RHEL/Fedora: `sudo yum install atlas-devel`
+ + OS X: already installed as the [Accelerate / vecLib Framework](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man7/Accelerate.7.html).
+* [Intel MKL](http://software.intel.com/en-us/intel-mkl): commercial and optimized for Intel CPUs, with a free trial and [student](http://software.intel.com/en-us/intel-education-offerings) licenses.
+ 1. Install MKL.
+ 2. Set `BLAS := mkl` in `Makefile.config`
+* [OpenBLAS](http://www.openblas.net/): free and open source; this optimized and parallel BLAS could require more effort to install, although it might offer a speedup.
+ 1. Install OpenBLAS
+ 2. Set `BLAS := open` in `Makefile.config`
+
+### Python and/or MATLAB wrappers (optional)
+
+#### Python
+
+The main requirements are `numpy` and `boost.python` (provided by boost). `pandas` is useful too and needed for some examples.
+
+You can install the dependencies with
+
+ for req in $(cat requirements.txt); do sudo pip install $req; done
+
+but we highly recommend first installing the [Anaconda](https://store.continuum.io/cshop/anaconda/) Python distribution, which provides most of the necessary packages, as well as the `hdf5` library dependency.
+
+For **Ubuntu**, if you use the default Python you will need to `sudo apt-get install` the `python-dev` package to have the Python headers for building the wrapper.
+
+For **Fedora**, if you use the default Python you will need to `sudo yum install` the `python-devel` package to have the Python headers for building the wrapper.
+
+For **OS X**, Anaconda is the preferred Python. If you decide against it, please use Homebrew -- but beware of potential linking errors!
+
+To import the `caffe` Python module after completing the installation, add the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like. You should not import the module in the `caffe/python/caffe` directory!
+
+*Caffe's Python interface works with Python 2.7. Python 3 or earlier Pythons are your own adventure.*
+
+#### MATLAB
+
+Install MATLAB, and make sure that its `mex` is in your `$PATH`.
+
+*Caffe's MATLAB interface works with versions 2012b, 2013a/b, and 2014a.*
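+
+For example, on Linux you might put `mex` on your path like this (the install location is an assumption; adjust for your MATLAB version and OS):
+
+    export PATH=/usr/local/MATLAB/R2014a/bin:$PATH
+    which mex  # should now resolve to the MATLAB mex compiler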
+
+### The rest of the dependencies
+
+#### Linux
+
+On **Ubuntu**, most of the dependencies can be installed with
+
+ sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev
+
+and for **Ubuntu 14.04** the rest of the dependencies can be installed with
+
+ sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler
+
+Keep reading to find out how to manually build and install the Google flags library, Google logging library and LMDB on **Ubuntu 12.04**.
+
+On **CentOS / RHEL / Fedora**, most of the dependencies can be installed with
+
+ sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel
+
+The Google flags library, Google logging library, and LMDB have already made their way into newer versions of **CentOS / RHEL / Fedora**, so it is better to first attempt to install them using `yum`
+
+ sudo yum install gflags-devel glog-devel lmdb-devel
+
+**Finally**, in case you couldn't find those extra libraries mentioned above in your distribution's repositories, here are the instructions for manually building and installing them on **Ubuntu 12.04 / CentOS / RHEL / Fedora** (or practically any Linux distribution):
+
+ # glog
+ wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
+ tar zxvf glog-0.3.3.tar.gz
+ cd glog-0.3.3
+ ./configure
+ make && make install
+ # gflags
+ wget https://github.com/schuhschuh/gflags/archive/master.zip
+ unzip master.zip
+ cd gflags-master
+ mkdir build && cd build
+ export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
+ make && make install
+ # lmdb
+ git clone git://gitorious.org/mdb/mdb.git
+ cd mdb/libraries/liblmdb
+ make && make install
+
+Note that glog does not compile with the most recent gflags version (2.1), so before that is resolved you will need to build glog first.
+
+#### OS X
+
+On **OS X**, we highly recommend using the [Homebrew](http://brew.sh/) package manager, and ideally starting from a clean install of the OS (or from a wiped `/usr/local`) to avoid conflicts.
+In the following, we assume that you're using Anaconda Python and Homebrew.
+
+To install the OpenCV dependency, we'll need to provide an additional source for Homebrew:
+
+ brew tap homebrew/science
+
+If using Anaconda Python, a modification is required to the OpenCV formula.
+Do `brew edit opencv` and change the lines that look like the two lines below to exactly the two lines below.
+
+ -DPYTHON_LIBRARY=#{py_prefix}/lib/libpython2.7.dylib
+ -DPYTHON_INCLUDE_DIR=#{py_prefix}/include/python2.7
+
+**NOTE**: We find that everything compiles successfully if `$LD_LIBRARY_PATH` is not set at all, and `$DYLD_FALLBACK_LIBRARY_PATH` is set to provide CUDA, Python, and other relevant libraries (e.g. `/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib`).
+In other `ENV` settings, things may not work as expected.
+
+**NOTE**: There is currently a conflict between boost 1.56 and CUDA in some configurations. Check the [conflict description](https://github.com/BVLC/caffe/issues/1193#issuecomment-57491906) and try downgrading to 1.55.
+
+#### 10.8-specific Instructions
+
+Simply run the following:
+
+ brew install --build-from-source boost boost-python
+ brew install --with-python protobuf
+ for x in snappy leveldb gflags glog szip lmdb homebrew/science/opencv; do brew install $x; done
+
+Building boost from source is needed to link against your local Python (exceptions might be raised during some OS X installs, but **ignore** these and continue). If you do not need the Python wrapper, simply doing `brew install boost` is fine.
+
+**Note** that the HDF5 dependency is provided by Anaconda Python in this case.
+If you're not using Anaconda, include `hdf5` in the list above.
+
+#### 10.9-specific Instructions
+
+In OS X 10.9, clang++ is the default C++ compiler and uses `libc++` as the standard library.
+However, NVIDIA CUDA (even version 6.0) currently links only with `libstdc++`.
+This makes it necessary to change the compilation settings for each of the dependencies.
+
+We do this by modifying the Homebrew formulae before installing any packages.
+Make sure that Homebrew doesn't install any software dependencies in the background; all packages must be linked to `libstdc++`.
+
+The prerequisite Homebrew formulae are
+
+ boost snappy leveldb protobuf gflags glog szip lmdb homebrew/science/opencv
+
+For each of these formulae, `brew edit FORMULA`, and add the ENV definitions as shown:
+
+ def install
+ # ADD THE FOLLOWING:
+ ENV.append "CXXFLAGS", "-stdlib=libstdc++"
+ ENV.append "CFLAGS", "-stdlib=libstdc++"
+ ENV.append "LDFLAGS", "-stdlib=libstdc++ -lstdc++"
+ # The following is necessary because libtool likes to strip LDFLAGS:
+ ENV["CXX"] = "/usr/bin/clang++ -stdlib=libstdc++"
+ ...
+
+To edit the formulae in turn, run
+
+ for x in snappy leveldb protobuf gflags glog szip boost boost-python lmdb homebrew/science/opencv; do brew edit $x; done
+
+After this, run
+
+ for x in snappy leveldb gflags glog szip lmdb homebrew/science/opencv; do brew uninstall $x; brew install --build-from-source --fresh -vd $x; done
+ brew uninstall protobuf; brew install --build-from-source --with-python --fresh -vd protobuf
+ brew install --build-from-source --fresh -vd boost boost-python
+
+**Note** that `brew install --build-from-source --fresh -vd boost` is fine if you do not need the Caffe Python wrapper.
+
+**Note** that the HDF5 dependency is provided by Anaconda Python in this case.
+If you're not using Anaconda, include `hdf5` in the list above.
+
+**Note** that in order to build the Caffe Python wrappers you must install `boost` and `boost-python`:
+
+ brew install --build-from-source --fresh -vd boost boost-python
+
+**Note** that Homebrew maintains itself as a separate git repository and making the above `brew edit FORMULA` changes will change files in your local copy of homebrew's master branch. By default, this will prevent you from updating Homebrew using `brew update`, as you will get an error message like the following:
+
+ $ brew update
+ error: Your local changes to the following files would be overwritten by merge:
+ Library/Formula/lmdb.rb
+ Please, commit your changes or stash them before you can merge.
+ Aborting
+ Error: Failure while executing: git pull -q origin refs/heads/master:refs/remotes/origin/master
+
+One solution is to commit your changes to a separate Homebrew branch, run `brew update`, and rebase your changes onto the updated master. You'll have to do this both for the main Homebrew repository in `/usr/local/` and the Homebrew science repository that contains OpenCV in `/usr/local/Library/Taps/homebrew/homebrew-science`, as follows:
+
+ cd /usr/local
+ git checkout -b caffe
+ git add .
+ git commit -m "Update Caffe dependencies to use libstdc++"
+ cd /usr/local/Library/Taps/homebrew/homebrew-science
+ git checkout -b caffe
+ git add .
+ git commit -m "Update Caffe dependencies"
+
+Then, whenever you want to update homebrew, switch back to the master branches, do the update, rebase the caffe branches onto master and fix any conflicts:
+
+    # Switch back to the Homebrew master branches
+ cd /usr/local
+ git checkout master
+ cd /usr/local/Library/Taps/homebrew/homebrew-science
+ git checkout master
+
+ # Update homebrew; hopefully this works without errors!
+ brew update
+
+    # Switch back to the caffe branches with the formulae that you modified earlier
+ cd /usr/local
+ git rebase master caffe
+ # Fix any merge conflicts and commit to caffe branch
+ cd /usr/local/Library/Taps/homebrew/homebrew-science
+ git rebase master caffe
+ # Fix any merge conflicts and commit to caffe branch
+
+ # Done!
+
+At this point, you should be running the latest Homebrew packages and your Caffe-related modifications will remain in place.
+
+#### Windows
+
+There is an unofficial Windows port of Caffe at [niuzhiheng/caffe:windows](https://github.com/niuzhiheng/caffe). Thanks [@niuzhiheng](https://github.com/niuzhiheng)!
+
+## Compilation
+
+Now that you have the prerequisites, edit your `Makefile.config` to change the paths for your setup (you should especially uncomment and set `BLAS_LIB` accordingly on distributions like **CentOS / RHEL / Fedora** where ATLAS is installed under `/usr/lib[64]/atlas`).
+The defaults should work, but uncomment the relevant lines if using Anaconda Python.
+
+ cp Makefile.config.example Makefile.config
+ # Adjust Makefile.config (for example, if using Anaconda Python)
+ make all
+ make test
+ make runtest
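+
+For reference, the Anaconda-related lines to uncomment in `Makefile.config` look roughly like the following (the paths assume Anaconda in your home directory; check `Makefile.config.example` for the exact variable names in your version):
+
+    ANACONDA_HOME := $(HOME)/anaconda
+    PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
+        $(ANACONDA_HOME)/include/python2.7 \
+        $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
+    PYTHON_LIB := $(ANACONDA_HOME)/lib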
+
+To compile with cuDNN acceleration, you should uncomment the `USE_CUDNN := 1` switch in `Makefile.config`.
+
+If there is no GPU in your machine, you should switch to CPU-only Caffe by uncommenting `CPU_ONLY := 1` in `Makefile.config`.
+
+To compile the Python and MATLAB wrappers do `make pycaffe` and `make matcaffe` respectively.
+Be sure to set your MATLAB and Python paths in `Makefile.config` first!
+
+*Distribution*: run `make distribute` to create a `distribute` directory with all the Caffe headers, compiled libraries, binaries, etc. needed for distribution to other machines.
+
+*Speed*: for a faster build, compile in parallel by doing `make all -j8` where 8 is the number of parallel threads for compilation (a good choice for the number of threads is the number of cores in your machine).
+
+Now that you have installed Caffe, check out the [MNIST tutorial](gathered/examples/mnist.html) and the [reference ImageNet model tutorial](gathered/examples/imagenet.html).
+
+### Compilation using CMake (beta)
+
+In lieu of manually editing `Makefile.config` to tell Caffe where dependencies are located, Caffe also provides a CMake-based build system (currently in "beta").
+It requires CMake version >= 2.8.8.
+The basic installation steps are as follows:
+
+ mkdir build
+ cd build
+ cmake ..
+ make all
+ make runtest
+
+#### Ubuntu 12.04
+
+Note that in Ubuntu 12.04, Aptitude will install CMake version 2.8.7 by default, which is not supported by Caffe's CMake build (it requires at least 2.8.8).
+As a workaround, if you are using Ubuntu 12.04 you can try the following steps to install (or upgrade to) CMake 2.8.9:
+
+ sudo add-apt-repository ppa:ubuntu-sdk-team/ppa -y
+ sudo apt-get -y update
+ sudo apt-get install cmake
+
+## Hardware Questions
+
+**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with K40s, K20s, and Titans including models at ImageNet/ILSVRC scale. We also run on GTX series cards and GPU-equipped MacBook Pros. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus far have been due to GPU configuration, overheating, and the like.
+
+**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. Your mileage may vary.
+
+Once installed, check your times against our [reference performance numbers](performance_hardware.html) to make sure everything is configured properly.
+
+Refer to the project's issue tracker for [hardware/compatibility](https://github.com/BVLC/caffe/issues?labels=hardware%2Fcompatibility&page=1&state=open).
diff --git a/caffe-crfrnn/docs/model_zoo.md b/caffe-crfrnn/docs/model_zoo.md
new file mode 100644
index 00000000..358bbb7f
--- /dev/null
+++ b/caffe-crfrnn/docs/model_zoo.md
@@ -0,0 +1,55 @@
+---
+title: Model Zoo
+---
+# Caffe Model Zoo
+
+Lots of people have used Caffe to train models of different architectures and applied them to different problems, ranging from simple regression to AlexNet-alikes to Siamese networks for image similarity to speech applications.
+To lower the friction of sharing these models, we introduce the model zoo framework:
+
+- A standard format for packaging Caffe model info.
+- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` binaries.
+- A central wiki page for sharing model info Gists.
+
+## Where to get trained models
+
+First of all, we provide some trained models out of the box.
+Each one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:
+
+- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in the NIPS 2012 paper. (Trained by Jeff Donahue @jeffdonahue)
+- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)
+- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn). (Trained by Ross Girshick @rbgirshick)
+- **BVLC GoogleNet** in `models/bvlc_googlenet`: GoogleNet trained on ILSVRC 2012, almost exactly as described in [GoogleNet](http://arxiv.org/abs/1409.4842). (Trained by Sergio Guadarrama @sguada)
+
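+For example, to fetch the BVLC Reference CaffeNet weights listed above:
+
+    scripts/download_model_binary.py models/bvlc_reference_caffenet
+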
+User-provided models are posted to a public-editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
+
+## Model info format
+
+A caffe model is distributed as a directory containing:
+
+- Solver/model prototxt(s)
+- `readme.md` containing
+ - YAML frontmatter
+ - Caffe version used to train this model (tagged release or commit hash).
+ - [optional] file URL and SHA1 of the trained `.caffemodel`.
+ - [optional] github gist id.
+ - Information about what data the model was trained on, modeling choices, etc.
+ - License information.
+- [optional] Other helpful scripts.
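+
+A minimal `readme.md` front-matter following this format might look like the sketch below (field names such as `caffemodel_url` and `sha1` mirror common zoo readmes, not a fixed schema):
+
+    ---
+    name: My Fancy Model
+    caffemodel: my_fancy_model.caffemodel
+    caffemodel_url: http://example.com/my_fancy_model.caffemodel
+    sha1: 0123456789abcdef0123456789abcdef01234567
+    license: unrestricted
+    ---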
+
+## Hosting model info
+
+Github Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.
+
+- `scripts/upload_model_to_gist.sh <dirname>`: uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then updates existing Gist.
+
+Try doing `scripts/upload_model_to_gist.sh models/bvlc_alexnet` to test the uploading (don't forget to delete the uploaded gist afterward).
+
+Downloading model info is done just as easily with `scripts/download_model_from_gist.sh <gist_id> <dirname>`.
+
+### Hosting trained models
+
+It is up to the user where to host the `.caffemodel` file.
+We host our BVLC-provided models on our own server.
+Dropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).
+
+- `scripts/download_model_binary.py <dirname>`: downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.
diff --git a/caffe-crfrnn/docs/performance_hardware.md b/caffe-crfrnn/docs/performance_hardware.md
new file mode 100644
index 00000000..b35246fe
--- /dev/null
+++ b/caffe-crfrnn/docs/performance_hardware.md
@@ -0,0 +1,73 @@
+---
+title: Performance and Hardware Configuration
+---
+
+# Performance and Hardware Configuration
+
+To measure performance on different NVIDIA GPUs we use CaffeNet, the Caffe reference ImageNet model.
+
+For training, each time point is 20 iterations/minibatches of 256 images for 5,120 images total. For testing, a 50,000 image validation set is classified.
+
+**Acknowledgements**: BVLC members are very grateful to NVIDIA for providing several GPUs to conduct this research.
+
+## NVIDIA K40
+
+Performance is best with ECC off and boost clock enabled. While ECC makes a negligible difference in speed, disabling it frees ~1 GB of GPU memory.
+
+Best settings with ECC off and maximum clock speed in standard Caffe:
+
+* Training is 26.5 secs / 20 iterations (5,120 images)
+* Testing is 100 secs / validation set (50,000 images)
+
+Best settings with Caffe + [cuDNN acceleration](http://nvidia.com/cudnn):
+
+* Training is 19.2 secs / 20 iterations (5,120 images)
+* Testing is 60.7 secs / validation set (50,000 images)
+
+Other settings:
+
+* ECC on, max speed: training 26.7 secs / 20 iterations, test 101 secs / validation set
+* ECC on, default speed: training 31 secs / 20 iterations, test 117 secs / validation set
+* ECC off, default speed: training 31 secs / 20 iterations, test 118 secs / validation set
+
+### K40 configuration tips
+
+For maximum K40 performance, turn off ECC and boost the clock speed (at your own risk).
+
+To turn off ECC, do
+
+ sudo nvidia-smi -i 0 --ecc-config=0 # repeat with -i x for each GPU ID
+
+then reboot.
+
+Set the "persistence" mode of the GPU settings by
+
+ sudo nvidia-smi -pm 1
+
+and then set the clock speed with
+
+ sudo nvidia-smi -i 0 -ac 3004,875 # repeat with -i x for each GPU ID
+
+but note that this configuration resets across driver reloading / rebooting. Include these commands in a boot script to initialize these settings. For a simple fix, add these commands to `/etc/rc.local` (on Ubuntu).
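+
+A sketch of the corresponding `/etc/rc.local` addition (clock values as in the example above; `rc.local` runs as root, so `sudo` is not needed):
+
+    # set persistence mode and application clocks for GPU 0 at boot
+    nvidia-smi -pm 1
+    nvidia-smi -i 0 -ac 3004,875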
+
+## NVIDIA Titan
+
+Training: 26.26 secs / 20 iterations (5,120 images).
+Testing: 100 secs / validation set (50,000 images).
+
+cuDNN Training: 20.25 secs / 20 iterations (5,120 images).
+cuDNN Testing: 66.3 secs / validation set (50,000 images).
+
+
+## NVIDIA K20
+
+Training: 36.0 secs / 20 iterations (5,120 images).
+Testing: 133 secs / validation set (50,000 images).
+
+## NVIDIA GTX 770
+
+Training: 33.0 secs / 20 iterations (5,120 images).
+Testing: 129 secs / validation set (50,000 images).
+
+cuDNN Training: 24.3 secs / 20 iterations (5,120 images).
+cuDNN Testing: 104 secs / validation set (50,000 images).
diff --git a/caffe-crfrnn/docs/stylesheets/pygment_trac.css b/caffe-crfrnn/docs/stylesheets/pygment_trac.css
new file mode 100644
index 00000000..c6a6452d
--- /dev/null
+++ b/caffe-crfrnn/docs/stylesheets/pygment_trac.css
@@ -0,0 +1,69 @@
+.highlight { background: #ffffff; }
+.highlight .c { color: #999988; font-style: italic } /* Comment */
+.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
+.highlight .k { font-weight: bold } /* Keyword */
+.highlight .o { font-weight: bold } /* Operator */
+.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
+.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
+.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
+.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
+.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
+.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
+.highlight .ge { font-style: italic } /* Generic.Emph */
+.highlight .gr { color: #aa0000 } /* Generic.Error */
+.highlight .gh { color: #999999 } /* Generic.Heading */
+.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
+.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
+.highlight .go { color: #888888 } /* Generic.Output */
+.highlight .gp { color: #555555 } /* Generic.Prompt */
+.highlight .gs { font-weight: bold } /* Generic.Strong */
+.highlight .gu { color: #800080; font-weight: bold; } /* Generic.Subheading */
+.highlight .gt { color: #aa0000 } /* Generic.Traceback */
+.highlight .kc { font-weight: bold } /* Keyword.Constant */
+.highlight .kd { font-weight: bold } /* Keyword.Declaration */
+.highlight .kn { font-weight: bold } /* Keyword.Namespace */
+.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
+.highlight .kr { font-weight: bold } /* Keyword.Reserved */
+.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
+.highlight .m { color: #009999 } /* Literal.Number */
+.highlight .s { color: #d14 } /* Literal.String */
+.highlight .na { color: #008080 } /* Name.Attribute */
+.highlight .nb { color: #0086B3 } /* Name.Builtin */
+.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
+.highlight .no { color: #008080 } /* Name.Constant */
+.highlight .ni { color: #800080 } /* Name.Entity */
+.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
+.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
+.highlight .nn { color: #555555 } /* Name.Namespace */
+.highlight .nt { color: #000080 } /* Name.Tag */
+.highlight .nv { color: #008080 } /* Name.Variable */
+.highlight .ow { font-weight: bold } /* Operator.Word */
+.highlight .w { color: #bbbbbb } /* Text.Whitespace */
+.highlight .mf { color: #009999 } /* Literal.Number.Float */
+.highlight .mh { color: #009999 } /* Literal.Number.Hex */
+.highlight .mi { color: #009999 } /* Literal.Number.Integer */
+.highlight .mo { color: #009999 } /* Literal.Number.Oct */
+.highlight .sb { color: #d14 } /* Literal.String.Backtick */
+.highlight .sc { color: #d14 } /* Literal.String.Char */
+.highlight .sd { color: #d14 } /* Literal.String.Doc */
+.highlight .s2 { color: #d14 } /* Literal.String.Double */
+.highlight .se { color: #d14 } /* Literal.String.Escape */
+.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
+.highlight .si { color: #d14 } /* Literal.String.Interpol */
+.highlight .sx { color: #d14 } /* Literal.String.Other */
+.highlight .sr { color: #009926 } /* Literal.String.Regex */
+.highlight .s1 { color: #d14 } /* Literal.String.Single */
+.highlight .ss { color: #990073 } /* Literal.String.Symbol */
+.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
+.highlight .vc { color: #008080 } /* Name.Variable.Class */
+.highlight .vg { color: #008080 } /* Name.Variable.Global */
+.highlight .vi { color: #008080 } /* Name.Variable.Instance */
+.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */
+
+.type-csharp .highlight .k { color: #0000FF }
+.type-csharp .highlight .kt { color: #0000FF }
+.type-csharp .highlight .nf { color: #000000; font-weight: normal }
+.type-csharp .highlight .nc { color: #2B91AF }
+.type-csharp .highlight .nn { color: #000000 }
+.type-csharp .highlight .s { color: #A31515 }
+.type-csharp .highlight .sc { color: #A31515 }
diff --git a/caffe-crfrnn/docs/stylesheets/reset.css b/caffe-crfrnn/docs/stylesheets/reset.css
new file mode 100644
index 00000000..6020b26f
--- /dev/null
+++ b/caffe-crfrnn/docs/stylesheets/reset.css
@@ -0,0 +1,21 @@
+/* MeyerWeb Reset */
+
+html, body, div, span, applet, object, iframe,
+h1, h2, h3, h4, h5, h6, p, blockquote, pre,
+a, abbr, acronym, address, big, cite, code,
+del, dfn, em, img, ins, kbd, q, s, samp,
+small, strike, strong, sub, sup, tt, var,
+b, u, i, center,
+dl, dt, dd, ol, ul, li,
+fieldset, form, label, legend,
+table, caption, tbody, tfoot, thead, tr, th, td,
+article, aside, canvas, details, embed,
+figure, figcaption, footer, header, hgroup,
+menu, nav, output, ruby, section, summary,
+time, mark, audio, video {
+ margin: 0;
+ padding: 0;
+ border: 0;
+ font: inherit;
+ vertical-align: baseline;
+}
diff --git a/caffe-crfrnn/docs/stylesheets/styles.css b/caffe-crfrnn/docs/stylesheets/styles.css
new file mode 100644
index 00000000..2dbedb8a
--- /dev/null
+++ b/caffe-crfrnn/docs/stylesheets/styles.css
@@ -0,0 +1,348 @@
+@import url(http://fonts.googleapis.com/css?family=PT+Serif|Open+Sans:600,400);
+
+body {
+ padding:10px 50px 0 0;
+ font-family: 'Open Sans', sans-serif;
+ font-size: 14px;
+ color: #232323;
+ background-color: #FBFAF7;
+ margin: 0;
+ line-height: 1.5rem;
+ -webkit-font-smoothing: antialiased;
+}
+
+h1, h2, h3, h4, h5, h6 {
+ color:#232323;
+ margin:36px 0 10px;
+}
+
+p, ul, ol, table, dl {
+ margin:0 0 22px;
+}
+
+h1, h2, h3 {
+ font-family: 'PT Serif', serif;
+ line-height:1.3;
+ font-weight: normal;
+ display: block;
+ border-bottom: 1px solid #ccc;
+ padding-bottom: 5px;
+}
+
+h1 {
+ font-size: 30px;
+}
+
+h2 {
+ font-size: 24px;
+}
+
+h3 {
+ font-size: 18px;
+}
+
+h4, h5, h6 {
+ font-family: 'PT Serif', serif;
+ font-weight: 700;
+}
+
+a {
+ color:#C30000;
+ text-decoration:none;
+}
+
+a:hover {
+ text-decoration: underline;
+}
+
+a small {
+ font-size: 12px;
+}
+
+em {
+ font-style: italic;
+}
+
+strong {
+ font-weight:700;
+}
+
+ul {
+ padding-left: 25px;
+}
+
+ol {
+ list-style: decimal;
+ padding-left: 20px;
+}
+
+blockquote {
+ margin: 0;
+ padding: 0 0 0 20px;
+ font-style: italic;
+}
+
+dl, dt, dd, dl p {
+ font-color: #444;
+}
+
+dl dt {
+ font-weight: bold;
+}
+
+dl dd {
+ padding-left: 20px;
+ font-style: italic;
+}
+
+dl p {
+ padding-left: 20px;
+ font-style: italic;
+}
+
+hr {
+ border:0;
+ background:#ccc;
+ height:1px;
+ margin:0 0 24px;
+}
+
+/* Images */
+
+img {
+ position: relative;
+ margin: 0 auto;
+ max-width: 650px;
+ padding: 5px;
+ margin: 10px 0 32px 0;
+ border: 1px solid #ccc;
+}
+
+p img {
+ display: inline;
+ margin: 0;
+ padding: 0;
+ vertical-align: middle;
+ text-align: center;
+ border: none;
+}
+
+/* Code blocks */
+code, pre {
+ font-family: monospace;
+ color:#000;
+ font-size:12px;
+ line-height: 14px;
+}
+
+pre {
+ padding: 6px 12px;
+ background: #FDFEFB;
+ border-radius:4px;
+ border:1px solid #D7D8C8;
+ overflow: auto;
+ white-space: pre-wrap;
+ margin-bottom: 16px;
+}
+
+
+/* Tables */
+table {
+ width:100%;
+}
+
+table {
+ border: 1px solid #ccc;
+ margin-bottom: 32px;
+ text-align: left;
+ }
+
+th {
+ font-family: 'Open Sans', sans-serif;
+ font-size: 18px;
+ font-weight: normal;
+ padding: 10px;
+ background: #232323;
+ color: #FDFEFB;
+ }
+
+td {
+ padding: 10px;
+ background: #ccc;
+ }
+
+
+/* Wrapper */
+.wrapper {
+ width:960px;
+}
+
+
+/* Header */
+
+header {
+ width:170px;
+ float:left;
+ position:fixed;
+ padding: 12px 25px 22px 50px;
+ margin: 24px 25px 0 0;
+}
+
+p.header {
+ font-size: 14px;
+}
+
+h1.header {
+ font-size: 30px;
+ font-weight: 300;
+ line-height: 1.3em;
+ margin-top: 0;
+}
+
+a.name {
+ white-space: nowrap;
+}
+
+header ul {
+ list-style:none;
+ padding:0;
+}
+
+header li {
+ list-style-type: none;
+ width:132px;
+ height:15px;
+ margin-bottom: 12px;
+ line-height: 1em;
+ padding: 6px 6px 6px 7px;
+ background: #c30000;
+ border-radius:4px;
+ border:1px solid #555;
+}
+
+header li:hover {
+ background: #dd0000;
+}
+
+a.buttons {
+ color: #fff;
+ text-decoration: none;
+ font-weight: normal;
+ padding: 2px 2px 2px 22px;
+ height: 30px;
+}
+
+a.github {
+ background: url(/images/GitHub-Mark-64px.png) no-repeat center left;
+ background-size: 15%;
+}
+
+/* Section - for main page content */
+
+section {
+ width:650px;
+ float:right;
+ padding-bottom:50px;
+}
+
+p.footnote {
+ font-size: 12px;
+}
+
+
+/* Footer */
+
+footer {
+ width:170px;
+ float:left;
+ position:fixed;
+ bottom:10px;
+ padding-left: 50px;
+}
+
+@media print, screen and (max-width: 960px) {
+
+ div.wrapper {
+ width:auto;
+ margin:0;
+ }
+
+ header, section, footer {
+ float:none;
+ position:static;
+ width:auto;
+ }
+
+ footer {
+ border-top: 1px solid #ccc;
+ margin:0 84px 0 50px;
+ padding:0;
+ }
+
+ header {
+ padding-right:320px;
+ }
+
+ section {
+ padding:20px 84px 20px 50px;
+ margin:0 0 20px;
+ }
+
+ header a small {
+ display:inline;
+ }
+
+ header ul {
+ position:absolute;
+ right:130px;
+ top:84px;
+ }
+}
+
+@media print, screen and (max-width: 720px) {
+ body {
+ word-wrap:break-word;
+ }
+
+ header {
+ padding:10px 20px 0;
+ margin-right: 0;
+ }
+
+ section {
+ padding:10px 0 10px 20px;
+ margin:0 0 30px;
+ }
+
+ footer {
+ margin: 0 0 0 30px;
+ }
+
+ header ul, header p.view {
+ position:static;
+ }
+}
+
+@media print, screen and (max-width: 480px) {
+
+ header ul li.download {
+ display:none;
+ }
+
+ footer {
+ margin: 0 0 0 20px;
+ }
+
+ footer a{
+ display:block;
+ }
+
+}
+
+@media print {
+ body {
+ padding:0.4in;
+ font-size:12pt;
+ color:#444;
+ }
+}
diff --git a/caffe-crfrnn/docs/tutorial/convolution.md b/caffe-crfrnn/docs/tutorial/convolution.md
new file mode 100644
index 00000000..a02fe4ef
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/convolution.md
@@ -0,0 +1,13 @@
+---
+title: Convolution
+---
+# Caffeinated Convolution
+
+The Caffe strategy for convolution is to reduce the problem to matrix-matrix multiplication.
+This linear algebra computation is highly-tuned in BLAS libraries and efficiently computed on GPU devices.
+
+For more details read Yangqing's [Convolution in Caffe: a memo](https://github.com/Yangqing/caffe/wiki/Convolution-in-Caffe:-a-memo).
+
+As it turns out, this same reduction was independently explored in the context of conv. nets by
+
+> K. Chellapilla, S. Puri, P. Simard, et al. High performance convolutional neural networks for document processing. In Tenth International Workshop on Frontiers in Handwriting Recognition, 2006.
diff --git a/caffe-crfrnn/docs/tutorial/data.md b/caffe-crfrnn/docs/tutorial/data.md
new file mode 100644
index 00000000..40605f7c
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/data.md
@@ -0,0 +1,78 @@
+---
+title: Data
+---
+# Data: Ins and Outs
+
+Data flows through Caffe as [Blobs](net_layer_blob.html#blob-storage-and-communication).
+Data layers load input and save output by converting to and from Blob to other formats.
+Common transformations like mean-subtraction and feature-scaling are done by data layer configuration.
+New input types are supported by developing a new data layer -- the rest of the Net follows by the modularity of the Caffe layer catalogue.
+
+This data layer definition
+
+ layers {
+ name: "mnist"
+ # DATA layer loads leveldb or lmdb storage DBs for high-throughput.
+ type: DATA
+ # the 1st top is the data itself: the name is only convention
+ top: "data"
+ # the 2nd top is the ground truth: the name is only convention
+ top: "label"
+ # the DATA layer configuration
+ data_param {
+ # path to the DB
+ source: "examples/mnist/mnist_train_lmdb"
+ # type of DB: LEVELDB or LMDB (LMDB supports concurrent reads)
+ backend: LMDB
+ # batch processing improves efficiency.
+ batch_size: 64
+ }
+ # common data transformations
+ transform_param {
+ # feature scaling coefficient: this maps the [0, 255] MNIST data to [0, 1]
+ scale: 0.00390625
+ }
+ }
+
+loads the MNIST digits.
+
+**Tops and Bottoms**: A data layer makes **top** blobs to output data to the model.
+It does not have **bottom** blobs since it takes no input.
+
+**Data and Label**: a data layer has at least one top canonically named **data**.
+For ground truth a second top can be defined that is canonically named **label**.
+Both tops simply produce blobs and there is nothing inherently special about these names.
+The (data, label) pairing is a convenience for classification models.
+
+**Transformations**: data preprocessing is parametrized by transformation messages within the data layer definition.
+
+ layers {
+ name: "data"
+ type: DATA
+ [...]
+ transform_param {
+ scale: 0.1
+        mean_file: "mean.binaryproto"
+ # for images in particular horizontal mirroring and random cropping
+ # can be done as simple data augmentations.
+ mirror: 1 # 1 = on, 0 = off
+ # crop a `crop_size` x `crop_size` patch:
+ # - at random during training
+ # - from the center during testing
+ crop_size: 227
+ }
+ }
+
+**Prefetching**: for throughput data layers fetch the next batch of data and prepare it in the background while the Net computes the current batch.
+
+**Multiple Inputs**: a Net can have multiple inputs of any number and type. Define as many data layers as needed, giving each a unique name and top. Multiple inputs are useful for non-trivial ground truth: one data layer loads the actual data and the other data layer loads the ground truth in lock-step. In this arrangement both data and label can be any 4D array. Further applications of multiple inputs are found in multi-modal and sequence models. In these cases you may need to implement your own data preparation routines or a special data layer.
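+
+A sketch of such a pairing -- two data layers reading aligned DBs in lock-step (the layer names and DB paths here are illustrative):
+
+    layers {
+      name: "data"
+      type: DATA
+      top: "data"
+      data_param {
+        source: "examples/my_task/train_data_lmdb"
+        backend: LMDB
+        batch_size: 64
+      }
+    }
+    layers {
+      name: "ground-truth"
+      type: DATA
+      top: "label"
+      data_param {
+        source: "examples/my_task/train_label_lmdb"
+        backend: LMDB
+        batch_size: 64
+      }
+    }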
+
+*Improvements to data processing to add formats, generality, or helper utilities are welcome!*
+
+## Formats
+
+Refer to the layer catalogue of [data layers](layers.html#data-layers) for close-ups on each type of data Caffe understands.
+
+## Deployment Input
+
+For on-the-fly computation, deployment Nets define their inputs by `input` fields: these Nets then accept direct assignment of data for online or interactive computation.
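+
+For instance, a deploy-time net might declare its input like this (the dimensions are an example for one 3-channel 227 x 227 image):
+
+    input: "data"
+    input_dim: 1
+    input_dim: 3
+    input_dim: 227
+    input_dim: 227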
diff --git a/caffe-crfrnn/docs/tutorial/fig/.gitignore b/caffe-crfrnn/docs/tutorial/fig/.gitignore
new file mode 100644
index 00000000..e69de29b
diff --git a/caffe-crfrnn/docs/tutorial/forward_backward.md b/caffe-crfrnn/docs/tutorial/forward_backward.md
new file mode 100644
index 00000000..f58b9cac
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/forward_backward.md
@@ -0,0 +1,37 @@
+---
+title: Forward and Backward for Inference and Learning
+---
+# Forward and Backward
+
+The forward and backward passes are the essential computations of a [Net](net_layer_blob.html).
+
+
+
+Let's consider a simple logistic regression classifier.
+
+The **forward** pass computes the output given the input for inference.
+In forward Caffe composes the computation of each layer to compute the "function" represented by the model.
+This pass goes from bottom to top.
+
+
+
+The data $x$ is passed through an inner product layer for $g(x)$ then through a softmax for $h(g(x))$ and softmax loss to give $f_W(x)$.
+
+The **backward** pass computes the gradient given the loss for learning.
+In backward Caffe reverse-composes the gradient of each layer to compute the gradient of the whole model by automatic differentiation.
+This is back-propagation.
+This pass goes from top to bottom.
+
+
+
+The backward pass begins with the loss and computes the gradient with respect to the output $\frac{\partial f_W}{\partial h}$. The gradient with respect to the rest of the model is computed layer-by-layer through the chain rule. Layers with parameters, like the `INNER_PRODUCT` layer, compute the gradient with respect to their parameters $\frac{\partial f_W}{\partial W_{\text{ip}}}$ during the backward step.
+
+These computations follow immediately from defining the model: Caffe plans and carries out the forward and backward passes for you.
+
+- The `Net::Forward()` and `Net::Backward()` methods carry out the respective passes while `Layer::Forward()` and `Layer::Backward()` compute each step.
+- Every layer type has `forward_{cpu,gpu}()` and `backward_{cpu,gpu}()` methods to compute its steps according to the mode of computation. A layer may only implement CPU or GPU mode due to constraints or convenience.
+
+The [Solver](solver.html) optimizes a model by first calling forward to yield the output and loss, then calling backward to generate the gradient of the model, and then incorporating the gradient into a weight update that attempts to minimize the loss. This division of labor between the Solver, Net, and Layer keeps Caffe modular and open to development.
+
+For the details of the forward and backward steps of Caffe's layer types, refer to the [layer catalogue](layers.html).
+
diff --git a/caffe-crfrnn/docs/tutorial/index.md b/caffe-crfrnn/docs/tutorial/index.md
new file mode 100644
index 00000000..7d4e77b1
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/index.md
@@ -0,0 +1,51 @@
+---
+title: Caffe Tutorial
+---
+# Caffe Tutorial
+
+Caffe is a deep learning framework and this tutorial explains its philosophy, architecture, and usage.
+This is a practical guide and framework introduction, so the full frontier, context, and history of deep learning cannot be covered here.
+While explanations will be given where possible, a background in machine learning and neural networks is helpful.
+
+## Philosophy
+
+In one sip, Caffe is brewed for
+
+- Expression: models and optimizations are defined as plaintext schemas instead of code.
+- Speed: for research and industry alike speed is crucial for state-of-the-art models and massive data.
+- Modularity: new tasks and settings require flexibility and extension.
+- Openness: scientific and applied progress call for common code, reference models, and reproducibility.
+- Community: academic research, startup prototypes, and industrial applications all share strength by joint discussion and development in a BSD-2 project.
+
+and these principles direct the project.
+
+## Tour
+
+- [Nets, Layers, and Blobs](net_layer_blob.html): the anatomy of a Caffe model.
+- [Forward / Backward](forward_backward.html): the essential computations of layered compositional models.
+- [Loss](loss.html): the task to be learned is defined by the loss.
+- [Solver](solver.html): the solver coordinates model optimization.
+- [Layer Catalogue](layers.html): the layer is the fundamental unit of modeling and computation -- Caffe's catalogue includes layers for state-of-the-art models.
+- [Interfaces](interfaces.html): command line, Python, and MATLAB Caffe.
+- [Data](data.html): how to caffeinate data for model input.
+
+For a closer look at a few details:
+
+- [Caffeinated Convolution](convolution.html): how Caffe computes convolutions.
+
+## Deeper Learning
+
+There are helpful references freely online for deep learning that complement our hands-on tutorial.
+These cover introductory and advanced material, background and history, and the latest advances.
+
+The [Tutorial on Deep Learning for Vision](https://sites.google.com/site/deeplearningcvpr2014/) from CVPR '14 is a good companion tutorial for researchers.
+Once you have the framework and practice foundations from the Caffe tutorial, explore the fundamental ideas and advanced research directions in the CVPR '14 tutorial.
+
+A broad introduction is given in the free online draft of [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html) by Michael Nielsen. In particular the chapters on using neural nets and how backpropagation works are helpful if you are new to the subject.
+
+These recent academic tutorials cover deep learning for researchers in machine learning and vision:
+
+- [Deep Learning Tutorial](http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf) by Yann LeCun (NYU, Facebook) and Marc'Aurelio Ranzato (Facebook). ICML 2013 tutorial.
+- [LISA Deep Learning Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf) by the LISA Lab directed by Yoshua Bengio (U. Montréal).
+
+For an exposition of neural networks in circuits and code, check out [Understanding Neural Networks from a Programmer's Perspective](http://karpathy.github.io/neuralnets/) by Andrej Karpathy (Stanford).
diff --git a/caffe-crfrnn/docs/tutorial/interfaces.md b/caffe-crfrnn/docs/tutorial/interfaces.md
new file mode 100644
index 00000000..6b0ec347
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/interfaces.md
@@ -0,0 +1,68 @@
+---
+title: Interfaces
+---
+# Interfaces
+
+Caffe has command line, Python, and MATLAB interfaces for day-to-day usage, interfacing with research code, and rapid prototyping. While Caffe is a C++ library at heart, exposing a modular interface for development, not every occasion calls for custom compilation. The cmdcaffe, pycaffe, and matcaffe interfaces are here for you.
+
+## Command Line
+
+The command line interface -- cmdcaffe -- is the `caffe` tool for model training, scoring, and diagnostics. Run `caffe` without any arguments for help. This tool and others are found in caffe/build/tools. (The following example calls require completing the LeNet / MNIST example first.)
+
+**Training**: `caffe train` learns models from scratch, resumes learning from saved snapshots, and fine-tunes models to new data and tasks. All training requires a solver configuration through the `-solver solver.prototxt` argument. Resuming requires the `-snapshot model_iter_1000.solverstate` argument to load the solver snapshot. Fine-tuning requires the `-weights model.caffemodel` argument for the model initialization.
+
+ # train LeNet
+ caffe train -solver examples/mnist/lenet_solver.prototxt
+ # train on GPU 2
+ caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2
+ # resume training from the half-way point snapshot
+ caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate
+
+For a full example of fine-tuning, see examples/finetuning_on_flickr_style, but the training call alone is
+
+ # fine-tune CaffeNet model weights for style recognition
+ caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
+
+**Testing**: `caffe test` scores models by running them in the test phase and reports the net output as its score. The net architecture must be properly defined to output an accuracy measure or loss as its output. The per-batch score is reported and then the grand average is reported last.
+
+    # score the learned LeNet model on the validation set as defined in the model architecture lenet_train_test.prototxt
+    caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100
+
+**Benchmarking**: `caffe time` benchmarks model execution layer-by-layer through timing and synchronization. This is useful to check system performance and measure relative execution times for models.
+
+    # (These example calls require you to complete the LeNet / MNIST example first.)
+ # time LeNet training on CPU for 10 iterations
+ caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10
+    # time LeNet training on GPU for the default 50 iterations
+    caffe time -model examples/mnist/lenet_train_test.prototxt -gpu 0
+
+**Diagnostics**: `caffe device_query` reports GPU details for reference and checking device ordinals for running on a given device in multi-GPU machines.
+
+ # query the first device
+ caffe device_query -gpu 0
+
+## Python
+
+The Python interface -- pycaffe -- is the `caffe` module and its scripts in caffe/python. `import caffe` to load models, do forward and backward, handle IO, visualize networks, and even instrument model solving. All model data, derivatives, and parameters are exposed for reading and writing.
+
+- `caffe.Net` is the central interface for loading, configuring, and running models. `caffe.Classifier` and `caffe.Detector` provide convenience interfaces for common tasks.
+- `caffe.SGDSolver` exposes the solving interface.
+- `caffe.io` handles input / output with preprocessing and protocol buffers.
+- `caffe.draw` visualizes network architectures.
+- Caffe blobs are exposed as numpy ndarrays for ease-of-use and efficiency.
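+
+A minimal usage sketch follows (hedged: exact constructor arguments vary across pycaffe versions, and the file names below are placeholders for your own model definition and trained weights, not files shipped with this package):
+
+    import numpy as np
+    import caffe
+
+    # Placeholder paths -- substitute your own prototxt and caffemodel.
+    net = caffe.Net('deploy.prototxt', 'weights.caffemodel')
+
+    # Blobs are exposed as numpy ndarrays; fill the input and run forward.
+    net.blobs['data'].data[...] = np.zeros(net.blobs['data'].data.shape,
+                                           dtype=np.float32)
+    out = net.forward()  # dict of output blob name -> ndarray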
+
+Tutorial IPython notebooks are found in caffe/examples: do `ipython notebook caffe/examples` to try them. For developer reference, docstrings can be found throughout the code.
+
+Compile pycaffe by `make pycaffe`. The module dir caffe/python/caffe should be installed in your PYTHONPATH for `import caffe`.
+
+## MATLAB
+
+The MATLAB interface -- matcaffe -- is the `caffe` mex and its helper m-files in caffe/matlab. Load models, do forward and backward, extract output and read-only model weights, and load the binaryproto format mean as a matrix.
+
+A MATLAB demo is in caffe/matlab/caffe/matcaffe_demo.m.
+
+Note that MATLAB matrices and memory are in column-major layout counter to Caffe's row-major layout! Double-check your work accordingly.
+
+Compile matcaffe by `make matcaffe`.
diff --git a/caffe-crfrnn/docs/tutorial/layers.md b/caffe-crfrnn/docs/tutorial/layers.md
new file mode 100644
index 00000000..5f8f519c
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/layers.md
@@ -0,0 +1,468 @@
+---
+title: Layer Catalogue
+---
+# Layers
+
+To create a Caffe model you need to define the model architecture in a protocol buffer definition file (prototxt).
+
+Caffe layers and their parameters are defined in the protocol buffer definitions for the project in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto). The latest definitions are in the [dev caffe.proto](https://github.com/BVLC/caffe/blob/dev/src/caffe/proto/caffe.proto).
+
+TODO complete list of layers linking to headings
+
+### Vision Layers
+
+* Header: `./include/caffe/vision_layers.hpp`
+
+Vision layers usually take *images* as input and produce other *images* as output.
+A typical "image" in the real world may have one color channel ($$c = 1$$), as in a grayscale image, or three color channels ($$c = 3$$) as in an RGB (red, green, blue) image.
+But in this context, the distinguishing characteristic of an image is its spatial structure: usually an image has some non-trivial height $$h > 1$$ and width $$w > 1$$.
+This 2D geometry naturally lends itself to certain decisions about how to process the input.
+In particular, most of the vision layers work by applying a particular operation to some region of the input to produce a corresponding region of the output.
+In contrast, other layers (with few exceptions) ignore the spatial structure of the input, effectively treating it as "one big vector" with dimension $$chw$$.
+
+
+#### Convolution
+
+* LayerType: `CONVOLUTION`
+* CPU implementation: `./src/caffe/layers/convolution_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/convolution_layer.cu`
+* Parameters (`ConvolutionParameter convolution_param`)
+ - Required
+ - `num_output` (`c_o`): the number of filters
+ - `kernel_size` (or `kernel_h` and `kernel_w`): specifies height and width of each filter
+ - Strongly Recommended
+ - `weight_filler` [default `type: 'constant' value: 0`]
+ - Optional
+ - `bias_term` [default `true`]: specifies whether to learn and apply a set of additive biases to the filter outputs
+ - `pad` (or `pad_h` and `pad_w`) [default 0]: specifies the number of pixels to (implicitly) add to each side of the input
+ - `stride` (or `stride_h` and `stride_w`) [default 1]: specifies the intervals at which to apply the filters to the input
+        - `group` (g) [default 1]: If g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the $$i$$th group of output channels is connected only to the $$i$$th group of input channels.
+* Input
+ - `n * c_i * h_i * w_i`
+* Output
+ - `n * c_o * h_o * w_o`, where `h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1` and `w_o` likewise.
+* Sample (as seen in `./examples/imagenet/imagenet_train_val.prototxt`)
+
+ layers {
+ name: "conv1"
+ type: CONVOLUTION
+ bottom: "data"
+ top: "conv1"
+ blobs_lr: 1 # learning rate multiplier for the filters
+ blobs_lr: 2 # learning rate multiplier for the biases
+ weight_decay: 1 # weight decay multiplier for the filters
+ weight_decay: 0 # weight decay multiplier for the biases
+ convolution_param {
+ num_output: 96 # learn 96 filters
+ kernel_size: 11 # each filter is 11x11
+ stride: 4 # step 4 pixels between each filter application
+ weight_filler {
+ type: "gaussian" # initialize the filters from a Gaussian
+ std: 0.01 # distribution with stdev 0.01 (default mean: 0)
+ }
+ bias_filler {
+ type: "constant" # initialize the biases to zero (0)
+ value: 0
+ }
+ }
+ }
+
+The `CONVOLUTION` layer convolves the input image with a set of learnable filters, each producing one feature map in the output image.
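+
+The output size formula above is easy to sanity-check in Python (a standalone sketch, not part of Caffe):
+
+    def conv_out_dim(in_dim, kernel, pad=0, stride=1):
+        # h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1, with integer division
+        return (in_dim + 2 * pad - kernel) // stride + 1
+
+    # a 227x227 input with the 11x11, stride-4 filters from the sample: 55x55 output
+    assert conv_out_dim(227, 11, pad=0, stride=4) == 55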
+
+#### Pooling
+
+* LayerType: `POOLING`
+* CPU implementation: `./src/caffe/layers/pooling_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/pooling_layer.cu`
+* Parameters (`PoolingParameter pooling_param`)
+ - Required
+ - `kernel_size` (or `kernel_h` and `kernel_w`): specifies height and width of each filter
+ - Optional
+ - `pool` [default MAX]: the pooling method. Currently MAX, AVE, or STOCHASTIC
+ - `pad` (or `pad_h` and `pad_w`) [default 0]: specifies the number of pixels to (implicitly) add to each side of the input
+ - `stride` (or `stride_h` and `stride_w`) [default 1]: specifies the intervals at which to apply the filters to the input
+* Input
+ - `n * c * h_i * w_i`
+* Output
+ - `n * c * h_o * w_o`, where h_o and w_o are computed in the same way as convolution.
+* Sample (as seen in `./examples/imagenet/imagenet_train_val.prototxt`)
+
+ layers {
+ name: "pool1"
+ type: POOLING
+ bottom: "conv1"
+ top: "pool1"
+ pooling_param {
+ pool: MAX
+ kernel_size: 3 # pool over a 3x3 region
+ stride: 2 # step two pixels (in the bottom blob) between pooling regions
+ }
+ }
+
+#### Local Response Normalization (LRN)
+
+* LayerType: `LRN`
+* CPU Implementation: `./src/caffe/layers/lrn_layer.cpp`
+* CUDA GPU Implementation: `./src/caffe/layers/lrn_layer.cu`
+* Parameters (`LRNParameter lrn_param`)
+ - Optional
+ - `local_size` [default 5]: the number of channels to sum over (for cross channel LRN) or the side length of the square region to sum over (for within channel LRN)
+ - `alpha` [default 1]: the scaling parameter (see below)
+        - `beta` [default 0.75]: the exponent (see below)
+        - `norm_region` [default `ACROSS_CHANNELS`]: whether to sum over adjacent channels (`ACROSS_CHANNELS`) or nearby spatial locations (`WITHIN_CHANNEL`)
+
+The local response normalization layer performs a kind of "lateral inhibition" by normalizing over local input regions. In `ACROSS_CHANNELS` mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape `local_size x 1 x 1`). In `WITHIN_CHANNEL` mode, the local regions extend spatially, but are in separate channels (i.e., they have shape `1 x local_size x local_size`). Each input value is divided by $$(1 + (\alpha/n) \sum_i x_i^2)^\beta$$, where $$n$$ is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).
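+
+To make the formula concrete, here is a numpy sketch of the `ACROSS_CHANNELS` mode for a single spatial position (a reference implementation in spirit, not the code Caffe runs):
+
+    import numpy as np
+
+    def lrn_across_channels(x, local_size=5, alpha=1.0, beta=0.75):
+        # x: 1-D array of activations across channels at one (h, w) position
+        x = np.asarray(x, dtype=np.float64)
+        out = np.empty_like(x)
+        half = local_size // 2
+        for c in range(len(x)):
+            lo, hi = max(0, c - half), min(len(x), c + half + 1)
+            # divide by (1 + (alpha/n) * sum of squares over the local region) ^ beta
+            scale = (1.0 + (alpha / local_size) * np.sum(x[lo:hi] ** 2)) ** beta
+            out[c] = x[c] / scale
+        return out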
+
+#### im2col
+
+`IM2COL` is a helper for doing the image-to-column transformation that you most likely do not need to know about. This is used in Caffe's original convolution to do matrix multiplication by laying out all patches into a matrix.
+
+### Loss Layers
+
+Loss drives learning by comparing an output to a target and assigning cost to minimize. The loss itself is computed by the forward pass and the gradient w.r.t. the loss is computed by the backward pass.
+
+#### Softmax
+
+* LayerType: `SOFTMAX_LOSS`
+
+The softmax loss layer computes the multinomial logistic loss of the softmax of its inputs. It's conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.
+
+#### Sum-of-Squares / Euclidean
+
+* LayerType: `EUCLIDEAN_LOSS`
+
+The Euclidean loss layer computes the sum of squares of differences of its two inputs, $$\frac 1 {2N} \sum_{i=1}^N \| x^1_i - x^2_i \|_2^2$$.
+
+#### Hinge / Margin
+
+* LayerType: `HINGE_LOSS`
+* CPU implementation: `./src/caffe/layers/hinge_loss_layer.cpp`
+* CUDA GPU implementation: none yet
+* Parameters (`HingeLossParameter hinge_loss_param`)
+ - Optional
+ - `norm` [default L1]: the norm used. Currently L1, L2
+* Inputs
+ - `n * c * h * w` Predictions
+ - `n * 1 * 1 * 1` Labels
+* Output
+ - `1 * 1 * 1 * 1` Computed Loss
+* Samples
+
+ # L1 Norm
+ layers {
+ name: "loss"
+ type: HINGE_LOSS
+ bottom: "pred"
+ bottom: "label"
+ }
+
+ # L2 Norm
+ layers {
+ name: "loss"
+ type: HINGE_LOSS
+ bottom: "pred"
+ bottom: "label"
+ top: "loss"
+ hinge_loss_param {
+ norm: L2
+ }
+ }
+
+The hinge loss layer computes a one-vs-all hinge or squared hinge loss.
+
+#### Sigmoid Cross-Entropy
+
+`SIGMOID_CROSS_ENTROPY_LOSS`
+
+#### Infogain
+
+`INFOGAIN_LOSS`
+
+#### Accuracy and Top-k
+
+`ACCURACY` scores the output as its accuracy with respect to the target -- it is not actually a loss and has no backward step.
+
+### Activation / Neuron Layers
+
+In general, activation / neuron layers are element-wise operators, taking one bottom blob and producing one top blob of the same size. In the layers below, we will ignore the input and output sizes as they are identical:
+
+* Input
+ - n * c * h * w
+* Output
+ - n * c * h * w
+
+#### ReLU / Rectified-Linear and Leaky-ReLU
+
+* LayerType: `RELU`
+* CPU implementation: `./src/caffe/layers/relu_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/relu_layer.cu`
+* Parameters (`ReLUParameter relu_param`)
+ - Optional
+ - `negative_slope` [default 0]: specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.
+* Sample (as seen in `./examples/imagenet/imagenet_train_val.prototxt`)
+
+ layers {
+ name: "relu1"
+ type: RELU
+ bottom: "conv1"
+ top: "conv1"
+ }
+
+Given an input value x, the `RELU` layer computes the output as x if x > 0 and negative_slope * x if x <= 0. When the negative slope parameter is not set, it is equivalent to the standard ReLU function of taking max(x, 0). It also supports in-place computation, meaning that the bottom and the top blob could be the same to reduce memory consumption.
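+
+In numpy terms, the computation is simply (a sketch, not the Caffe implementation):
+
+    import numpy as np
+
+    def relu(x, negative_slope=0.0):
+        # standard ReLU when negative_slope == 0; "leaky" ReLU otherwise
+        return np.where(x > 0, x, negative_slope * x)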
+
+#### Sigmoid
+
+* LayerType: `SIGMOID`
+* CPU implementation: `./src/caffe/layers/sigmoid_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/sigmoid_layer.cu`
+* Sample (as seen in `./examples/mnist/mnist_autoencoder.prototxt`)
+
+ layers {
+ name: "encode1neuron"
+ bottom: "encode1"
+ top: "encode1neuron"
+ type: SIGMOID
+ }
+
+The `SIGMOID` layer computes the output as sigmoid(x) for each input element x.
+
+#### TanH / Hyperbolic Tangent
+
+* LayerType: `TANH`
+* CPU implementation: `./src/caffe/layers/tanh_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/tanh_layer.cu`
+* Sample
+
+ layers {
+ name: "layer"
+ bottom: "in"
+ top: "out"
+ type: TANH
+ }
+
+The `TANH` layer computes the output as tanh(x) for each input element x.
+
+#### Absolute Value
+
+* LayerType: `ABSVAL`
+* CPU implementation: `./src/caffe/layers/absval_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/absval_layer.cu`
+* Sample
+
+ layers {
+ name: "layer"
+ bottom: "in"
+ top: "out"
+ type: ABSVAL
+ }
+
+The `ABSVAL` layer computes the output as abs(x) for each input element x.
+
+#### Power
+
+* LayerType: `POWER`
+* CPU implementation: `./src/caffe/layers/power_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/power_layer.cu`
+* Parameters (`PowerParameter power_param`)
+ - Optional
+ - `power` [default 1]
+ - `scale` [default 1]
+ - `shift` [default 0]
+* Sample
+
+ layers {
+ name: "layer"
+ bottom: "in"
+ top: "out"
+ type: POWER
+ power_param {
+ power: 1
+ scale: 1
+ shift: 0
+ }
+ }
+
+The `POWER` layer computes the output as (shift + scale * x) ^ power for each input element x.
+
+#### BNLL
+
+* LayerType: `BNLL`
+* CPU implementation: `./src/caffe/layers/bnll_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/bnll_layer.cu`
+* Sample
+
+ layers {
+ name: "layer"
+ bottom: "in"
+ top: "out"
+ type: BNLL
+ }
+
+The `BNLL` (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.
+
+
+### Data Layers
+
+Data enters Caffe through data layers: they lie at the bottom of nets. Data can come from efficient databases (LevelDB or LMDB), directly from memory, or, when efficiency is not critical, from files on disk in HDF5 or common image formats.
+
+Common input preprocessing (mean subtraction, scaling, random cropping, and mirroring) is available by specifying `TransformationParameter`s.
+
+#### Database
+
+* LayerType: `DATA`
+* Parameters
+ - Required
+ - `source`: the name of the directory containing the database
+ - `batch_size`: the number of inputs to process at one time
+ - Optional
+        - `rand_skip`: skip up to this number of inputs at the beginning; useful for asynchronous SGD
+        - `backend` [default `LEVELDB`]: choose whether to use a `LEVELDB` or an `LMDB` backend
+
+
+
+#### In-Memory
+
+* LayerType: `MEMORY_DATA`
+* Parameters
+ - Required
+ - `batch_size`, `channels`, `height`, `width`: specify the size of input chunks to read from memory
+
+The memory data layer reads data directly from memory, without copying it. To use it, call `MemoryDataLayer::Reset` (from C++) or `Net.set_input_arrays` (from Python) to specify a source of contiguous data (as a 4D row-major array), which is read one batch-sized chunk at a time.
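+
+A hedged Python sketch (the prototxt and weight files are hypothetical placeholders; the net is assumed to start with a `MEMORY_DATA` layer, and the arrays must be float32 and row-major):
+
+    import numpy as np
+    import caffe
+
+    net = caffe.Net('memory_net.prototxt', 'weights.caffemodel')  # hypothetical files
+
+    # 10 RGB images of size 32x32, plus one label per image.
+    data = np.zeros((10, 3, 32, 32), dtype=np.float32)
+    labels = np.zeros((10, 1, 1, 1), dtype=np.float32)
+    net.set_input_arrays(data, labels)
+    out = net.forward()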
+
+#### HDF5 Input
+
+* LayerType: `HDF5_DATA`
+* Parameters
+ - Required
+ - `source`: the name of the file to read from
+ - `batch_size`
+
+#### HDF5 Output
+
+* LayerType: `HDF5_OUTPUT`
+* Parameters
+ - Required
+ - `file_name`: name of file to write to
+
+The HDF5 output layer performs the opposite function of the other layers in this section: it writes its input blobs to disk.
+
+#### Images
+
+* LayerType: `IMAGE_DATA`
+* Parameters
+ - Required
+ - `source`: name of a text file, with each line giving an image filename and label
+ - `batch_size`: number of images to batch together
+ - Optional
+ - `rand_skip`
+ - `shuffle` [default false]
+ - `new_height`, `new_width`: if provided, resize all images to this size
+
+#### Windows
+
+`WINDOW_DATA`
+
+#### Dummy
+
+`DUMMY_DATA` is for development and debugging. See `DummyDataParameter`.
+
+### Common Layers
+
+#### Inner Product
+
+* LayerType: `INNER_PRODUCT`
+* CPU implementation: `./src/caffe/layers/inner_product_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/inner_product_layer.cu`
+* Parameters (`InnerProductParameter inner_product_param`)
+ - Required
+        - `num_output` (`c_o`): the number of outputs of the layer
+ - Strongly recommended
+ - `weight_filler` [default `type: 'constant' value: 0`]
+ - Optional
+ - `bias_filler` [default `type: 'constant' value: 0`]
+ - `bias_term` [default `true`]: specifies whether to learn and apply a set of additive biases to the filter outputs
+* Input
+ - `n * c_i * h_i * w_i`
+* Output
+ - `n * c_o * 1 * 1`
+* Sample
+
+ layers {
+ name: "fc8"
+ type: INNER_PRODUCT
+ blobs_lr: 1 # learning rate multiplier for the filters
+ blobs_lr: 2 # learning rate multiplier for the biases
+ weight_decay: 1 # weight decay multiplier for the filters
+ weight_decay: 0 # weight decay multiplier for the biases
+ inner_product_param {
+ num_output: 1000
+ weight_filler {
+ type: "gaussian"
+ std: 0.01
+ }
+ bias_filler {
+ type: "constant"
+ value: 0
+ }
+ }
+ bottom: "fc7"
+ top: "fc8"
+ }
+
+The `INNER_PRODUCT` layer (also usually referred to as the fully connected layer) treats the input as a simple vector and produces an output in the form of a single vector (with the blob's height and width set to 1).
+
+#### Splitting
+
+The `SPLIT` layer is a utility layer that splits an input blob into multiple output blobs. This is used when a blob is fed into multiple output layers.
+
+#### Flattening
+
+The `FLATTEN` layer is a utility layer that flattens an input of shape `n * c * h * w` to a simple vector output of shape `n * (c*h*w) * 1 * 1`.
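+
+In numpy terms the shape transformation looks like this (illustrative only):
+
+    import numpy as np
+
+    x = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)
+    flat = x.reshape(2, 3 * 4 * 5, 1, 1)  # n * (c*h*w) * 1 * 1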
+
+#### Concatenation
+
+* LayerType: `CONCAT`
+* CPU implementation: `./src/caffe/layers/concat_layer.cpp`
+* CUDA GPU implementation: `./src/caffe/layers/concat_layer.cu`
+* Parameters (`ConcatParameter concat_param`)
+ - Optional
+ - `concat_dim` [default 1]: 0 for concatenation along num and 1 for channels.
+* Input
+ - `n_i * c_i * h * w` for each input blob i from 1 to K.
+* Output
+ - if `concat_dim = 0`: `(n_1 + n_2 + ... + n_K) * c_1 * h * w`, and all input `c_i` should be the same.
+ - if `concat_dim = 1`: `n_1 * (c_1 + c_2 + ... + c_K) * h * w`, and all input `n_i` should be the same.
+* Sample
+
+ layers {
+ name: "concat"
+ bottom: "in1"
+ bottom: "in2"
+ top: "out"
+ type: CONCAT
+ concat_param {
+ concat_dim: 1
+ }
+ }
+
+The `CONCAT` layer is a utility layer that concatenates its multiple input blobs to one single output blob. Currently, the layer supports concatenation along num or channels only.
+
+#### Slicing
+
+The `SLICE` layer is a utility layer that slices an input blob into multiple output blobs along a given dimension (currently num or channel only) with given slice indices.
+
+#### Elementwise Operations
+
+`ELTWISE`
+
+#### Argmax
+
+`ARGMAX`
+
+#### Softmax
+
+`SOFTMAX`
+
+#### Mean-Variance Normalization
+
+`MVN`
diff --git a/caffe-crfrnn/docs/tutorial/loss.md b/caffe-crfrnn/docs/tutorial/loss.md
new file mode 100644
index 00000000..aac56177
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/loss.md
@@ -0,0 +1,51 @@
+---
+title: Loss
+---
+# Loss
+
+In Caffe, as in most of machine learning, learning is driven by a **loss** function (also known as an **error**, **cost**, or **objective** function).
+A loss function specifies the goal of learning by mapping parameter settings (i.e., the current network weights) to a scalar value specifying the "badness" of these parameter settings.
+Hence, the goal of learning is to find a setting of the weights that *minimizes* the loss function.
+
+The loss in Caffe is computed by the Forward pass of the network.
+Each layer takes a set of input (`bottom`) blobs and produces a set of output (`top`) blobs.
+Some of these layers' outputs may be used in the loss function.
+A typical choice of loss function for one-versus-all classification tasks is the `SOFTMAX_LOSS` function, used in a network definition as follows, for example:
+
+ layers {
+ name: "loss"
+ type: SOFTMAX_LOSS
+ bottom: "pred"
+ bottom: "label"
+ top: "loss"
+ }
+
+In a `SOFTMAX_LOSS` function, the `top` blob is a scalar (dimensions $$1 \times 1 \times 1 \times 1$$) which averages the loss (computed from predicted labels `pred` and actual labels `label`) over the entire mini-batch.
+
+### Loss weights
+
+For nets with multiple layers producing a loss (e.g., a network that both classifies the input using a `SOFTMAX_LOSS` layer and reconstructs it using a `EUCLIDEAN_LOSS` layer), *loss weights* can be used to specify their relative importance.
+
+By convention, Caffe layer types with the suffix `_LOSS` contribute to the loss function, but other layers are assumed to be purely used for intermediate computations.
+However, any layer can be used as a loss by adding a field `loss_weight: <float>` to a layer definition for each `top` blob produced by the layer.
+Layers with the suffix `_LOSS` have an implicit `loss_weight: 1` for the first `top` blob (and `loss_weight: 0` for any additional `top`s); other layers have an implicit `loss_weight: 0` for all `top`s.
+So, the above `SOFTMAX_LOSS` layer could be equivalently written as:
+
+ layers {
+ name: "loss"
+ type: SOFTMAX_LOSS
+ bottom: "pred"
+ bottom: "label"
+ top: "loss"
+ loss_weight: 1
+ }
+
+However, *any* layer able to backpropagate may be given a non-zero `loss_weight`, allowing one to, for example, regularize the activations produced by some intermediate layer(s) of the network if desired.
+For non-singleton outputs with an associated non-zero loss, the loss is computed simply by summing over all entries of the blob.
+
+The final loss in Caffe, then, is computed by summing the total weighted loss over the network, as in the following pseudo-code:
+
+    loss = 0
+    for layer in layers:
+        for top, loss_weight in zip(layer.tops, layer.loss_weights):
+            loss += loss_weight * sum(top)
diff --git a/caffe-crfrnn/docs/tutorial/net_layer_blob.md b/caffe-crfrnn/docs/tutorial/net_layer_blob.md
new file mode 100644
index 00000000..1f0966f8
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/net_layer_blob.md
@@ -0,0 +1,170 @@
+---
+title: Blobs, Layers, and Nets
+---
+# Blobs, Layers, and Nets: anatomy of a Caffe model
+
+Deep networks are compositional models that are naturally represented as a collection of inter-connected layers that work on chunks of data. Caffe defines a net layer-by-layer in its own model schema. The network defines the entire model bottom-to-top from input data to loss. As data and derivatives flow through the network in the [forward and backward passes](forward_backward.html), Caffe stores, communicates, and manipulates the information as *blobs*: the blob is the standard array and unified memory interface for the framework. The layer comes next as the foundation of both model and computation. The net follows as the collection and connection of layers. The sections below describe how information is stored and communicated in and across blobs, layers, and nets.
+
+[Solving](solver.html) is configured separately to decouple modeling and optimization.
+
+We will go over each of these components in more detail.
+
+## Blob storage and communication
+
+A Blob is a wrapper over the actual data being processed and passed along by Caffe, and also under the hood provides synchronization capability between the CPU and the GPU. Mathematically, a blob is a 4-dimensional array stored in C-contiguous fashion, with dimensions ordered (Num, Channels, Height, Width) from major to minor. The leading Num dimension exists mainly for batching (the name is due to legacy reasons; it is equivalent to the notion of "batch" as in minibatch SGD).
+
+Caffe stores and communicates data in 4-dimensional arrays called blobs. Blobs provide a unified memory interface, holding data e.g. batches of images, model parameters, and derivatives for optimization.
+
+Blobs conceal the computational and mental overhead of mixed CPU/GPU operation by synchronizing from the CPU host to the GPU device as needed. Memory on the host and device is allocated on demand (lazily) for efficient memory usage.
+
+The conventional blob dimensions for data are number N x channel K x height H x width W. Blob memory is row-major in layout so the last / rightmost dimension changes fastest. For example, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W + w.
+
+- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images, N = 256.
+- Channel / K is the feature dimension e.g. for RGB images K = 3.
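+
+The row-major index formula above can be checked against numpy, which uses the same C-contiguous layout (a sketch, not Caffe code):
+
+    import numpy as np
+
+    N, K, H, W = 2, 3, 4, 5
+    blob = np.arange(N * K * H * W).reshape(N, K, H, W)
+    n, k, h, w = 1, 2, 3, 4
+    assert blob[n, k, h, w] == blob.flat[((n * K + k) * H + h) * W + w]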
+
+Note that although we have designed blobs with their dimensions corresponding to image applications, they are named purely for notational purposes and it is totally valid for you to use them for non-image applications. For example, if you simply need fully-connected networks like the conventional multi-layer perceptron, use blobs of dimensions (Num, Channels, 1, 1) and call the InnerProductLayer (which we will cover soon).
+
+Caffe operations are general with respect to the channel dimension / K. Grayscale and hyperspectral imagery are fine. Caffe can likewise model and process arbitrary vectors in blobs with singleton dimensions. That is, the shape of a blob holding 1000 vectors of 16 feature dimensions is 1000 x 16 x 1 x 1.
+
+Parameter blob dimensions vary according to the type and configuration of the layer. For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs the blob is 96 x 3 x 11 x 11. For an inner product / fully-connected layer with 1000 output channels and 1024 input channels the parameter blob is 1 x 1 x 1000 x 1024.
+
+For custom data it may be necessary to hack your own input preparation tool or data layer. However, once your data is in, your job is done. The modularity of layers accomplishes the rest of the work for you.
+
+### Implementation Details
+
+As we are often interested in the values as well as the gradients of the blob, a Blob stores two chunks of memory, *data* and *diff*. The former is the normal data that we pass along, and the latter is the gradient computed by the network.
+
+Further, as the actual values could be stored either on the CPU or on the GPU, there are two different ways to access them: the const way, which does not change the values, and the mutable way, which changes the values:
+
+ const Dtype* cpu_data() const;
+ Dtype* mutable_cpu_data();
+
+(similarly for gpu and diff).
+
+The reason for this design is that a Blob uses a SyncedMem class to synchronize values between the CPU and GPU in order to hide the synchronization details and to minimize data transfer. A rule of thumb is: always use the const call if you do not want to change the values, and never store the pointers in your own object. Every time you work on a blob, call the functions to get the pointers, as the SyncedMem will need this to figure out when to copy data.
+
+In practice when GPUs are present, one loads data from the disk to a blob in CPU code, calls a device kernel to do GPU computation, and ferries the blob off to the next layer, ignoring low-level details while maintaining a high level of performance. As long as all layers have GPU implementations, all the intermediate data and gradients will remain in the GPU.
+
+If you want to check out when a Blob will copy data, here is an illustrative example:
+
+ // Assuming that data are on the CPU initially, and we have a blob.
+ const Dtype* foo;
+ Dtype* bar;
+ foo = blob.gpu_data(); // data copied cpu->gpu.
+ foo = blob.cpu_data(); // no data copied since both have up-to-date contents.
+ bar = blob.mutable_gpu_data(); // no data copied.
+ // ... some operations ...
+ bar = blob.mutable_gpu_data(); // no data copied when we are still on GPU.
+ foo = blob.cpu_data(); // data copied gpu->cpu, since the gpu side has modified the data
+ foo = blob.gpu_data(); // no data copied since both have up-to-date contents
+ bar = blob.mutable_cpu_data(); // still no data copied.
+ bar = blob.mutable_gpu_data(); // data copied cpu->gpu.
+ bar = blob.mutable_cpu_data(); // data copied gpu->cpu.
+
+## Layer computation and connections
+
+The layer is the essence of a model and the fundamental unit of computation. Layers convolve filters, pool, take inner products, apply nonlinearities like rectified-linear and sigmoid and other elementwise transformations, normalize, load data, and compute losses like softmax and hinge. [See the layer catalogue](layers.html) for all operations. Most of the types needed for state-of-the-art deep learning tasks are there.
+
+
+
+A layer takes input through *bottom* connections and makes output through *top* connections.
+
+Each layer type defines three critical computations: *setup*, *forward*, and *backward*.
+
+- Setup: initialize the layer and its connections once at model initialization.
+- Forward: given input from bottom compute the output and send to the top.
+- Backward: given the gradient w.r.t. the top output, compute the gradient w.r.t. the input and send to the bottom. A layer with parameters computes the gradient w.r.t. its parameters and stores it internally.
+
+More specifically, there will be two Forward and Backward functions implemented, one for CPU and one for GPU. If you do not implement a GPU version, the layer will fall back to the CPU functions as a backup option. This may come in handy if you would like to do quick experiments, although it may come with an additional data transfer cost (its inputs will be copied from GPU to CPU, and its outputs will be copied back from CPU to GPU).
+
+Layers have two key responsibilities for the operation of the network as a whole: a *forward pass* that takes the inputs and produces the outputs, and a *backward pass* that takes the gradient with respect to the output, and computes the gradients with respect to the parameters and to the inputs, which are in turn back-propagated to earlier layers. These passes are simply the composition of each layer's forward and backward.
+
+Developing custom layers requires minimal effort thanks to the compositionality of the network and the modularity of the code. Define the setup, forward, and backward for the layer and it is ready for inclusion in a net.
+
+## Net definition and operation
+
+The net jointly defines a function and its gradient by composition and auto-differentiation. The composition of every layer's output computes the function to do a given task, and the composition of every layer's backward computes the gradient from the loss to learn the task. Caffe models are end-to-end machine learning engines.
+
+The net is a set of layers connected in a computation graph -- a directed acyclic graph (DAG) to be exact. Caffe does all the bookkeeping for any DAG of layers to ensure correctness of the forward and backward passes. A typical net begins with a data layer that loads from disk and ends with a loss layer that computes the objective for a task such as classification or reconstruction.
+
+The net is defined as a set of layers and their connections in a plaintext modeling language.
+A simple logistic regression classifier
+
+
+
+is defined by
+
+ name: "LogReg"
+ layers {
+ name: "mnist"
+ type: DATA
+ top: "data"
+ top: "label"
+ data_param {
+ source: "input_leveldb"
+ batch_size: 64
+ }
+ }
+ layers {
+ name: "ip"
+ type: INNER_PRODUCT
+ bottom: "data"
+ top: "ip"
+ inner_product_param {
+ num_output: 2
+ }
+ }
+ layers {
+ name: "loss"
+ type: SOFTMAX_LOSS
+ bottom: "ip"
+ bottom: "label"
+ top: "loss"
+ }
+
+Model initialization is handled by `Net::Init()`. The initialization mainly does two things: scaffolding the overall DAG by creating the blobs and layers (for C++ geeks: the network will retain ownership of the blobs and layers during its lifetime), and calling the layers' `SetUp()` functions. It also does a set of other bookkeeping things, such as validating the correctness of the overall network architecture. Also, during initialization the Net explains its initialization by logging to INFO as it goes:
+
+ I0902 22:52:17.931977 2079114000 net.cpp:39] Initializing net from parameters:
+ name: "LogReg"
+ [...model prototxt printout...]
+ # construct the network layer-by-layer
+ I0902 22:52:17.932152 2079114000 net.cpp:67] Creating Layer mnist
+ I0902 22:52:17.932165 2079114000 net.cpp:356] mnist -> data
+ I0902 22:52:17.932188 2079114000 net.cpp:356] mnist -> label
+ I0902 22:52:17.932200 2079114000 net.cpp:96] Setting up mnist
+ I0902 22:52:17.935807 2079114000 data_layer.cpp:135] Opening leveldb input_leveldb
+ I0902 22:52:17.937155 2079114000 data_layer.cpp:195] output data size: 64,1,28,28
+ I0902 22:52:17.938570 2079114000 net.cpp:103] Top shape: 64 1 28 28 (50176)
+ I0902 22:52:17.938593 2079114000 net.cpp:103] Top shape: 64 1 1 1 (64)
+ I0902 22:52:17.938611 2079114000 net.cpp:67] Creating Layer ip
+ I0902 22:52:17.938617 2079114000 net.cpp:394] ip <- data
+ I0902 22:52:17.939177 2079114000 net.cpp:356] ip -> ip
+ I0902 22:52:17.939196 2079114000 net.cpp:96] Setting up ip
+ I0902 22:52:17.940289 2079114000 net.cpp:103] Top shape: 64 2 1 1 (128)
+ I0902 22:52:17.941270 2079114000 net.cpp:67] Creating Layer loss
+ I0902 22:52:17.941305 2079114000 net.cpp:394] loss <- ip
+ I0902 22:52:17.941314 2079114000 net.cpp:394] loss <- label
+ I0902 22:52:17.941323 2079114000 net.cpp:356] loss -> loss
+ # set up the loss and configure the backward pass
+ I0902 22:52:17.941328 2079114000 net.cpp:96] Setting up loss
+ I0902 22:52:17.941328 2079114000 net.cpp:103] Top shape: 1 1 1 1 (1)
+ I0902 22:52:17.941329 2079114000 net.cpp:109] with loss weight 1
+ I0902 22:52:17.941779 2079114000 net.cpp:170] loss needs backward computation.
+ I0902 22:52:17.941787 2079114000 net.cpp:170] ip needs backward computation.
+ I0902 22:52:17.941794 2079114000 net.cpp:172] mnist does not need backward computation.
+ # determine outputs
+ I0902 22:52:17.941800 2079114000 net.cpp:208] This network produces output loss
+ # finish initialization and report memory usage
+ I0902 22:52:17.941810 2079114000 net.cpp:467] Collecting Learning Rate and Weight Decay.
+ I0902 22:52:17.941818 2079114000 net.cpp:219] Network initialization done.
+ I0902 22:52:17.941824 2079114000 net.cpp:220] Memory required for data: 201476
+
+Note that the construction of the network is device agnostic - recall our earlier explanation that blobs and layers hide implementation details from the model definition. After construction, the network is run on either CPU or GPU by setting a single switch defined in `Caffe::mode()` and set by `Caffe::set_mode()`. Layers come with corresponding CPU and GPU routines that produce identical results (up to numerical errors, and with tests to guard it). The CPU / GPU switch is seamless and independent of the model definition. For research and deployment alike it is best to divide model and implementation.
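+
+From Python, for example, flipping the mode is a single call (a sketch; in older pycaffe builds the equivalent calls may live on the net object rather than the module, so double-check your version):
+
+    import caffe
+
+    caffe.set_mode_cpu()  # run all layers on the CPU
+    caffe.set_mode_gpu()  # or run all layers on the GPU; the model is unchanged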
+
+### Model format
+
+The models are defined in plaintext protocol buffer schema (prototxt) while the learned models are serialized as binary protocol buffer (binaryproto) .caffemodel files.
+
+The model format is defined by the protobuf schema in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto). The source file is mostly self-explanatory so one is encouraged to check it out.
+
+Caffe speaks [Google Protocol Buffer](https://code.google.com/p/protobuf/) for the following strengths: minimal-size binary strings when serialized, efficient serialization, a human-readable text format compatible with the binary version, and efficient interface implementations in multiple languages, most notably C++ and Python. This all contributes to the flexibility and extensibility of modeling in Caffe.
diff --git a/caffe-crfrnn/docs/tutorial/solver.md b/caffe-crfrnn/docs/tutorial/solver.md
new file mode 100644
index 00000000..8884ea0e
--- /dev/null
+++ b/caffe-crfrnn/docs/tutorial/solver.md
@@ -0,0 +1,271 @@
+---
+title: Solver / Model Optimization
+---
+# Solver
+
+The solver orchestrates model optimization by coordinating the network's forward inference and backward gradients to form parameter updates that attempt to improve the loss.
+The responsibilities of learning are divided between the Solver for overseeing the optimization and generating parameter updates and the Net for yielding loss and gradients.
+
+The Caffe solvers are Stochastic Gradient Descent (SGD), Adaptive Gradient (ADAGRAD), and Nesterov's Accelerated Gradient (NAG).
+
+The solver
+
+1. scaffolds the optimization bookkeeping and creates the training network for learning and test network(s) for evaluation.
+2. iteratively optimizes by calling forward / backward and updating parameters
+3. (periodically) evaluates the test networks
+4. snapshots the model and solver state throughout the optimization
+
+where each iteration
+
+1. calls network forward to compute the output and loss
+2. calls network backward to compute the gradients
+3. incorporates the gradients into parameter updates according to the solver method
+4. updates the solver state according to learning rate, history, and method
+
+to take the weights all the way from initialization to learned model.
+
+Like Caffe models, Caffe solvers run in CPU / GPU modes.
+
+## Methods
+
+The solver methods address the general optimization problem of loss minimization.
+For dataset $$D$$, the optimization objective is the average loss over all $$|D|$$ data instances throughout the dataset
+
+$$L(W) = \frac{1}{|D|} \sum_i^{|D|} f_W\left(X^{(i)}\right) + \lambda r(W)$$
+
+where $$f_W\left(X^{(i)}\right)$$ is the loss on data instance $$X^{(i)}$$ and $$r(W)$$ is a regularization term with weight $$\lambda$$.
+$$|D|$$ can be very large, so in practice, in each solver iteration we use a stochastic approximation of this objective, drawing a mini-batch of $$N \ll |D|$$ instances:
+
+$$L(W) \approx \frac{1}{N} \sum_i^N f_W\left(X^{(i)}\right) + \lambda r(W)$$
+
+The model computes $$f_W$$ in the forward pass and the gradient $$\nabla f_W$$ in the backward pass.
+
+The parameter update $$\Delta W$$ is formed by the solver from the error gradient $$\nabla f_W$$, the regularization gradient $$\nabla r(W)$$, and other particulars to each method.
+
+### SGD
+
+**Stochastic gradient descent** (`solver_type: SGD`) updates the weights $$ W $$ by a linear combination of the negative gradient $$ \nabla L(W) $$ and the previous weight update $$ V_t $$.
+The **learning rate** $$ \alpha $$ is the weight of the negative gradient.
+The **momentum** $$ \mu $$ is the weight of the previous update.
+
+Formally, we have the following formulas to compute the update value $$ V_{t+1} $$ and the updated weights $$ W_{t+1} $$ at iteration $$ t+1 $$, given the previous weight update $$ V_t $$ and current weights $$ W_t $$:
+
+$$
+V_{t+1} = \mu V_t - \alpha \nabla L(W_t)
+$$
+
+$$
+W_{t+1} = W_t + V_{t+1}
+$$
+
+The learning "hyperparameters" ($$\alpha$$ and $$\mu$$) might require a bit of tuning for best results.
+If you're not sure where to start, take a look at the "Rules of thumb" below, and for further information you might refer to Leon Bottou's [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf) [1].
+
+[1] L. Bottou.
+ [Stochastic Gradient Descent Tricks](http://research.microsoft.com/pubs/192769/tricks-2012.pdf).
+ *Neural Networks: Tricks of the Trade*: Springer, 2012.
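+
+As a plain numpy-style sketch of one update (illustrative only; Caffe applies this in place to each parameter blob):
+
+    def sgd_step(W, V, grad, lr=0.01, momentum=0.9):
+        # V_{t+1} = mu * V_t - alpha * grad L(W_t);  W_{t+1} = W_t + V_{t+1}
+        V_next = momentum * V - lr * grad
+        return W + V_next, V_next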
+
+#### Rules of thumb for setting the learning rate $$ \alpha $$ and momentum $$ \mu $$
+
+A good strategy for deep learning with SGD is to initialize the learning rate $$ \alpha $$ to a value around $$ \alpha \approx 0.01 = 10^{-2} $$, and drop it by a constant factor (e.g., 10) throughout training whenever the loss reaches an apparent "plateau", repeating this several times.
+Generally, you probably want to use a momentum $$ \mu = 0.9 $$ or similar value.
+By smoothing the weight updates across iterations, momentum tends to make deep learning with SGD both stabler and faster.
+
+This was the strategy used by Krizhevsky et al. [1] in their famously winning CNN entry to the ILSVRC-2012 competition, and Caffe makes this strategy easy to implement in a `SolverParameter`, as in our reproduction of [1] at `./examples/imagenet/alexnet_solver.prototxt`.
+
+To use a learning rate policy like this, you can put the following lines somewhere in your solver prototxt file:
+
+ base_lr: 0.01 # begin training at a learning rate of 0.01 = 1e-2
+
+ lr_policy: "step" # learning rate policy: drop the learning rate in "steps"
+ # by a factor of gamma every stepsize iterations
+
+ gamma: 0.1 # drop the learning rate by a factor of 10
+ # (i.e., multiply it by a factor of gamma = 0.1)
+
+ stepsize: 100000 # drop the learning rate every 100K iterations
+
+ max_iter: 350000 # train for 350K iterations total
+
+ momentum: 0.9
+
+Under the above settings, we'll always use `momentum` $$ \mu = 0.9 $$.
+We'll begin training at a `base_lr` of $$ \alpha = 0.01 = 10^{-2} $$ for the first 100,000 iterations, then multiply the learning rate by `gamma` ($$ \gamma $$) and train at $$ \alpha' = \alpha \gamma = (0.01) (0.1) = 0.001 = 10^{-3} $$ for iterations 100K-200K, then at $$ \alpha'' = 10^{-4} $$ for iterations 200K-300K, and finally train until iteration 350K (since we have `max_iter: 350000`) at $$ \alpha''' = 10^{-5} $$.
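+
+Equivalently, the `step` policy computes the rate at iteration `it` as `base_lr * gamma ^ floor(it / stepsize)`; a quick sketch:
+
+    def step_lr(it, base_lr=0.01, gamma=0.1, stepsize=100000):
+        # learning rate under the "step" policy at iteration `it`
+        return base_lr * gamma ** (it // stepsize)
+
+    print(step_lr(50000))   # 0.01 for the first 100K iterations
+    print(step_lr(150000))  # ~0.001 between 100K and 200K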
+
+Note that the momentum setting $$ \mu $$ effectively multiplies the size of your updates by a factor of $$ \frac{1}{1 - \mu} $$ after many iterations of training, so if you increase $$ \mu $$, it may be a good idea to **decrease** $$ \alpha $$ accordingly (and vice versa).
+
+For example, with $$ \mu = 0.9 $$, we have an effective update size multiplier of $$ \frac{1}{1 - 0.9} = 10 $$.
+If we increased the momentum to $$ \mu = 0.99 $$, we've increased our update size multiplier to 100, so we should drop $$ \alpha $$ (`base_lr`) by a factor of 10.
+
+Note also that the above settings are merely guidelines, and they're definitely not guaranteed to be optimal (or even work at all!) in every situation.
+If learning diverges (e.g., you start to see very large or `NaN` or `inf` loss values or outputs), try dropping the `base_lr` (e.g., `base_lr: 0.001`) and re-training, repeating this until you find a `base_lr` value that works.
+
+[1] A. Krizhevsky, I. Sutskever, and G. Hinton.
+ [ImageNet Classification with Deep Convolutional Neural Networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf).
+ *Advances in Neural Information Processing Systems*, 2012.
+
+### AdaGrad
+
+The **adaptive gradient** (`solver_type: ADAGRAD`) method (Duchi et al. [1]) is a gradient-based optimization method (like SGD) that attempts to "find needles in haystacks in the form of very predictive but rarely seen features," in Duchi et al.'s words.
+Given the update information from all previous iterations $$ \left( \nabla L(W) \right)_{t'} $$ for $$ t' \in \{1, 2, ..., t\} $$,
+the update formulas proposed by [1] are as follows, specified for each component $$i$$ of the weights $$W$$:
+
+$$
+(W_{t+1})_i =
+(W_t)_i - \alpha
+\frac{\left( \nabla L(W_t) \right)_{i}}{
+ \sqrt{\sum_{t'=1}^{t} \left( \nabla L(W_{t'}) \right)_i^2}
+}
+$$
+
+Note that in practice, for weights $$ W \in \mathcal{R}^d $$, AdaGrad implementations (including the one in Caffe) use only $$ \mathcal{O}(d) $$ extra storage for the historical gradient information (rather than the $$ \mathcal{O}(dt) $$ storage that would be necessary to store each historical gradient individually).
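+
+A per-component numpy sketch of the update (illustrative; the small `eps` guards against division by zero and is a practical addition, not part of the formula above):
+
+    import numpy as np
+
+    def adagrad_step(W, hist, grad, lr=0.01, eps=1e-8):
+        # hist is the O(d) running sum of squared historical gradients
+        hist = hist + grad ** 2
+        return W - lr * grad / (np.sqrt(hist) + eps), hist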
+
+[1] J. Duchi, E. Hazan, and Y. Singer.
+ [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf).
+ *The Journal of Machine Learning Research*, 2011.
+
+### NAG
+
+**Nesterov's accelerated gradient** (`solver_type: NAG`) was proposed by Nesterov [1] as an "optimal" method of convex optimization, achieving a convergence rate of $$ \mathcal{O}(1/t^2) $$ rather than the $$ \mathcal{O}(1/t) $$ rate of standard gradient methods.
+Though the required assumptions to achieve the $$ \mathcal{O}(1/t^2) $$ convergence typically will not hold for deep networks trained with Caffe (e.g., due to non-smoothness and non-convexity), in practice NAG can be a very effective method for optimizing certain types of deep learning architectures, as demonstrated for deep MNIST autoencoders by Sutskever et al. [2].
+
+The weight update formulas look very similar to the SGD updates given above:
+
+$$
+V_{t+1} = \mu V_t - \alpha \nabla L(W_t + \mu V_t)
+$$
+
+$$
+W_{t+1} = W_t + V_{t+1}
+$$
+
+What distinguishes the method from SGD is the weight setting $$ W $$ on which we compute the error gradient $$ \nabla L(W) $$ -- in NAG we take the gradient on weights with added momentum $$ \nabla L(W_t + \mu V_t) $$; in SGD we simply take the gradient $$ \nabla L(W_t) $$ on the current weights themselves.
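+
+Sketched in Python, with `grad_fn` standing in for a full forward/backward pass evaluated at the look-ahead point (an illustration, not Caffe's in-place implementation):
+
+    def nag_step(W, V, grad_fn, lr=0.01, momentum=0.9):
+        # evaluate the gradient at the look-ahead weights W_t + mu * V_t
+        grad = grad_fn(W + momentum * V)
+        V_next = momentum * V - lr * grad
+        return W + V_next, V_next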
+
+[1] Y. Nesterov.
+    A Method of Solving a Convex Programming Problem with Convergence Rate $$\mathcal{O}(1/k^2)$$.
+ *Soviet Mathematics Doklady*, 1983.
+
+[2] I. Sutskever, J. Martens, G. Dahl, and G. Hinton.
+ [On the Importance of Initialization and Momentum in Deep Learning](http://www.cs.toronto.edu/~fritz/absps/momentum.pdf).
+ *Proceedings of the 30th International Conference on Machine Learning*, 2013.
+
+## Scaffolding
+
+The solver scaffolding prepares the optimization method and initializes the model to be learned in `Solver::Presolve()`.
+
+ > caffe train -solver examples/mnist/lenet_solver.prototxt
+ I0902 13:35:56.474978 16020 caffe.cpp:90] Starting Optimization
+ I0902 13:35:56.475190 16020 solver.cpp:32] Initializing solver from parameters:
+ test_iter: 100
+ test_interval: 500
+ base_lr: 0.01
+ display: 100
+ max_iter: 10000
+ lr_policy: "inv"
+ gamma: 0.0001
+ power: 0.75
+ momentum: 0.9
+ weight_decay: 0.0005
+ snapshot: 5000
+ snapshot_prefix: "examples/mnist/lenet"
+ solver_mode: GPU
+ net: "examples/mnist/lenet_train_test.prototxt"
+
+Net initialization
+
+ I0902 13:35:56.655681 16020 solver.cpp:72] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
+ [...]
+ I0902 13:35:56.656740 16020 net.cpp:56] Memory required for data: 0
+ I0902 13:35:56.656791 16020 net.cpp:67] Creating Layer mnist
+ I0902 13:35:56.656811 16020 net.cpp:356] mnist -> data
+ I0902 13:35:56.656846 16020 net.cpp:356] mnist -> label
+ I0902 13:35:56.656874 16020 net.cpp:96] Setting up mnist
+ I0902 13:35:56.694052 16020 data_layer.cpp:135] Opening lmdb examples/mnist/mnist_train_lmdb
+ I0902 13:35:56.701062 16020 data_layer.cpp:195] output data size: 64,1,28,28
+ I0902 13:35:56.701146 16020 data_layer.cpp:236] Initializing prefetch
+ I0902 13:35:56.701196 16020 data_layer.cpp:238] Prefetch initialized.
+ I0902 13:35:56.701212 16020 net.cpp:103] Top shape: 64 1 28 28 (50176)
+ I0902 13:35:56.701230 16020 net.cpp:103] Top shape: 64 1 1 1 (64)
+ [...]
+ I0902 13:35:56.703737 16020 net.cpp:67] Creating Layer ip1
+ I0902 13:35:56.703753 16020 net.cpp:394] ip1 <- pool2
+ I0902 13:35:56.703778 16020 net.cpp:356] ip1 -> ip1
+ I0902 13:35:56.703797 16020 net.cpp:96] Setting up ip1
+ I0902 13:35:56.728127 16020 net.cpp:103] Top shape: 64 500 1 1 (32000)
+ I0902 13:35:56.728142 16020 net.cpp:113] Memory required for data: 5039360
+ I0902 13:35:56.728175 16020 net.cpp:67] Creating Layer relu1
+ I0902 13:35:56.728194 16020 net.cpp:394] relu1 <- ip1
+ I0902 13:35:56.728219 16020 net.cpp:345] relu1 -> ip1 (in-place)
+ I0902 13:35:56.728240 16020 net.cpp:96] Setting up relu1
+ I0902 13:35:56.728256 16020 net.cpp:103] Top shape: 64 500 1 1 (32000)
+ I0902 13:35:56.728270 16020 net.cpp:113] Memory required for data: 5167360
+ I0902 13:35:56.728287 16020 net.cpp:67] Creating Layer ip2
+ I0902 13:35:56.728304 16020 net.cpp:394] ip2 <- ip1
+ I0902 13:35:56.728333 16020 net.cpp:356] ip2 -> ip2
+ I0902 13:35:56.728356 16020 net.cpp:96] Setting up ip2
+ I0902 13:35:56.728690 16020 net.cpp:103] Top shape: 64 10 1 1 (640)
+ I0902 13:35:56.728705 16020 net.cpp:113] Memory required for data: 5169920
+ I0902 13:35:56.728734 16020 net.cpp:67] Creating Layer loss
+ I0902 13:35:56.728747 16020 net.cpp:394] loss <- ip2
+ I0902 13:35:56.728767 16020 net.cpp:394] loss <- label
+ I0902 13:35:56.728786 16020 net.cpp:356] loss -> loss
+ I0902 13:35:56.728811 16020 net.cpp:96] Setting up loss
+ I0902 13:35:56.728837 16020 net.cpp:103] Top shape: 1 1 1 1 (1)
+ I0902 13:35:56.728849 16020 net.cpp:109] with loss weight 1
+ I0902 13:35:56.728878 16020 net.cpp:113] Memory required for data: 5169924
+
+Loss
+
+ I0902 13:35:56.728893 16020 net.cpp:170] loss needs backward computation.
+ I0902 13:35:56.728909 16020 net.cpp:170] ip2 needs backward computation.
+ I0902 13:35:56.728924 16020 net.cpp:170] relu1 needs backward computation.
+ I0902 13:35:56.728938 16020 net.cpp:170] ip1 needs backward computation.
+ I0902 13:35:56.728953 16020 net.cpp:170] pool2 needs backward computation.
+ I0902 13:35:56.728970 16020 net.cpp:170] conv2 needs backward computation.
+ I0902 13:35:56.728984 16020 net.cpp:170] pool1 needs backward computation.
+ I0902 13:35:56.728998 16020 net.cpp:170] conv1 needs backward computation.
+ I0902 13:35:56.729014 16020 net.cpp:172] mnist does not need backward computation.
+ I0902 13:35:56.729027 16020 net.cpp:208] This network produces output loss
+ I0902 13:35:56.729053 16020 net.cpp:467] Collecting Learning Rate and Weight Decay.
+ I0902 13:35:56.729071 16020 net.cpp:219] Network initialization done.
+ I0902 13:35:56.729085 16020 net.cpp:220] Memory required for data: 5169924
+ I0902 13:35:56.729277 16020 solver.cpp:156] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
+
+Completion
+
+ I0902 13:35:56.806970 16020 solver.cpp:46] Solver scaffolding done.
+ I0902 13:35:56.806984 16020 solver.cpp:165] Solving LeNet
+
+
+## Updating Parameters
+
+The actual weight update is made by the solver and then applied to the net parameters in `Solver::ComputeUpdateValue()`.
+The `ComputeUpdateValue` method incorporates any weight decay $$ r(W) $$ into the weight gradients (which currently just contain the error gradients) to get the final gradient with respect to each network weight.
+Then these gradients are scaled by the learning rate $$ \alpha $$ and the update to subtract is stored in each parameter Blob's `diff` field.
+Finally, the `Blob::Update` method is called on each parameter blob, which performs the final update (subtracting the Blob's `diff` from its `data`).
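+
+Put together, the per-blob computation amounts to the following numpy-style sketch (illustrative only; the weight decay term shown is the simple L2 case, and Caffe performs these steps in place on each parameter blob's `data` and `diff`):
+
+    def compute_update(data, diff, lr=0.01, weight_decay=0.0005):
+        # fold weight decay into the error gradient, scale by the learning
+        # rate, then apply Blob::Update: data -= diff
+        diff = lr * (diff + weight_decay * data)
+        return data - diff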
+
+## Snapshotting and Resuming
+
+The solver snapshots the weights and its own state during training in `Solver::Snapshot()` and `Solver::SnapshotSolverState()`.
+The weight snapshots export the learned model while the solver snapshots allow training to be resumed from a given point.
+Training is resumed by `Solver::Restore()` and `Solver::RestoreSolverState()`.
+
+Weights are saved without extension while solver states are saved with `.solverstate` extension.
+Both files will have an `_iter_N` suffix for the snapshot iteration number.
+
+Snapshotting is configured by:
+
+ # The snapshot interval in iterations.
+ snapshot: 5000
+ # File path prefix for snapshotting model weights and solver state.
+ # Note: this is relative to the invocation of the `caffe` utility, not the
+ # solver definition file.
+ snapshot_prefix: "/path/to/model"
+ # Snapshot the diff along with the weights. This can help debugging training
+ # but takes more storage.
+ snapshot_diff: false
+ # A final snapshot is saved at the end of training unless
+ # this flag is set to false. The default is true.
+ snapshot_after_train: true
+
+in the solver definition prototxt.
diff --git a/caffe-crfrnn/examples/.gitignore b/caffe-crfrnn/examples/.gitignore
new file mode 100644
index 00000000..29aa4e63
--- /dev/null
+++ b/caffe-crfrnn/examples/.gitignore
@@ -0,0 +1,2 @@
+*/*.caffemodel
+*/*.solverstate
diff --git a/caffe-crfrnn/examples/CMakeLists.txt b/caffe-crfrnn/examples/CMakeLists.txt
new file mode 100644
index 00000000..663d7360
--- /dev/null
+++ b/caffe-crfrnn/examples/CMakeLists.txt
@@ -0,0 +1,31 @@
+file(GLOB_RECURSE examples_srcs "${PROJECT_SOURCE_DIR}/examples/*.cpp")
+
+foreach(source_file ${examples_srcs})
+ # get file name
+ get_filename_component(name ${source_file} NAME_WE)
+
+ # get folder name
+ get_filename_component(path ${source_file} PATH)
+ get_filename_component(folder ${path} NAME_WE)
+
+ add_executable(${name} ${source_file})
+ target_link_libraries(${name} ${Caffe_LINK})
+ caffe_default_properties(${name})
+
+ # set back RUNTIME_OUTPUT_DIRECTORY
+ set_target_properties(${name} PROPERTIES
+ RUNTIME_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/examples/${folder}")
+
+ caffe_set_solution_folder(${name} examples)
+
+ # install
+ install(TARGETS ${name} DESTINATION bin)
+
+ if(UNIX OR APPLE)
+ # Funny command to make tutorials work
+    # TODO: remove in the future as soon as naming is standardized everywhere
+ set(__outname ${PROJECT_BINARY_DIR}/examples/${folder}/${name}${Caffe_POSTFIX})
+ add_custom_command(TARGET ${name} POST_BUILD
+ COMMAND ln -sf "${__outname}" "${__outname}.bin")
+ endif()
+endforeach()
diff --git a/caffe-crfrnn/include/caffe/blob.hpp b/caffe-crfrnn/include/caffe/blob.hpp
new file mode 100644
index 00000000..ef10aea5
--- /dev/null
+++ b/caffe-crfrnn/include/caffe/blob.hpp
@@ -0,0 +1,144 @@
+#ifndef CAFFE_BLOB_HPP_
+#define CAFFE_BLOB_HPP_
+
+#include "caffe/common.hpp"
+#include "caffe/proto/caffe.pb.h"
+#include "caffe/syncedmem.hpp"
+#include "caffe/util/math_functions.hpp"
+
+namespace caffe {
+
+/**
+ * @brief A wrapper around SyncedMemory holders serving as the basic
+ * computational unit through which Layer%s, Net%s, and Solver%s
+ * interact.
+ *
+ * TODO(dox): more thorough description.
+ */
+template <typename Dtype>
+class Blob {
+ public:
+ Blob()
+ : data_(), diff_(), num_(0), channels_(0), height_(0), width_(0),
+ count_(0), capacity_(0) {}
+ explicit Blob(const int num, const int channels, const int height,
+ const int width);
+ /**
+ * @brief Change the dimensions of the blob, allocating new memory if
+ * necessary.
+ *
+ * This function can be called both to create an initial allocation
+ * of memory, and to adjust the dimensions of a top blob during Layer::Reshape
+ * or Layer::Forward. When changing the size of blob, memory will only be
+ * reallocated if sufficient memory does not already exist, and excess memory
+ * will never be freed.
+ *
+ * Note that reshaping an input blob and immediately calling Net::Backward is
+ * an error; either Net::Forward or Net::Reshape need to be called to
+ * propagate the new input shape to higher layers.
+ */
+ void Reshape(const int num, const int channels, const int height,
+ const int width);
+ void ReshapeLike(const Blob& other);
+ inline int num() const { return num_; }
+ inline int channels() const { return channels_; }
+ inline int height() const { return height_; }
+ inline int width() const { return width_; }
+ inline int count() const { return count_; }
+ inline int offset(const int n, const int c = 0, const int h = 0,
+ const int w = 0) const {
+ CHECK_GE(n, 0);
+ CHECK_LE(n, num_);
+ CHECK_GE(channels_, 0);
+ CHECK_LE(c, channels_);
+ CHECK_GE(height_, 0);
+ CHECK_LE(h, height_);
+ CHECK_GE(width_, 0);
+ CHECK_LE(w, width_);
+ return ((n * channels_ + c) * height_ + h) * width_ + w;
+ }
+ /**
+ * @brief Copy from a source Blob.
+ *
+ * @param source the Blob to copy from
+ * @param copy_diff if false, copy the data; if true, copy the diff
+ * @param reshape if false, require this Blob to be pre-shaped to the shape
+ * of other (and die otherwise); if true, Reshape this Blob to other's
+ * shape if necessary
+ */
+ void CopyFrom(const Blob& source, bool copy_diff = false,
+ bool reshape = false);
+
+ inline Dtype data_at(const int n, const int c, const int h,
+ const int w) const {
+ return *(cpu_data() + offset(n, c, h, w));
+ }
+
+ inline Dtype diff_at(const int n, const int c, const int h,
+ const int w) const {
+ return *(cpu_diff() + offset(n, c, h, w));
+ }
+
+  inline const shared_ptr<SyncedMemory>& data() const {
+ CHECK(data_);
+ return data_;
+ }
+
+  inline const shared_ptr<SyncedMemory>& diff() const {
+ CHECK(diff_);
+ return diff_;
+ }
+
+ const Dtype* cpu_data() const;
+ void set_cpu_data(Dtype* data);
+ const Dtype* gpu_data() const;
+ const Dtype* cpu_diff() const;
+ const Dtype* gpu_diff() const;
+ Dtype* mutable_cpu_data();
+ Dtype* mutable_gpu_data();
+ Dtype* mutable_cpu_diff();
+ Dtype* mutable_gpu_diff();
+ void Update();
+ void FromProto(const BlobProto& proto);
+ void ToProto(BlobProto* proto, bool write_diff = false) const;
+
+ /// @brief Compute the sum of absolute values (L1 norm) of the data.
+ Dtype asum_data() const;
+ /// @brief Compute the sum of absolute values (L1 norm) of the diff.
+ Dtype asum_diff() const;
+
+ /**
+ * @brief Set the data_ shared_ptr to point to the SyncedMemory holding the
+   *        data_ of Blob other -- useful in Layer%s which simply perform a copy
+ * in their Forward pass.
+ *
+ * This deallocates the SyncedMemory holding this Blob's data_, as
+ * shared_ptr calls its destructor when reset with the "=" operator.
+ */
+ void ShareData(const Blob& other);
+ /**
+ * @brief Set the diff_ shared_ptr to point to the SyncedMemory holding the
+   *        diff_ of Blob other -- useful in Layer%s which simply perform a copy
+ * in their Forward pass.
+ *
+ * This deallocates the SyncedMemory holding this Blob's diff_, as
+ * shared_ptr calls its destructor when reset with the "=" operator.
+ */
+ void ShareDiff(const Blob& other);
+
+ protected:
+  shared_ptr<SyncedMemory> data_;
+  shared_ptr<SyncedMemory> diff_;
+ int num_;
+ int channels_;
+ int height_;
+ int width_;
+ int count_;
+ int capacity_;
+
+ DISABLE_COPY_AND_ASSIGN(Blob);
+}; // class Blob
+
+} // namespace caffe
+
+#endif // CAFFE_BLOB_HPP_
diff --git a/caffe-crfrnn/include/caffe/caffe.hpp b/caffe-crfrnn/include/caffe/caffe.hpp
new file mode 100644
index 00000000..3c829f2f
--- /dev/null
+++ b/caffe-crfrnn/include/caffe/caffe.hpp
@@ -0,0 +1,19 @@
+// caffe.hpp is the header file that you need to include in your code. It wraps
+// all the internal caffe header files into one for simpler inclusion.
+
+#ifndef CAFFE_CAFFE_HPP_
+#define CAFFE_CAFFE_HPP_
+
+#include "caffe/blob.hpp"
+#include "caffe/common.hpp"
+#include "caffe/filler.hpp"
+#include "caffe/layer.hpp"
+#include "caffe/layer_factory.hpp"
+#include "caffe/net.hpp"
+#include "caffe/proto/caffe.pb.h"
+#include "caffe/solver.hpp"
+#include "caffe/util/benchmark.hpp"
+#include "caffe/util/io.hpp"
+#include "caffe/vision_layers.hpp"
+
+#endif // CAFFE_CAFFE_HPP_
diff --git a/caffe-crfrnn/include/caffe/common.hpp b/caffe-crfrnn/include/caffe/common.hpp
new file mode 100644
index 00000000..81b2e9ae
--- /dev/null
+++ b/caffe-crfrnn/include/caffe/common.hpp
@@ -0,0 +1,175 @@
+#ifndef CAFFE_COMMON_HPP_
+#define CAFFE_COMMON_HPP_
+
+#include <boost/shared_ptr.hpp>
+#include <gflags/gflags.h>
+#include <glog/logging.h>
+
+#include <cmath>
+#include <fstream>  // NOLINT(readability/streams)
+#include <iostream>  // NOLINT(readability/streams)
+#include <map>