migration: github.com/opencv/opencv_contrib
alalek committed Jul 12, 2016
1 parent a996bcf commit 823dea7
Showing 11 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE.md
@@ -1,6 +1,6 @@
<!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
- If you need further assistance please read [How To Contribute](https://github.com/Itseez/opencv/wiki/How_to_contribute).
+ If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
2 changes: 1 addition & 1 deletion .travis.yml
@@ -4,7 +4,7 @@ compiler:
- clang
before_script:
- cd ../
- - git clone https://github.com/Itseez/opencv.git
+ - git clone https://github.com/opencv/opencv.git
- mkdir build-opencv
- cd build-opencv
- cmake -DOPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules ../opencv
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -1,3 +1,3 @@
## Contributing guidelines

- All guidelines for contributing to the OpenCV repository can be found at [`How to contribute guideline`](https://github.com/Itseez/opencv/wiki/How_to_contribute).
+ All guidelines for contributing to the OpenCV repository can be found at [`How to contribute guideline`](https://github.com/opencv/opencv/wiki/How_to_contribute).
8 changes: 4 additions & 4 deletions modules/dnn/tutorials/tutorial_dnn_build.markdown
@@ -3,7 +3,7 @@ Build opencv_contrib with dnn module {#tutorial_dnn_build}

Introduction
------------
- opencv_dnn module is placed in the secondary [opencv_contrib](https://github.com/Itseez/opencv_contrib) repository,
+ opencv_dnn module is placed in the secondary [opencv_contrib](https://github.com/opencv/opencv_contrib) repository,
which isn't distributed in binary form, therefore you need to build it manually.

To do this you need to have installed: [CMake](http://www.cmake.org/download), git, and build system (*gcc* with *make* for Linux or *MS Visual Studio* for Windows)
@@ -12,12 +12,12 @@ Steps
-----
-# Make any directory, for example **opencv_root**

- -# Clone [opencv](https://github.com/Itseez/opencv) and [opencv_contrib](https://github.com/Itseez/opencv_contrib) repos to the **opencv_root**.
+ -# Clone [opencv](https://github.com/opencv/opencv) and [opencv_contrib](https://github.com/opencv/opencv_contrib) repos to the **opencv_root**.
You can do it in terminal like here:
@code
cd opencv_root
- git clone https://github.com/Itseez/opencv
- git clone https://github.com/Itseez/opencv_contrib
+ git clone https://github.com/opencv/opencv
+ git clone https://github.com/opencv/opencv_contrib
@endcode

-# Run [CMake-gui] and set source and build directories:
2 changes: 1 addition & 1 deletion modules/text/include/opencv2/text.hpp
@@ -92,7 +92,7 @@ grouping horizontally aligned text, and the method proposed by Lluis Gomez and D
in [Gomez13][Gomez14] for grouping arbitrary oriented text (see erGrouping).
To see the text detector at work, have a look at the textdetection demo:
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp>
@defgroup text_recognize Scene Text Recognition
@}
2 changes: 1 addition & 1 deletion modules/text/include/opencv2/text/erfilter.hpp
@@ -345,7 +345,7 @@ single vector\<Point\>, the function separates them in two different vectors (th
ERStats where extracted from two different channels).
An example of MSERsToERStats in use can be found in the text detection webcam_demo:
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
*/
CV_EXPORTS void MSERsToERStats(InputArray image, std::vector<std::vector<Point> > &contours,
std::vector<std::vector<ERStat> > &regions);
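For orientation, here is a minimal, non-authoritative sketch of how MSERsToERStats might be fed from OpenCV's MSER detector; the image path and single-channel handling are assumptions, and the linked webcam_demo shows the real usage:
@code
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/text.hpp>
#include <vector>

int main()
{
    // Load a single-channel image (path is illustrative).
    cv::Mat channel = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    // Detect MSER regions as point contours.
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Rect> boxes;
    cv::MSER::create()->detectRegions(channel, contours, boxes);

    // Convert the MSER contours into ERStat regions usable by erGrouping.
    std::vector<std::vector<cv::text::ERStat> > regions;
    cv::text::MSERsToERStats(channel, contours, regions);
    return 0;
}
@endcode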
14 changes: 7 additions & 7 deletions modules/text/include/opencv2/text/ocr.hpp
@@ -81,10 +81,10 @@ Notice that it is compiled only when tesseract-ocr is correctly installed.
@note
- (C++) An example of OCRTesseract recognition combined with scene text detection can be found
at the end_to_end_recognition demo:
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/end_to_end_recognition.cpp>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/end_to_end_recognition.cpp>
- (C++) Another example of OCRTesseract recognition combined with scene text detection can be
found at the webcam_demo:
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
*/
class CV_EXPORTS_W OCRTesseract : public BaseOCR
{
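As a hedged illustration of the OCRTesseract interface documented in this hunk (it assumes tesseract-ocr is installed and uses an invented image path), creating the recognizer and running it could look roughly like this:
@code
#include <opencv2/imgcodecs.hpp>
#include <opencv2/text.hpp>
#include <iostream>
#include <string>

int main()
{
    // Image path is illustrative; any image containing text will do.
    cv::Mat image = cv::imread("sign.png");

    // Create the Tesseract-backed OCR object with default parameters.
    cv::Ptr<cv::text::OCRTesseract> ocr = cv::text::OCRTesseract::create();

    // Run recognition on the whole image and print the result.
    std::string output;
    ocr->run(image, output);
    std::cout << output << std::endl;
    return 0;
}
@endcode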
@@ -152,7 +152,7 @@ enum decoder_mode
@note
- (C++) An example on using OCRHMMDecoder recognition combined with scene text detection can
be found at the webcam_demo sample:
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp>
*/
class CV_EXPORTS_W OCRHMMDecoder : public BaseOCR
{
@@ -165,7 +165,7 @@ class CV_EXPORTS_W OCRHMMDecoder : public BaseOCR
The default character classifier and feature extractor can be loaded using the utility function
loadOCRHMMClassifierNM and KNN model provided in
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/OCRHMM_knn_model_data.xml.gz>.
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_knn_model_data.xml.gz>.
*/
class CV_EXPORTS_W ClassifierCallback
{
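A rough, assumption-laden outline of how the pieces in this hunk could be combined: the model file name comes from the link above, while the vocabulary and the identity probability tables below are placeholders rather than a real language model.
@code
#include <opencv2/text.hpp>
#include <string>

void buildHmmDecoder()
{
    using namespace cv::text;

    // Load the default KNN character classifier shipped with the text module samples.
    cv::Ptr<OCRHMMDecoder::ClassifierCallback> classifier =
        loadOCRHMMClassifierNM("OCRHMM_knn_model_data.xml.gz");

    // Placeholder language model: vocabulary plus transition/emission tables.
    std::string vocabulary = "abcdefghijklmnopqrstuvwxyz";
    cv::Mat transitions = cv::Mat::eye((int)vocabulary.size(), (int)vocabulary.size(), CV_64FC1);
    cv::Mat emissions   = cv::Mat::eye((int)vocabulary.size(), (int)vocabulary.size(), CV_64FC1);

    // Assemble the HMM-based recognizer from classifier and language model.
    cv::Ptr<OCRHMMDecoder> decoder =
        OCRHMMDecoder::create(classifier, vocabulary, transitions, emissions);
}
@endcode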
@@ -321,7 +321,7 @@ CV_EXPORTS_W Ptr<OCRHMMDecoder::ClassifierCallback> loadOCRHMMClassifierCNN(cons
* The function calculates frequency statistics of character pairs from the given lexicon and fills the output transition_probabilities_table with them. The transition_probabilities_table can be used as input in the OCRHMMDecoder::create() and OCRBeamSearchDecoder::create() methods.
* @note
* - (C++) An alternative would be to load the default generic language transition table provided in the text module samples folder (created from ispell 42869 english words list) :
- * <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/OCRHMM_transitions_table.xml>
+ * <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_transitions_table.xml>
**/
CV_EXPORTS void createOCRHMMTransitionsTable(std::string& vocabulary, std::vector<std::string>& lexicon, OutputArray transition_probabilities_table);
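The transitions-table helper declared above can be exercised with a toy lexicon; the vocabulary and words in this sketch are invented for illustration only:
@code
#include <opencv2/text.hpp>
#include <string>
#include <vector>

void buildTransitionsTable()
{
    // Character set and a toy lexicon (illustrative values only).
    std::string vocabulary = "abcdefghijklmnopqrstuvwxyz";
    std::vector<std::string> lexicon;
    lexicon.push_back("open");
    lexicon.push_back("vision");

    // Fill the table with character-pair frequency statistics from the lexicon.
    cv::Mat transitions;
    cv::text::createOCRHMMTransitionsTable(vocabulary, lexicon, transitions);
}
@endcode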

@@ -335,7 +335,7 @@ CV_EXPORTS_W Mat createOCRHMMTransitionsTable(const String& vocabulary, std::vec
@note
- (C++) An example on using OCRBeamSearchDecoder recognition combined with scene text detection can
be found at the demo sample:
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/word_recognition.cpp>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/word_recognition.cpp>
*/
class CV_EXPORTS_W OCRBeamSearchDecoder : public BaseOCR
{
@@ -348,7 +348,7 @@ class CV_EXPORTS_W OCRBeamSearchDecoder : public BaseOCR
The default character classifier and feature extractor can be loaded using the utility function
loadOCRBeamSearchClassifierCNN with all its parameters provided in
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/OCRBeamSearch_CNN_model_data.xml.gz>.
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRBeamSearch_CNN_model_data.xml.gz>.
*/
class CV_EXPORTS_W ClassifierCallback
{
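Analogously to the HMM sketch earlier, a hedged outline of wiring up the beam-search decoder; the model file name is taken from the link above and the language-model inputs are again placeholders:
@code
#include <opencv2/text.hpp>
#include <string>

void buildBeamSearchDecoder()
{
    using namespace cv::text;

    // Load the default CNN character classifier distributed with the text module samples.
    cv::Ptr<OCRBeamSearchDecoder::ClassifierCallback> classifier =
        loadOCRBeamSearchClassifierCNN("OCRBeamSearch_CNN_model_data.xml.gz");

    // Placeholder language model, as in the HMM sketch above.
    std::string vocabulary = "abcdefghijklmnopqrstuvwxyz";
    cv::Mat transitions = cv::Mat::eye((int)vocabulary.size(), (int)vocabulary.size(), CV_64FC1);
    cv::Mat emissions   = cv::Mat::eye((int)vocabulary.size(), (int)vocabulary.size(), CV_64FC1);

    // Assemble the beam-search recognizer from classifier and language model.
    cv::Ptr<OCRBeamSearchDecoder> decoder =
        OCRBeamSearchDecoder::create(classifier, vocabulary, transitions, emissions);
}
@endcode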
2 changes: 1 addition & 1 deletion modules/text/src/ocr_hmm_decoder.cpp
@@ -1206,7 +1206,7 @@ the output transition_probabilities_table with them.
The transition_probabilities_table can be used as input in the OCRHMMDecoder::create() and OCRBeamSearchDecoder::create() methods.
@note
- (C++) An alternative would be to load the default generic language transition table provided in the text module samples folder (created from ispell 42869 english words list) :
- <https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/OCRHMM_transitions_table.xml>
+ <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_transitions_table.xml>
*/
void createOCRHMMTransitionsTable(string& vocabulary, vector<string>& lexicon, OutputArray _transitions)
{
@@ -28,8 +28,8 @@ Explanation
as shown in help. In the help, it means that the image files are numbered with 4 digits
(e.g. the file naming will be 0001.jpg, 0002.jpg, and so on).

- You can find video samples in Itseez/opencv_extra/testdata/cv/tracking
- <https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/tracking>
+ You can find video samples in opencv_extra/testdata/cv/tracking
+ <https://github.com/opencv/opencv_extra/tree/master/testdata/cv/tracking>

-# **Declares the required variables**

2 changes: 1 addition & 1 deletion modules/ximgproc/samples/structured_edge_detection.cpp
@@ -1,7 +1,7 @@
/**************************************************************************************
The structured edge demo requires you to provide a model.
This model can be found at the opencv_extra repository on Github on the following link:
- https://github.com/Itseez/opencv_extra/blob/master/testdata/cv/ximgproc/model.yml.gz
+ https://github.com/opencv/opencv_extra/blob/master/testdata/cv/ximgproc/model.yml.gz
***************************************************************************************/

#include <opencv2/ximgproc.hpp>
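Once the model file mentioned in this comment has been downloaded, a minimal sketch of using it could look as follows; the input file name is illustrative and the full sample shows the complete pipeline:
@code
#include <opencv2/imgcodecs.hpp>
#include <opencv2/ximgproc.hpp>

int main()
{
    // Load an RGB image and convert to float in [0, 1], as the detector expects.
    cv::Mat image = cv::imread("input.png", cv::IMREAD_COLOR);
    image.convertTo(image, CV_32FC3, 1.0 / 255.0);

    // Load the pretrained model downloaded from opencv_extra.
    cv::Ptr<cv::ximgproc::StructuredEdgeDetection> detector =
        cv::ximgproc::createStructuredEdgeDetection("model.yml.gz");

    // Compute the per-pixel edge response.
    cv::Mat edges(image.size(), CV_32FC1);
    detector->detectEdges(image, edges);
    return 0;
}
@endcode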
2 changes: 1 addition & 1 deletion modules/ximgproc/tutorials/disparity_filtering.markdown
@@ -27,7 +27,7 @@ Source Stereoscopic Image
Source Code
-----------

- We will be using snippets from the example application, that can be downloaded [here ](https://github.com/Itseez/opencv_contrib/blob/master/modules/ximgproc/samples/disparity_filtering.cpp).
+ We will be using snippets from the example application, that can be downloaded [here ](https://github.com/opencv/opencv_contrib/blob/master/modules/ximgproc/samples/disparity_filtering.cpp).

Explanation
-----------
