diff --git a/.idea/dictionaries/dlindenbaum.xml b/.idea/dictionaries/dlindenbaum.xml
new file mode 100644
index 0000000..1815fa5
--- /dev/null
+++ b/.idea/dictionaries/dlindenbaum.xml
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/.idea/vcs.xml b/.idea/vcs.xml
new file mode 100644
index 0000000..94a25f7
--- /dev/null
+++ b/.idea/vcs.xml
@@ -0,0 +1,6 @@
+
+
+
+
+
+
\ No newline at end of file
diff --git a/README.md b/README.md
index 4cf1403..a2fd6b7 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
# SpaceNet Utilities
-This repository has two python packages, geoTools and evalTools. The geoTools packages is intended to assist in the preprocessing of [SpaceNet](https://aws.amazon.com/public-data-sets/spacenet/) satellite imagery data corpus to a format that is consumable by machine learning algorithms. The evalTools package is used to evaluate the effectiveness of object detection algorithms using ground truth.
-
+This repository has two Python packages, geoTools and evalTools. The geoTools package is intended to assist in preprocessing the [SpaceNet](https://spacenetchallenge.github.io/) satellite imagery data corpus, hosted on [SpaceNet on AWS](https://aws.amazon.com/public-datasets/spacenet/), into a format that is consumable by machine learning algorithms. The evalTools package is used to evaluate the effectiveness of object detection algorithms using ground truth.
+This is version 2.0, which adds support for the SpaceNet V2 dataset (AOI 2-5) and tools to create annotations for common machine learning frameworks.
## Download Instructions
Further download instructions for the [SpaceNet Dataset](https://github.com/SpaceNetChallenge/utilities/tree/master/content/download_instructions) can be found [here](https://github.com/SpaceNetChallenge/utilities/tree/master/content/download_instructions)
@@ -32,6 +32,74 @@ Hints:
* The images provided could contain anywhere from zero to multiple buildings.
* All proposed polygons should be legitimate (they should have an area, they should have points that at least make a triangle instead of a point or a line, etc).
* Use the [metric implementation code](https://github.com/SpaceNetChallenge/utilities/blob/master/python/evaluateScene.py) to self evaluate.
+To run the metric, use the following command:
+```
+python python/evaluateScene.py /path/to/SpaceNetTruthFile.csv \
+ /path/to/SpaceNetProposalFile.csv \
+ --resultsOutputFile /path/to/SpaceNetResults.csv
+```
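+
+For reference, the truth and proposal files are CSVs with one building polygon per row in the SpaceNet summary format (ImageId, BuildingId, pixel-space WKT polygon, and either the geo-space polygon or a confidence). The row below is purely illustrative; the ImageId and coordinates are made up:
+```
+ImageId,BuildingId,PolygonWKT_Pix,Confidence
+AOI_2_Vegas_img1,1,"POLYGON ((120.5 300.2, 180.0 300.2, 180.0 360.7, 120.5 360.7, 120.5 300.2))",1
+```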
+
+## Data Transformation Code
+
+To make the SpaceNet dataset easier to use, we have created the tool createDataSpaceNet.py.
+This tool currently supports creating datasets with annotations in three formats:
+1. [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/)
+2. [Darknet](https://pjreddie.com/darknet/yolo/)
+3. [Segmentation Boundaries Dataset (SBD)](http://home.bharathh.info/pubs/codes/SBD/download.html)
+
+It will create the appropriate annotation files along with summary trainval.txt and test.txt files in the output directory, as shown in the example below.
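+
+Each line of trainval.txt and test.txt pairs an image with its annotation file; for the SBD format the annotation is split into class and instance .mat files, giving three columns. The paths below are illustrative only:
+```
+/path/to/images/img_1.tif /path/to/annotations/img_1.xml
+/path/to/images/img_2.tif /path/to/annotations/img_2.xml
+```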
+
+### Create a PASCAL VOC2012 Compatible Dataset
+The final product will have image dimensions of 400 x 400 pixels:
+```
+python python/createDataSpaceNet.py /path/to/spacenet_sample/AOI_2_Vegas_Train/ \
+ RGB-PanSharpen \
+ --outputDirectory /path/to/spacenet_sample/annotations/ \
+ --annotationType PASCALVOC2012 \
+ --imgSizePix 400
+
+```
+### Changing the raster format
+Some GIS images have 16-bit pixel values, which OpenCV has trouble with. createDataSpaceNet.py can convert a 16-bit GeoTiff to an 8-bit GeoTiff or an 8-bit JPEG.
+
+To create an 8-bit GeoTiff:
+```
+python python/createDataSpaceNet.py /path/to/spacenet_sample/AOI_2_Vegas_Train/ \
+ RGB-PanSharpen \
+ --outputDirectory /path/to/spacenet_sample/annotations/ \
+ --annotationType PASCALVOC2012 \
+ --convertTo8Bit \
+ --outputFileType GTiff \
+ --imgSizePix 400
+
+```
+
+To create an 8-bit JPEG:
+```
+python python/createDataSpaceNet.py /path/to/spacenet_sample/AOI_2_Vegas_Train/ \
+ RGB-PanSharpen \
+ --outputDirectory /path/to/spacenet_sample/annotations/ \
+ --annotationType PASCALVOC2012 \
+ --convertTo8Bit \
+ --outputFileType JPEG \
+ --imgSizePix 400
+
+```
+
+To see all available options:
+```
+python python/createDataSpaceNet.py -h
+
+```
+
+
+
+## Use our Docker Container
+We have created two Dockerfiles, at /docker/standalone/cpu and /docker/standalone/gpu.
+These Dockerfiles will build a Docker container with all packages necessary to run the package; see the build/run sketch below.
+
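+As a minimal sketch (the image tag and the mounted data path are placeholders), the CPU container can be built and started from the repository root with:
+```
+docker build -t spacenet-utilities:cpu docker/standalone/cpu
+docker run -it -v /path/to/spacenet_data:/data spacenet-utilities:cpu /bin/bash
+```
+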
+More documentation to follow.
+
## Dependencies
All dependencies can be found in [requirements.txt](./python/requirements.txt)
diff --git a/content/download_instructions/README.md b/content/download_instructions/README.md
index c5ec3c5..14aac15 100644
--- a/content/download_instructions/README.md
+++ b/content/download_instructions/README.md
@@ -1,67 +1,118 @@
-# SpaceNet Utilities
+## Hosting
+[SpaceNet](https://aws.amazon.com/public-datasets/spacenet/) is a corpus of commercial satellite imagery and labeled training data to use for machine learning research. The dataset is currently hosted as an [Amazon Web Services (AWS) Public Dataset](https://aws.amazon.com/public-datasets/).
-Instruction for the download of [SpaceNet](https://aws.amazon.com/public-data-sets/spacenet/) satellite imagery data corpus to a format that is consumable by machine learning algorithms.
-
-
+## Catalog
+1. Area of Interest 1 (AOI 1) - Location: Rio de Janeiro. 50cm imagery collected from DigitalGlobe’s [WorldView-2 satellite](http://satimagingcorp.s3.amazonaws.com/site/pdf/WorldView-2_datasheet.pdf). The dataset includes building footprints and 8-band multispectral data.
+2. Area of Interest 2 (AOI 2) - Location: Vegas. 30cm imagery collected from DigitalGlobe’s [WorldView-3 satellite](https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf). The dataset includes building footprints and 8-band multispectral data.
+3. Area of Interest 3 (AOI 3) - Location: Paris. 30cm imagery collected from DigitalGlobe’s [WorldView-3 satellite](https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf). The dataset includes building footprints and 8-band multispectral data.
+4. Area of Interest 4 (AOI 4) - Location: Shanghai. 30cm imagery collected from DigitalGlobe’s [WorldView-3 satellite](https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf). The dataset includes building footprints and 8-band multispectral data.
+5. Area of Interest 5 (AOI 5) - Location: Khartoum. 30cm imagery collected from DigitalGlobe’s [WorldView-3 satellite](https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf). The dataset includes building footprints and 8-band multispectral data.
+6. Point of Interest (POI) Dataset - Location: Rio de Janeiro. The dataset includes POIs.
## Dependencies
-Required that AWS CLI is installed and that an active AWS account with credit card is associated with the aws cli
+The [AWS Command Line Interface (CLI)](https://aws.amazon.com/cli/) must be installed and associated with an active AWS account. Configure the AWS CLI using `aws configure`.
-Configure the AWS CLI using aws configure
-SpaceNet AWS Structure
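+
+For reference, a typical `aws configure` session looks like the following; the key values are placeholders and the region is only a suggestion:
+```
+$ aws configure
+AWS Access Key ID [None]: <your access key id>
+AWS Secret Access Key [None]: <your secret access key>
+Default region name [None]: us-east-1
+Default output format [None]: json
+```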
+## SpaceNet Simple Storage Service (S3) Directory Structure (AOI 1)
```
s3://spacenet-dataset/
-- AOI_1_Rio
|-- processedData
- | -- processedBuildingLabels.tar.gz # Compressed 3band and 8band 200m x 200m tiles with associated building foot print labels
- # This dataset is the Training Dataset for the first [Top Coder Competition](https://community.topcoder.com/longcontest/?module=ViewProblemStatement&rd=16835&pm=14439)
+ | -- processedBuildingLabels.tar.gz # Compressed 3band and 8band 200m x 200m tiles with associated building foot print labels # This dataset is the Training Dataset for the first Top Coder Competition
`-- srcData
|-- rasterData
- | |-- 3-Band.tar.gz # 3band (RGB) Raster Mosaic for Rio De Jenairo area (2784 sq KM) collected by [WorldView-2](http://satimagingcorp.s3.amazonaws.com/site/pdf/WorldView-2_datasheet.pdf)
- | -- 8-Band.tar.gz # 8band Raster Mosaic for Rio De Jenairo area (2784 sq KM) collected by [WorldView-2](http://satimagingcorp.s3.amazonaws.com/site/pdf/WorldView-2_datasheet.pdf)
+ | |-- 3-Band.tar.gz # 3band (RGB) Raster Mosaic for Rio De Jenairo area (2784 sq KM) collected by WorldView-2
+ | -- 8-Band.tar.gz # 8band Raster Mosaic for Rio De Jenairo area (2784 sq KM) collected by WorldView-2
-- vectorData
 |-- Rio_BuildingLabels.tar.gz # Source dataset containing the building footprints traced from the mosaic
|-- Rio_HGIS_Metro.gdb.tar.gz # Source Point of Interest Dataset in GeoDatabase Format. Best if Used with ESRI
-- Rio_HGIS_Metro_extract.tar # Source Point of Interest Dataset in GeoJSON with associated .jpg. Easy to Use without ESRI toolset
```
-#To download the Imagery
-
-##To download processed 200mx200m Tiles with associated building foot prints for building foot print extraction tests do the following
+## SpaceNet Simple Storage Service (S3) Directory Structure (AOI 2-5)
```
-## Warning this file is 3.4 GB
-aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/processedData/processedBuildingLabels.tar.gz --request-payer requester processedBuildingLabels.tar.gz
+├── AOI_[Num]_[City]_Train
+│ ├── geojson
+│ │ └── buildings # Contains GeoJson labels of buildings for each tile
+│ ├── MUL # Contains Tiles of 8-Band Multi-Spectral raster data from WorldView-3
+│ ├── MUL-PanSharpen # Contains Tiles of 8-Band Multi-Spectral raster data pansharpened to 0.3m
+│ ├── PAN # Contains Tiles of Panchromatic raster data from Worldview-3
+│ ├── RGB-PanSharpen # Contains Tiles of RGB raster data from Worldview-3
+│ └── summaryData # Contains CSV with pixel based labels for each building in the Tile Set.
```
+## Download instructions
-##To download the Source Imagery Mosaic
+### AOI 1 - Rio de Janeiro
+To download the processed 200m x 200m tiles of AOI 1 (3.4 GB) with associated building footprints, run:
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/processedData/processedBuildingLabels.tar.gz --request-payer requester processedBuildingLabels.tar.gz
+```
+To download the Source Imagery Mosaic (3-band = 2.3 GB and 8-band = 6.5 GB):
```
-## Warning this file is 2.3 GB
aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/rasterData/3-Band.tar.gz --request-payer requester 3-Band.tar.gz
-## Warning this file is 6.5 GB
aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/rasterData/8-Band.tar.gz --request-payer requester 8-Band.tar.gz
```
-
-##To download the Source Vector Data for the Building Extraction Challenge
+To download the Source Vector Data (0.18 GB):
```
-## Warning this file is 0.18 GB
aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/vectorData/Rio_BuildingLabels.tar.gz --request-payer requester Rio_BuildingLabels.tar.gz
+```
+### AOI 2 - Vegas
+To download the processed 200m x 200m tiles of AOI 2 (23 GB) with associated building footprints, run:
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_2_Vegas/AOI_2_Vegas_Train.tar.gz --request-payer requester AOI_2_Vegas_Train.tar.gz
```
-##To download the Rio Point of Interest Dataset in ESRI GeoDatabase Form
+### AOI 3 - Paris
+To download the processed 200m x 200m tiles of AOI 3 (5 GB) with associated building footprints, run:
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_3_Paris/AOI_3_Paris_Train.tar.gz --request-payer requester AOI_3_Paris_Train.tar.gz
```
-## Warning this file is 31 GB
-aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/vectorData/Rio_HGIS_Metro.gdb.tar.gz --request-payer requester Rio_HGIS_Metro.gdb.tar.gz
+### AOI 4 - Shanghai
+To download the processed 200m x 200m tiles of AOI 4 (23 GB) with associated building footprints, run:
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_4_Shanghai/AOI_4_Shanghai_Train.tar.gz --request-payer requester AOI_4_Shanghai_Train.tar.gz
```
-##To download the Rio Point of Interest Dataset Extracted into GeoJSONs with associated .jpg
+### AOI 5 - Khartoum
+To download the processed 200m x 200m tiles of AOI 5 (4 GB) with associated building footprints, run:
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_5_Khartoum/AOI_5_Khartoum_Train.tar.gz --request-payer requester AOI_5_Khartoum_Train.tar.gz
+```
+
+### Point of Interest Dataset in ESRI GeoDatabase Format (31 GB)
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/vectorData/Rio_HGIS_Metro.gdb.tar.gz --request-payer requester Rio_HGIS_Metro.gdb.tar.gz
```
-## Warning this file is 29 GB
-aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/vectorData/Rio_HGIS_Metro_extract.tar --request-payer requester Rio_HGIS_Metro_extract.tar
+### Point of Interest Dataset Extracted into GeoJSONs with associated .jpg files (29 GB)
+```
+aws s3api get-object --bucket spacenet-dataset --key AOI_1_Rio/srcData/vectorData/Rio_HGIS_Metro_extract.tar --request-payer requester Rio_HGIS_Metro_extract.tar
```
diff --git a/docker/Dockerfile b/docker/Dockerfile
new file mode 100644
index 0000000..aec6838
--- /dev/null
+++ b/docker/Dockerfile
@@ -0,0 +1,93 @@
+# Base image (matches docker/standalone/cpu)
+FROM ubuntu:16.04
+LABEL maintainer dlindenbaum
+
+
+## List of Python packages needed
+###dataTools
+#import numpy as np
+#from osgeo import ogr, gdal, osr
+#import cv2
+###evalTools
+###geoTools
+#from osgeo import gdal, osr, ogr
+# import rtree
+#from osgeo import gdal, osr, ogr, gdalnumeric
+#from PIL import Image
+
+## Install External Libraries for Pillow
+#apt-get install
+#libjpeg
+#zlib
+#libtiff
+#libfreetype
+#littlecms
+#libwebp
+#openjpeg
+#
+### install Python openCV
+#libopencv-dev \
+#python-opencv \
+#python-numpy \
+#python-pip \
+#python-setuptools \
+#gdal-bin \
+#python-gdal
+
+## Install General Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ apt-utils \
+ build-essential \
+ cmake \
+ git \
+ wget \
+ vim \
+ python-dev \
+ python-pip \
+ python-setuptools
+RUN pip install --upgrade pip
+
+## Install Basics for Python
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ python-numpy \
+ python-scipy
+
+## Install Essentials for Pillow
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libjpeg-dev \
+ zlib1g \
+ libtiff5-dev \
+ libfreetype6-dev \
+ libwebp-dev \
+ libopenjpeg-dev
+
+RUN pip install Pillow
+
+## Install GDAL Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ gdal-bin \
+ python-gdal
+
+## Install OpenCV Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libopencv-dev \
+ python-opencv
+
+## Install Rtree
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libspatialindex-dev
+
+RUN pip install rtree
+
+ENV GIT_BASE=/opt/
+WORKDIR $GIT_BASE
+
+# Download spaceNetUtilities
+# FIXME: use ARG instead of ENV once DockerHub supports this
+ENV CLONE_TAG=spacenetV2
+RUN git clone -b ${CLONE_TAG} --depth 1 https://github.com/SpaceNetChallenge/utilities.git
+
+ENV PYUTILS_ROOT $GIT_BASE/utilities/python
+ENV PYTHONPATH $PYUTILS_ROOT:$PYTHONPATH
+
+WORKDIR /workspace
+
+
diff --git a/docker/standalone/cpu/Dockerfile b/docker/standalone/cpu/Dockerfile
new file mode 100644
index 0000000..451126a
--- /dev/null
+++ b/docker/standalone/cpu/Dockerfile
@@ -0,0 +1,64 @@
+FROM ubuntu:16.04
+LABEL maintainer dlindenbaum
+
+
+
+## Install General Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ apt-utils \
+ build-essential \
+ cmake \
+ git \
+ wget \
+ vim \
+ python-dev \
+ python-pip \
+ python-setuptools
+RUN pip install --upgrade pip
+
+## Install Basics for Python
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ python-numpy \
+ python-scipy
+
+## Install Essentials for Pillow
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libjpeg-dev \
+ zlib1g \
+ libtiff5-dev \
+ libfreetype6-dev \
+ libwebp-dev \
+ libopenjpeg-dev
+
+RUN pip install Pillow
+
+## Install GDAL Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ gdal-bin \
+ python-gdal
+
+## Install OpenCV Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libopencv-dev \
+ python-opencv
+
+## Install Rtree
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libspatialindex-dev
+
+RUN pip install rtree
+
+ENV GIT_BASE=/opt/
+WORKDIR $GIT_BASE
+
+# Download spaceNetUtilities
+# FIXME: use ARG instead of ENV once DockerHub supports this
+ENV CLONE_TAG=spacenetV2
+RUN git clone -b ${CLONE_TAG} --depth 1 https://github.com/SpaceNetChallenge/utilities.git
+
+ENV PYUTILS_ROOT $GIT_BASE/utilities/python
+ENV PYTHONPATH $PYUTILS_ROOT:$PYTHONPATH
+
+WORKDIR /workspace
+
+
diff --git a/docker/standalone/gpu/Dockerfile b/docker/standalone/gpu/Dockerfile
new file mode 100644
index 0000000..ad84509
--- /dev/null
+++ b/docker/standalone/gpu/Dockerfile
@@ -0,0 +1,63 @@
+FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04
+LABEL maintainer dlindenbaum
+
+
+## Install General Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ apt-utils \
+ build-essential \
+ cmake \
+ git \
+ wget \
+ vim \
+ python-dev \
+ python-pip \
+ python-setuptools
+RUN pip install --upgrade pip
+
+## Install Basics for Python
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ python-numpy \
+ python-scipy
+
+## Install Essentials for Pillow
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libjpeg-dev \
+ zlib1g \
+ libtiff5-dev \
+ libfreetype6-dev \
+ libwebp-dev \
+ libopenjpeg-dev
+
+RUN pip install Pillow
+
+## Install GDAL Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ gdal-bin \
+ python-gdal
+
+## Install OpenCV Requirements
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libopencv-dev \
+ python-opencv
+
+## Install Rtree
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ libspatialindex-dev
+
+RUN pip install rtree
+
+ENV GIT_BASE=/opt/
+WORKDIR $GIT_BASE
+
+# Download spaceNetUtilities
+# FIXME: use ARG instead of ENV once DockerHub supports this
+ENV CLONE_TAG=spacenetV2
+RUN git clone -b ${CLONE_TAG} --depth 1 https://github.com/SpaceNetChallenge/utilities.git
+
+ENV PYUTILS_ROOT $GIT_BASE/utilities/python
+ENV PYTHONPATH $PYUTILS_ROOT:$PYTHONPATH
+
+WORKDIR /workspace
+
+
diff --git a/python/createCSVFromGEOJSON.py b/python/createCSVFromGEOJSON.py
new file mode 100644
index 0000000..6c7a008
--- /dev/null
+++ b/python/createCSVFromGEOJSON.py
@@ -0,0 +1,81 @@
+from spaceNetUtilities import labelTools as lT
+import os
+import glob
+import argparse
+
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-imgDir", "--imgDir", type=str,
+ help="Directory of Raster Images")
+ parser.add_argument("-geoDir", "--geojsonDir", type=str,
+ help="Directory of geojson files")
+ parser.add_argument("-o", "--outputCSV", type=str,
+ help="Output File Name and Location for CSV")
+ parser.add_argument("-pixPrecision", "--pixelPrecision", type=int,
+ help="Number of decimal places to include for pixel, uses round(xPix, pixPrecision)"
+ "Default = 2",
+ default=2)
+ parser.add_argument("--CreateProposalFile", help="Create ProposalsFile",
+ action="store_true")
+ parser.add_argument("-strip", "--stripOutFromGeoJson", type=str,
+ help="string delimited")
+ parser.add_argument("--DontstripFirstUnderScore", action="store_false")
+ args = parser.parse_args()
+
+ rasterDirectory = args.imgDir
+ geoJsonDirectory = args.geojsonDir
+ outputCSVFileName = args.outputCSV
+ createProposalFile = args.CreateProposalFile
+ if args.stripOutFromGeoJson:
+ stripList = args.stripOutFromGeoJson.split(' ')
+ else:
+ stripList =[]
+
+
+ #band3directory = '/usr/local/share/data/AOI_1_RIO/processed2/3band'
+ #band8directory = '/usr/local/share/data/AOI_1_RIO/processed2/8band'
+ #geoJsonDirectory = '/usr/local/share/data/AOI_1_RIO/processed2/geojson'
+
+ jsonList = []
+ chipSummaryList = []
+
+ #AOI_2_RIO_3Band_img997.tif
+ #AOI_2_RIO_img635.geojson
+
+
+ # find RasterPrecursor
+ rasterList = glob.glob(os.path.join(rasterDirectory, '*.tif'))
+ rasterPrefix = os.path.basename(rasterList[0])
+ rasterPrefix = rasterPrefix.split("_")[0]
+
+
+ geoJsonList = glob.glob(os.path.join(geoJsonDirectory, '*.geojson'))
+ for imageId in geoJsonList:
+ imageId = os.path.basename(imageId)
+ rasterName = imageId.replace('.geojson','.tif')
+
+ for stripItem in stripList:
+ rasterName = rasterName.replace(stripItem, '')
+
+
+ if args.DontstripFirstUnderScore:
+ rasterName = rasterPrefix+"_"+rasterName.split('_',1)[1]
+ else:
+ rasterName = rasterPrefix+"_"+rasterName
+ print(imageId)
+ print(os.path.join(rasterDirectory,rasterName))
+ chipSummary = {'chipName': os.path.join(rasterDirectory, rasterName),
+ 'geoVectorName': os.path.join(geoJsonDirectory, imageId),
+ 'imageId': os.path.splitext(imageId)[0]}
+
+ chipSummaryList.append(chipSummary)
+
+ print("starting")
+ lT.createCSVSummaryFile(chipSummaryList, outputCSVFileName,
+ replaceImageID=rasterPrefix+"_",
+ createProposalsFile=createProposalFile,
+ pixPrecision=args.pixelPrecision)
+
+ print("finished")
\ No newline at end of file
diff --git a/python/createDataSpaceNet.py b/python/createDataSpaceNet.py
new file mode 100644
index 0000000..ab12c36
--- /dev/null
+++ b/python/createDataSpaceNet.py
@@ -0,0 +1,324 @@
+import os
+import glob
+import random
+from spaceNetUtilities import labelTools as lT
+from spaceNetUtilities import geoTools as gT
+import argparse
+
+
+def processRasterChip(rasterImage, rasterDescription, geojson, geojsonDescription, outputDirectory='',
+ imagePixSize=-1, clipOverlap=0.0, randomClip=False,
+ minpartialPerc=0.0,
+ outputPrefix=''):
+
+ # cut Image to Size
+ chipSummaryList=[]
+ if imagePixSize>0:
+
+ rasterFileList = [[rasterImage, rasterDescription]]
+ shapeFileSrcList = [[geojson, geojsonDescription]]
+ # cut image to size
+ print(rasterFileList)
+ chipSummaryList = gT.cutChipFromMosaic(rasterFileList,
+ shapeFileSrcList,
+ outputDirectory=outputDirectory,
+ outputPrefix=outputPrefix,
+ clipSizeMX=imagePixSize,
+ clipSizeMY=imagePixSize,
+ minpartialPerc=minpartialPerc,
+ createPix=True,
+ clipOverlap=clipOverlap,
+ noBlackSpace=True,
+ randomClip=-1
+ )
+
+
+
+
+
+ else:
+ chipSummary = {'rasterSource': rasterImage,
+ 'chipName': rasterImage,
+ 'geoVectorName': geojson,
+ 'pixVectorName': ''
+ }
+
+ chipSummaryList.append(chipSummary)
+
+
+ return chipSummaryList
+
+
+def processChipSummaryList(chipSummaryList, outputDirectory='', annotationType='PASCALVOC2012', outputFormat='GTiff',
+ outputPixType='',
+ datasetName='spacenetV2',
+ folder_name='folder_name'
+ ):
+
+ if outputPixType == '':
+ convertTo8Bit = False
+ else:
+ convertTo8Bit = True
+
+ entryList = []
+ for chipSummary in chipSummaryList:
+
+
+
+ annotationName = os.path.basename(chipSummary['rasterSource'])
+ annotationName = annotationName.replace('.tif', '.xml')
+ annotationName = os.path.join(outputDirectory, annotationName)
+
+
+
+ if annotationType=='PASCALVOC2012':
+ entry = lT.geoJsonToPASCALVOC2012(annotationName, chipSummary['geoVectorName'], chipSummary['rasterSource'],
+ dataset='spacenetV2',
+ folder_name='spacenetV2',
+ annotationStyle=annotationType,
+ segment=True,
+ bufferSizePix=2.5,
+ convertTo8Bit=convertTo8Bit,
+ outputPixType=outputPixType
+ )
+ elif annotationType=='DARKNET':
+ entry = lT.geoJsonToDARKNET(annotationName, chipSummary['geoVectorName'], chipSummary['rasterSource'],
+ dataset='spacenetV2',
+ folder_name='spacenetV2',
+ annotationStyle=annotationType,
+ convertTo8Bit=convertTo8Bit,
+ outputPixType=outputPixType,
+ outputFormat=outputFormat
+ )
+
+ elif annotationType=='SBD':
+ basename = os.path.basename(chipSummary['rasterSource'])
+ annotationName = basename.replace('.tif', '.mat')
+ annotationName_cls = os.path.join(outputDirectory,'cls', annotationName)
+ annotationName_inst = os.path.join(outputDirectory,'inst', annotationName)
+
+ #Check to make sure output directories exist, if not make it.
+ if not os.path.exists(os.path.join(outputDirectory,'cls')):
+ os.makedirs(os.path.join(outputDirectory,'cls'))
+ if not os.path.exists(os.path.join(outputDirectory,'inst')):
+ os.makedirs(os.path.join(outputDirectory,'inst'))
+
+ entry = lT.geoJsonToSBD(annotationName_cls, annotationName_inst, chipSummary['geoVectorName'], chipSummary['rasterSource'],
+ dataset='spacenetV2',
+ folder_name='spacenetV2',
+ annotationStyle=annotationType,
+ segment=True,
+ convertTo8Bit=convertTo8Bit,
+ outputPixType=outputPixType,
+ outputFormat=outputFormat
+ )
+ else:
+ print("Annotation Type = {} is not supported yet".format(annotationType))
+ return -1
+
+
+
+ entryList.append(entry)
+
+ return entryList
+
+def createTrainTestSplitSummary(entryList, trainTestSplit=0.8,
+ outputDirectory='',
+ annotationSummaryPrefix='',
+ annotationType='PASCALVOC2012',
+ shuffleList=True,
+ ):
+
+ if shuffleList:
+ random.shuffle(entryList)
+
+
+ splitPoint=int(round(len(entryList)*trainTestSplit))
+ trainvalList = entryList[0:splitPoint]
+    testList = entryList[splitPoint:]  # everything after the trainval split goes into the test set
+
+
+ trainValFileName = os.path.join(outputDirectory, annotationSummaryPrefix+'trainval.txt')
+ print('creating trainval.txt {} entries'.format(len(trainvalList)))
+ print('Writing to TrainVal List to file: {}'.format(trainValFileName))
+ with open(trainValFileName, 'w') as f:
+ for entry in trainvalList:
+ if annotationType=='SBD':
+ f.write('{} {} {}\n'.format(entry['rasterFileName'], entry['annotationName_cls'],
+ entry['annotationName_inst']))
+ else:
+ f.write('{} {}\n'.format(entry['rasterFileName'], entry['annotationName']))
+
+
+ testFileName = os.path.join(outputDirectory, annotationSummaryPrefix+'test.txt')
+ testNameSizeFileName = os.path.join(outputDirectory, annotationSummaryPrefix + 'test_name_size.txt')
+ print('creating test.txt {} entries'.format(len(testList)))
+ print('Writing to Test List to file: {}'.format(testFileName))
+ with open(testFileName, 'w') as f, \
+ open(testNameSizeFileName, 'w') as fname:
+ for entry in testList:
+ if annotationType == 'SBD':
+ f.write('{} {} {}\n'.format(entry['rasterFileName'], entry['annotationName_cls'],
+ entry['annotationName_inst']))
+ fname.write('{} {} {}\n'.format(entry['basename'], entry['width'], entry['height']))
+ else:
+ f.write('{} {}\n'.format(entry['rasterFileName'], entry['annotationName']))
+ fname.write('{} {} {}\n'.format(entry['basename'], entry['width'], entry['height']))
+
+
+ return (trainValFileName, testFileName, testNameSizeFileName)
+
+if __name__ == '__main__':
+
+ #python createDataSpaceNet.py /data/spacenet_sample/AOI_2_Vegas_Train/ RGB-PanSharpen \
+ # --outputDirectory /data/spacenet_sample/annotations/ \
+ # --imgSizePix 416
+
+ # python createDataSpaceNet.py /data/spacenet_sample/AOI_2_Vegas_Train/
+ # RGB-PanSharpen --outputDirectory /data/spacenet_sample/annotations/ --imgSizePix 416
+ # --annotationType PASCALVOC2012 --convertTo8Bit
+
+
+ # python createDataSpaceNet.py /data/spacenet_sample/AOI_2_Vegas_Train/
+ # RGB-PanSharpen --outputDirectory /data/spacenet_sample/annotations/ --imgSizePix 416 --annotationType DARKNET
+ # --convertTo8Bit
+ parser = argparse.ArgumentParser(description='Process SrcData for Region ComputerVision Dataset')
+ parser.add_argument("srcSpaceNetFolder",
+ help="location of Spacenet AOI Data i.e. '/path/to/AOI_2_Vegas")
+ parser.add_argument("--srcImageryDirectory",
+ help="The Folder to look for imagery in"
+ "i.e. RGB-PanSharpen",
+ default='RGB-PanSharpen')
+ parser.add_argument("--geoJsonDirectory",
+ help="name of geojson folder typedirectory to use"
+ "i.e. 'buildings'",
+ default='buildings')
+ parser.add_argument("--outputDirectory",
+ help="Location To place processed Files"
+ "If not used output directory will be"
+ "os.path.join(srcSpacenetFolder, 'annotations')",
+ default='annotations')
+ parser.add_argument("--annotationSummaryPrefix",
+ help="Prefix to attach to trainval.txt and test.txt",
+ default='')
+ parser.add_argument("--imgSizePix",
+ help="set the dimensions of the square image in pixels"
+ "Default is -1, images are not modified",
+ type=int,
+ default=-1)
+ parser.add_argument("--annotationType",
+ help="Set the annotationType. Currently Supported are:"
+ "1. PASCALVOC2012"
+ "2. DARKNET"
+ "3. SBD"
+ "default is PASCALVOC2012",
+ default='PASCALVOC2012')
+ parser.add_argument("--outputFileType",
+ help="What type of image type would you like to output to currently supported are:"
+ "1. GTiff"
+ "2. JPEG",
+ default='GTiff')
+ parser.add_argument("--convertTo8Bit",
+ help='Convert Image from Native format to 8bit',
+ action='store_true')
+ parser.add_argument("--featureName",
+ help='Type of feature to be summarized by csv (i.e. Building)'
+ 'Currently in SpaceNet V2 Building is only label',
+ type=str,
+ default='Buildings')
+ parser.add_argument("--spacenetVersion",
+ help='Spacenet Version to process, '
+ 'Version 1 supports AOI_1_RIO, '
+ 'Version 2 is AOI_2_Vegas-AOI_5_Khartoum',
+ type=int,
+ default=2)
+ parser.add_argument("--trainTestSplit",
+ help='Decimal of data to use for training i.e. 0.8 = 80% of data for Training',
+ type=float,
+ default=0.8)
+
+ args = parser.parse_args()
+
+ entryList = []
+ srcSpaceNetDirectory = args.srcSpaceNetFolder
+
+ #listOfAOIs = [subdir for subdir in os.listdir(spaceNetDirectory) if
+ # os.path.isdir(os.path.join(spaceNetDirectory, subdir))]
+
+ listOfAOIs = [srcSpaceNetDirectory]
+ srcImageryDirectory = args.srcImageryDirectory # 'PAN', 'MUL, 'MUL-PanSharpen', 'RGB-PanSharpen'
+ if args.spacenetVersion == 2:
+ geojsonDirectory = os.path.join('geojson', args.geoJsonDirectory) # 'geojson/buildings/'
+ elif args.spacenetVersion == 1:
+ geojsonDirectory = os.path.join('vectordata','geojson')
+ # 'vectordata/geojson'
+ else:
+        print('Bad SpaceNet version submitted, version {} is not supported'.format(args.spacenetVersion))
+
+ if args.convertTo8Bit:
+
+ outputDataType = 'Byte'
+ outputFileType = args.outputFileType
+
+ else:
+ outputDataType = ''
+ outputFileType = ''
+
+ if args.outputDirectory == 'annotations':
+ fullPathAnnotationsDirectory = os.path.join(srcSpaceNetDirectory, 'annotations')
+ else:
+ fullPathAnnotationsDirectory = args.outputDirectory
+
+ for aoiSubDir in listOfAOIs:
+ fullPathSubDir = os.path.join(srcSpaceNetDirectory, aoiSubDir)
+
+ ## Create Annotations directory
+ #fullPathAnnotationsDirectory = os.path.join(fullPathSubDir, annotationsDirectory)
+ if not os.path.exists(fullPathAnnotationsDirectory):
+ os.makedirs(fullPathAnnotationsDirectory)
+ if not os.path.exists(os.path.join(fullPathAnnotationsDirectory, 'annotations')):
+ os.makedirs(os.path.join(fullPathAnnotationsDirectory, 'annotations'))
+
+ fullPathImageDirectory = os.path.join(fullPathSubDir, srcImageryDirectory)
+ fullPathGeoJsonDirectory = os.path.join(fullPathSubDir, geojsonDirectory)
+
+ listofRaster = sorted(glob.glob(os.path.join(fullPathImageDirectory, '*.tif')))
+ listofgeojson = sorted(glob.glob(os.path.join(fullPathGeoJsonDirectory, '*.geojson')))
+
+ print('fullpathImageDirectory = {}'.format(fullPathImageDirectory))
+ print('fullpathGeoJsonDirectory = {}'.format(fullPathGeoJsonDirectory))
+ if len(listofRaster) != len(listofgeojson):
+            print('Error: raster and geojson lists do not match; fix source errors')
+
+ break
+
+ else:
+
+ for rasterImage, geoJson in zip(listofRaster, listofgeojson):
+
+ chipSummaryList = processRasterChip(rasterImage, srcImageryDirectory,
+ geoJson, args.geoJsonDirectory,
+ outputDirectory=fullPathAnnotationsDirectory,
+ imagePixSize=args.imgSizePix, clipOverlap=0.0, randomClip=False,
+ minpartialPerc=0.0,
+ outputPrefix='')
+
+ entryListTmp = processChipSummaryList(chipSummaryList,
+ outputDirectory=os.path.join(fullPathAnnotationsDirectory, 'annotations'),
+ annotationType=args.annotationType,
+ outputFormat=outputFileType,
+ outputPixType=outputDataType,
+ datasetName='spacenetV2',
+ folder_name='folder_name'
+ )
+ print(entryListTmp)
+ entryList.extend(entryListTmp)
+
+ createTrainTestSplitSummary(entryList,
+ trainTestSplit=args.trainTestSplit,
+ outputDirectory=fullPathAnnotationsDirectory,
+ annotationSummaryPrefix=args.annotationSummaryPrefix,
+ annotationType=args.annotationType,
+ shuffleList=True
+ )
+
diff --git a/python/create_spacenet_AOI.py b/python/create_spacenet_AOI.py
new file mode 100644
index 0000000..fb64469
--- /dev/null
+++ b/python/create_spacenet_AOI.py
@@ -0,0 +1,112 @@
+from spaceNetUtilities import labelTools as lT
+import argparse
+import os
+import csv
+
+if __name__ == '__main__':
+
+
+ parser = argparse.ArgumentParser(description='Process SrcData for Region into small chips')
+ parser.add_argument("srcRasterList", help="csv file with path/to/raster.vrt, rasterDescripiton "
+ "i.e, path/to/AOI_#_Num_3band.vrt, 3band ")
+ parser.add_argument("geoJsonList", help="csv file with path/to/vector_buildings.geojson, vectorDescription"
+ "i.e, path/to/AOI_#_buildings.geojson, buildings")
+
+ parser.add_argument("--srcOutline", help="Vector Area describing extent of labeled area"
+ "If not specified, All of raster will be assumed to be labeled",
+ default='')
+
+ parser.add_argument("--outputDirectory", help="Location To place processed Files"
+ "If not used output directory will be"
+ "os.path.join(os.getcwd(), processed)",
+ default='')
+ parser.add_argument("--imgSizeM",
+ help="set the dimensions of the square image in meters "
+ "Default is 200m",
+ type=int,
+ default=200)
+ parser.add_argument("--AOI_Name",
+ help="AOI City Name i.e. RIO",
+ default='TEST')
+ parser.add_argument("--AOI_Num",
+ help='AOI Number i.e 3',
+ type=int,
+ default='0')
+ parser.add_argument("--mosaicGTIFF",
+ help='By default, all mosaic actions are done using virtual raster tiles. This is a low memory'
+ 'and is generally faster. However no physical geotiff mosaic is made. '
+ 'Enable this flag to create a GTIFF of the mosaiced area before tiling',
+ action="store_false")
+ parser.add_argument("--createPix",
+ help='Use imageSize in Pixels as opposed to Meters',
+ action="store_true")
+ parser.add_argument("--createSummaryCSV",
+ help='Create Summary CSV used for grading of SpaceNet Challenge V2',
+ action="store_true")
+ parser.add_argument("--csvLabel",
+ help='Type of csv to be created (i.e. All, Train, Test, Validate',
+ type=str,
+ default='All')
+ parser.add_argument("--featureName",
+ help='Type of feature to be summarized by csv (i.e. Building)',
+ type=str,
+ default='Buildings')
+
+ # geoJSON AOI boundary
+ args = parser.parse_args()
+
+
+
+ AOI_Name = 'RIO'
+ AOI_Num = 3
+ # outputDirectory Base Location
+
+
+ AOI_Name = args.AOI_Name
+ AOI_Num = args.AOI_Num
+ srcOutline = args.srcOutline
+ clipImageryToAOI=True
+ if srcOutline=='':
+ clipImageryToAOI=False
+
+ outputDirectory = args.outputDirectory
+
+ if outputDirectory=='':
+ # outputDirectory Base Location
+ outputDirectory = os.path.join(os.getcwd(), 'processed')
+
+ if not os.path.isdir(outputDirectory):
+ os.makedirs(outputDirectory)
+ # location of imagery
+
+ srcImageryList = []
+ with open(args.srcRasterList, 'rb') as csvfile:
+ print(args.srcRasterList)
+ csvreader = csv.reader(csvfile, delimiter=',')
+ for row in csvreader:
+ print(row)
+ if row:
+ if not row[0].startswith("#"):
+ srcImageryList.append([x.strip() for x in row])
+
+ srcVectorFileList = []
+ with open(args.geoJsonList, 'rb') as csvfile:
+ csvreader = csv.reader(csvfile, delimiter=',')
+ for row in csvreader:
+ if row:
+ if not row[0].startswith("#"):
+ srcVectorFileList.append([x.strip() for x in row])
+
+ lT.createAOIName(AOI_Name, AOI_Num,
+ srcImageryList,
+ srcOutline,
+ srcVectorFileList,
+ outputDirectory=outputDirectory,
+ clipImageryToAOI=clipImageryToAOI,
+ windowSizeMeters=args.imgSizeM,
+ vrtMosaic=args.mosaicGTIFF,
+ createPix=args.createPix,
+ createSummaryCSVChallenge=args.createSummaryCSV,
+ csvLabel=args.csvLabel,
+ featurName=args.featureName
+ )
\ No newline at end of file
diff --git a/python/evaluateScene.py b/python/evaluateScene.py
index 5e65d85..ce10e37 100644
--- a/python/evaluateScene.py
+++ b/python/evaluateScene.py
@@ -1,25 +1,26 @@
-from spaceNet import evalTools as eT
-from spaceNet import geoTools as gT
+from spaceNetUtilities import evalTools as eT
+from spaceNetUtilities import geoTools as gT
import numpy as np
-import sys
+import csv
import multiprocessing
import time
+import argparse
-if __name__ == "__main__":
- # load Truth and Test File Locations
- if len(sys.argv) > 1:
- truth_fp = sys.argv[1]
- test_fp = sys.argv[2]
- else:
- test_fp = '../testData/public_polygons_solution_3Band_envelope.geojson'
- truth_fp = '../testData/public_polygons_solution_3Band.geojson'
+
+def evaluateSpaceNetSolution(summaryTruthFile, summaryProposalFile, resultsOutputFile='', processgeoJson=False,
+ useParallelProcessing=False):
+
+ truth_fp = summaryTruthFile
+ test_fp = summaryProposalFile
# check for cores available
- if len(sys.argv) > 3:
- max_cpu = int(sys.argv[3])
- else:
+ if useParallelProcessing:
+
max_cpu = multiprocessing.cpu_count()
- parallel=False
+ parallel = True
+ else:
+ max_cpu = 1
+ parallel = False
# initialize scene counts
true_pos_counts = []
@@ -28,25 +29,31 @@
t0 = time.time()
# Start Ingest Of Truth and Test Case
- sol_polys = gT.importgeojson(truth_fp, removeNoBuildings=True)
- prop_polys = gT.importgeojson(test_fp)
+    if processgeoJson:
+ sol_polys = gT.import_summary_geojson(truth_fp, removeNoBuildings=True)
+ prop_polys = gT.import_summary_geojson(test_fp)
+ polyFlag = 'poly'
+ else:
+ sol_polys = gT.readwktcsv(truth_fp, removeNoBuildings=True)
+ prop_polys = gT.readwktcsv(test_fp)
+ polyFlag = 'polyPix'
t1 = time.time()
total = t1 - t0
print('time of ingest: ', total)
# Speed up search by preprocessing ImageId and polygonIds
-
test_image_ids = set([item['ImageId'] for item in prop_polys if item['ImageId'] > 0])
prop_polysIdList = np.asarray([item['ImageId'] for item in prop_polys if item["ImageId"] > 0 and \
- item['BuildingId']!=-1])
- prop_polysPoly = np.asarray([item['poly'] for item in prop_polys if item["ImageId"] > 0 and \
- item['BuildingId']!=-1])
+ item['BuildingId'] != -1])
+ prop_polysPoly = np.asarray([item[polyFlag] for item in prop_polys if item["ImageId"] > 0 and \
+ item['BuildingId'] != -1])
sol_polysIdsList = np.asarray([item['ImageId'] for item in sol_polys if item["ImageId"] > 0 and \
- item['BuildingId']!=-1])
- sol_polysPoly = np.asarray([item['poly'] for item in sol_polys if item["ImageId"] > 0 and \
- item['BuildingId']!=-1])
+ item['BuildingId'] != -1])
+ sol_polysPoly = np.asarray([item[polyFlag] for item in sol_polys if item["ImageId"] > 0 and \
+ item['BuildingId'] != -1])
bad_count = 0
F1ScoreList = []
cpu_count = min(multiprocessing.cpu_count(), max_cpu)
@@ -55,24 +62,24 @@
ResultList = []
eval_function_input_list = eT.create_eval_function_input((test_image_ids,
- (prop_polysIdList, prop_polysPoly),
- (sol_polysIdsList, sol_polysPoly)))
- # evalFunctionInput = creatEevalFunctionInput((test_image_ids,
- # (prop_polysIdList, prop_polysPoly),
- # (sol_polysIdsList, sol_polysPoly)))
+ (prop_polysIdList, prop_polysPoly),
+ (sol_polysIdsList, sol_polysPoly)))
+
# Calculate Values
t3 = time.time()
- print('time For DataCreation {}s'.format(t3-t1))
- #result_list = p.map(eT.evalfunction, eval_function_input_list)
- if parallel==False:
+ print('time For DataCreation {}s'.format(t3 - t1))
+
+ # result_list = p.map(eT.evalfunction, eval_function_input_list)
+ if parallel == False:
result_list = []
for eval_input in eval_function_input_list:
-
result_list.append(eT.evalfunction(eval_input))
else:
result_list = p.map(eT.evalfunction, eval_function_input_list)
- result_sum = np.sum(result_list, axis=0)
+ result_listNP = np.asarray([item[0] for item in result_list])
+
+ result_sum = np.sum(result_listNP, axis=0)
true_pos_total = result_sum[1]
false_pos_total = result_sum[2]
false_neg_total = result_sum[3]
@@ -81,13 +88,85 @@
print('False_Neg_Total', false_neg_total)
precision = float(true_pos_total) / (float(true_pos_total) + float(false_pos_total))
recall = float(true_pos_total) / (float(true_pos_total) + float(false_neg_total))
- F1ScoreTotal = 2.0 * precision*recall / (precision + recall)
+ F1ScoreTotal = 2.0 * precision * recall / (precision + recall)
print('F1Total', F1ScoreTotal)
t2 = time.time()
- total = t2-t0
- print('time of evaluation: {}'.format(t2-t1))
- print('time of evaluation {}s/imageId'.format((t2-t1)/len(result_list)))
+ total = t2 - t0
+ print('time of evaluation: {}'.format(t2 - t1))
+ print('time of evaluation {}s/imageId'.format((t2 - t1) / len(result_list)))
print('Total Time {}s'.format(total))
print(result_list)
- print(np.mean(result_list))
+ print(np.mean(result_listNP))
+
+ if resultsOutputFile != '':
+ with open(resultsOutputFile, 'w') as csvFile:
+ csvwriter = csv.writer(csvFile, delimiter=',')
+            csvwriter.writerow(['TruthFile', truth_fp])
+            csvwriter.writerow(['ProposalFile', test_fp])
+            csvwriter.writerow(['Summary Results'])
+            csvwriter.writerow(['F1Score Total', F1ScoreTotal])
+            csvwriter.writerow(['Precision', precision])
+            csvwriter.writerow(['Recall', recall])
+            csvwriter.writerow(['True Positive Total', true_pos_total])
+            csvwriter.writerow(['False Positive Total', false_pos_total])
+            csvwriter.writerow(['False Negative Total', false_neg_total])
+            csvwriter.writerow([''])
+            csvwriter.writerow(['Per Image Stats'])
+            csvwriter.writerow(['ImageId', 'F1Score', 'True Positive Count', 'False Positive Count'])
+ for result in result_list:
+ tmpList = [result[1]]
+ tmpList.extend(result[0])
+ csvwriter.writerow(tmpList)
+
+
+    resultsDict = {'TruthFile': truth_fp,
+                   'ProposalFile': test_fp,
+                   'F1ScoreTotal': F1ScoreTotal,
+                   'PrecisionTotal': precision,
+                   'RecallTotal': recall,
+                   'TruePositiveTotal': true_pos_total,
+                   'FalsePositiveTotal': false_pos_total,
+                   'FalseNegativeTotal': false_neg_total,
+                   'PerImageStatsResultList': result_list,
+                   'OutputSummaryFile': resultsOutputFile}
+
+ return resultsDict
+
+
+if __name__ == "__main__":
+
+ parser = argparse.ArgumentParser(description='Evaluate Score for SpaceNet')
+ parser.add_argument("summaryTruthFile",
+ help="The Location of Summary Ground Truth csv File"
+ "Format is {},{},{},{}.format(ImageId, BuildingId, polygonPixWKT, polygonGeoPix),"
+ "unless --geoJson flag is set"
+ "See spaceNet competition details for more information about file format"
+ )
+ parser.add_argument("summaryProposalFile",
+ help="The Location of summary Propsal csv File"
+ "Format is {},{},{},{}.format(ImageId, BuildingId, polygonPixWKT, Confidence)"
+ "unless --geoJson flag is set"
+ )
+ parser.add_argument("--resultsOutputFile",
+ help="If you would like summary data outwritten to a file, specify the file",
+ default='')
+ parser.add_argument("--geoJson",
+ help='Convert Image from Native format to 8bit',
+ action='store_true')
+
+ parser.add_argument("--useParallelProcessing",
+ help='Convert Image from Native format to 8bit',
+ action='store_true')
+
+ args = parser.parse_args()
+ # load Truth and Test File Locations
+
+ summaryDict = evaluateSpaceNetSolution(args.summaryTruthFile,
+ args.summaryProposalFile,
+ resultsOutputFile=args.resultsOutputFile,
+ processgeoJson=args.geoJson,
+ useParallelProcessing=args.useParallelProcessing)
+
+
+
diff --git a/python/externalVectorProcessing.py b/python/externalVectorProcessing.py
new file mode 100644
index 0000000..da921fb
--- /dev/null
+++ b/python/externalVectorProcessing.py
@@ -0,0 +1,141 @@
+from spaceNetUtilities import geoTools as gT
+from spaceNetUtilities import labelTools as lT
+from osgeo import gdal, ogr, osr
+import argparse
+import os
+import subprocess
+import glob
+
+
+
+def buildTindex(rasterFolder, rasterExtention='.tif'):
+ rasterList = glob.glob(os.path.join(rasterFolder, '*{}'.format(rasterExtention)))
+ print(rasterList)
+
+ print(os.path.join(rasterFolder, '*{}'.format(rasterExtention)))
+
+ memDriver = ogr.GetDriverByName('MEMORY')
+ gTindex = memDriver.CreateDataSource('gTindex')
+ srcImage = gdal.Open(rasterList[0])
+ spat_ref = osr.SpatialReference()
+ spat_ref.SetProjection(srcImage.GetProjection())
+ gTindexLayer = gTindex.CreateLayer("gtindexlayer", spat_ref, geom_type=ogr.wkbPolygon)
+
+ # Add an ID field
+ idField = ogr.FieldDefn("location", ogr.OFTString)
+ gTindexLayer.CreateField(idField)
+
+ # Create the feature and set values
+ featureDefn = gTindexLayer.GetLayerDefn()
+
+
+
+ for rasterFile in rasterList:
+ srcImage = gdal.Open(rasterFile)
+
+ geoTrans, polyToCut, ulX, ulY, lrX, lrY = gT.getRasterExtent(srcImage)
+
+ feature = ogr.Feature(featureDefn)
+ feature.SetGeometry(polyToCut)
+ feature.SetField("location", rasterFile)
+ gTindexLayer.CreateFeature(feature)
+ feature = None
+
+
+ return gTindex, gTindexLayer
+
+
+def createTiledGeoJsonFromSrc(rasterFolderLocation, vectorSrcFile, geoJsonOutputDirectory, rasterTileIndex='',
+ geoJsonPrefix='GEO', rasterFileExtenstion='.tif',
+ rasterPrefixToReplace='PAN'
+ ):
+ if rasterTileIndex == '':
+ gTindex, gTindexLayer = buildTindex(rasterFolderLocation, rasterExtention=rasterFileExtenstion)
+ else:
+ gTindex = ogr.Open(rasterTileIndex,0)
+ gTindexLayer = gTindex.GetLayer()
+
+ shapeSrc = ogr.Open(vectorSrcFile,0)
+ chipSummaryList = []
+ for feature in gTindexLayer:
+ featureGeom = feature.GetGeometryRef()
+ rasterFileName = feature.GetField('location')
+ rasterFileBaseName = os.path.basename(rasterFileName)
+ outGeoJson = rasterFileBaseName.replace(rasterPrefixToReplace, geoJsonPrefix)
+ outGeoJson = outGeoJson.replace(rasterFileExtenstion, '.geojson')
+ outGeoJson = os.path.join(geoJsonOutputDirectory, outGeoJson)
+
+ gT.clipShapeFile(shapeSrc, outGeoJson, featureGeom, minpartialPerc=0.0, debug=False)
+ imageId = rasterFileBaseName.replace(rasterPrefixToReplace+"_", "")
+ chipSummary = {'chipName': rasterFileName,
+ 'geoVectorName': outGeoJson,
+ 'imageId': os.path.splitext(imageId)[0]}
+
+ chipSummaryList.append(chipSummary)
+
+ return chipSummaryList
+
+if __name__ == "__main__":
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-imgDir", "--imgDir", type=str,
+ help="Directory of Raster Images")
+ parser.add_argument("-vecSrc", "--vectorSrcFile", type=str,
+ help="Geo spatial Vector src file supported by GDAL and OGR")
+ parser.add_argument("-vecPrFx", "--vectorPrefix", type=str,
+ help="Prefix to attach to image id to indicate type of geojson created",
+ default='OSM')
+ parser.add_argument("-rastPrFx", "--rasterPrefix", type=str,
+ help="Prefix of raster images to replace when creating geojson of geojson created",
+ default='PAN')
+ parser.add_argument("-rastExt", "--rasterExtension", type=str,
+ help="Extension of raster images to i.e. .tif, .png, .jpeg",
+ default='.tif')
+
+ parser.add_argument("-o", "--outputCSV", type=str,
+ help="Output file name and location for truth summary CSV equivalent to SpacenetV2 competition")
+ parser.add_argument("-pixPrecision", "--pixelPrecision", type=int,
+ help="Number of decimal places to include for pixel, uses round(xPix, pixPrecision)"
+ "Default = 2",
+ default=2)
+ parser.add_argument("--CreateProposalFile",
+ help="Create proposals file in format approriate for SpacenetV2 competition",
+ action="store_true")
+
+
+
+
+ args = parser.parse_args()
+
+
+
+ rasterFolderLocation = args.imgDir
+    vectorSrcFile = args.vectorSrcFile
+ vectorPrefix = args.vectorPrefix
+ rasterPrefix = args.rasterPrefix
+    pixPrecision = args.pixelPrecision
+ createProposalFile = args.CreateProposalFile
+ rasterFileExtension = args.rasterExtension
+
+ rasterFolderBaseName = os.path.basename(rasterFolderLocation)
+ if rasterFolderBaseName == "":
+ rasterFolderBaseName = os.path.basename(os.path.dirname(rasterFolderLocation))
+
+ geoJsonOutputDirectory = os.path.join(os.path.dirname(vectorSrcFile), rasterFolderBaseName)
+ chipSummaryList = createTiledGeoJsonFromSrc(rasterFolderLocation, vectorSrcFile, geoJsonOutputDirectory, rasterTileIndex='',
+ geoJsonPrefix=vectorPrefix, rasterFileExtenstion=rasterFileExtension,
+ rasterPrefixToReplace=rasterPrefix
+ )
+
+
+ outputCSVFileName = geoJsonOutputDirectory+"OSM_Proposal.csv"
+ lT.createCSVSummaryFile(chipSummaryList, outputCSVFileName,
+ replaceImageID=rasterPrefix+"_",
+ pixPrecision=pixPrecision,
+ createProposalsFile=createProposalFile
+ )
+
+
+
+
+
diff --git a/python/spaceNet/geoTools.py b/python/spaceNet/geoTools.py
deleted file mode 100644
index df9cf05..0000000
--- a/python/spaceNet/geoTools.py
+++ /dev/null
@@ -1,458 +0,0 @@
-from osgeo import gdal, osr, ogr
-from pandas import pandas as pd
-import numpy as np
-import os
-import csv
-import rtree
-import subprocess
-
-
-def importgeojson(geojsonfilename, removeNoBuildings=False):
- # driver = ogr.GetDriverByName('geojson')
- datasource = ogr.Open(geojsonfilename, 0)
-
- layer = datasource.GetLayer()
- print(layer.GetFeatureCount())
-
- polys = []
- for idx, feature in enumerate(layer):
-
- poly = feature.GetGeometryRef()
-
- if poly:
- polys.append({'ImageId': feature.GetField('ImageId'), 'BuildingId': feature.GetField('BuildingId'),
- 'poly': feature.GetGeometryRef().Clone()})
-
- return polys
-
-
-def readwktcsv(csv_path):
- #
- # csv Format Expected = ['ImageId', 'BuildingId', 'PolygonWKT_Pix', 'PolygonWKT_Geo']
- # returns list of Dictionaries {'ImageId': image_id, 'BuildingId': building_id, 'poly': poly}
- # image_id is a string,
- # BuildingId is an integer,
- # poly is a ogr.Geometry Polygon
-
- # buildinglist = []
- # polys_df = pd.read_csv(csv_path)
- # image_ids = set(polys_df['ImageId'].tolist())
- # for image_id in image_ids:
- # img_df = polys_df.loc[polys_df['ImageId'] == image_id]
- # building_ids = set(img_df['BuildingId'].tolist())
- # for building_id in building_ids:
- #
- # building_df = img_df.loc[img_df['BuildingId'] == building_id]
- # poly = ogr.CreateGeometryFromWkt(building_df.iloc[0, 2])
- # buildinglist.append({'ImageId': image_id, 'BuildingId': building_id, 'poly': poly})
- buildinglist = []
- with open(csv_path, 'rb') as csvfile:
- building_reader = csv.reader(csvfile, delimiter=',', quotechar='"')
- next(building_reader, None) # skip the headers
- for row in building_reader:
- poly = ogr.CreateGeometryFromWkt(row[2])
- buildinglist.append({'ImageId': row[0], 'BuildingId': int(row[1]), 'poly': poly})
-
- return buildinglist
-
-
-def exporttogeojson(geojsonfilename, buildinglist):
- #
- # geojsonname should end with .geojson
- # building list should be list of dictionaries
- # list of Dictionaries {'ImageId': image_id, 'BuildingId': building_id, 'poly': poly}
- # image_id is a string,
- # BuildingId is an integer,
- # poly is a ogr.Geometry Polygon
- #
- # returns geojsonfilename
-
- driver = ogr.GetDriverByName('geojson')
- if os.path.exists(geojsonfilename):
- driver.DeleteDataSource(geojsonfilename)
- datasource = driver.CreateDataSource(geojsonfilename)
- layer = datasource.CreateLayer('buildings', geom_type=ogr.wkbPolygon)
- field_name = ogr.FieldDefn("ImageId", ogr.OFTString)
- field_name.SetWidth(75)
- layer.CreateField(field_name)
- layer.CreateField(ogr.FieldDefn("BuildingId", ogr.OFTInteger))
-
- # loop through buildings
- for building in buildinglist:
- # create feature
- feature = ogr.Feature(layer.GetLayerDefn())
- feature.SetField("ImageId", building['ImageId'])
- feature.SetField("BuildingId", building['BuildingId'])
- feature.SetGeometry(building['poly'])
-
- # Create the feature in the layer (geojson)
- layer.CreateFeature(feature)
- # Destroy the feature to free resources
- feature.Destroy()
-
- datasource.Destroy()
-
- return geojsonfilename
-
-
-def createmaskfrompolygons(polygons):
- pass
-
-
-def latLonToPixel(lat, lon, input_raster='', targetsr='', geomTransform=''):
- sourcesr = osr.SpatialReference()
- sourcesr.ImportFromEPSG(4326)
-
- geom = ogr.Geometry(ogr.wkbPoint)
- geom.AddPoint(lon, lat)
-
- if targetsr == '':
- src_raster = gdal.Open(input_raster)
- targetsr = osr.SpatialReference()
- targetsr.ImportFromWkt(src_raster.GetProjectionRef())
- coordTrans = osr.CoordinateTransformation(sourcesr, targetsr)
- if geomTransform == '':
- src_raster = gdal.Open(input_raster)
- transform = src_raster.GetGeoTransform()
- else:
- transform = geomTransform
-
- xOrigin = transform[0]
- # print xOrigin
- yOrigin = transform[3]
- # print yOrigin
- pixelWidth = transform[1]
- # print pixelWidth
- pixelHeight = transform[5]
- # print pixelHeight
- geom.Transform(coordTrans)
- # print geom.GetPoint()
- xPix = (geom.GetPoint()[0] - xOrigin) / pixelWidth
- yPix = (geom.GetPoint()[1] - yOrigin) / pixelHeight
-
- return (xPix, yPix)
-
-
-def pixelToLatLon(xPix, yPix, inputRaster, targetSR=''):
- if targetSR == '':
- targetSR = osr.SpatialReference()
- targetSR.ImportFromEPSG(4326)
-
- geom = ogr.Geometry(ogr.wkbPoint)
- srcRaster = gdal.Open(inputRaster)
- sourceSR = osr.SpatialReference()
- sourceSR.ImportFromWkt(srcRaster.GetProjectionRef())
- coordTrans = osr.CoordinateTransformation(sourceSR, targetSR)
-
- transform = srcRaster.GetGeoTransform()
- xOrigin = transform[0]
- yOrigin = transform[3]
- pixelWidth = transform[1]
- pixelHeight = transform[5]
-
- xCoord = (xPix * pixelWidth) + xOrigin
- yCoord = (yPix * pixelHeight) + yOrigin
- geom.AddPoint(xCoord, yCoord)
- geom.Transform(coordTrans)
- return (geom.GetX(), geom.GetY())
-
-
-def geoPolygonToPixelPolygonWKT(geom, inputRaster, targetSR, geomTransform):
- # Returns Pixel Coordinate List and GeoCoordinateList
-
- polygonPixBufferList = []
- polygonPixBufferWKTList = []
- if geom.GetGeometryName() == 'POLYGON':
- polygonPix = ogr.Geometry(ogr.wkbPolygon)
- for ring in geom:
- # GetPoint returns a tuple not a Geometry
- ringPix = ogr.Geometry(ogr.wkbLinearRing)
-
- for pIdx in xrange(ring.GetPointCount()):
- lon, lat, z = ring.GetPoint(pIdx)
- xPix, yPix = latLonToPixel(lat, lon, inputRaster, targetSR, geomTransform)
- ringPix.AddPoint(xPix, yPix)
-
- polygonPix.AddGeometry(ringPix)
- polygonPixBuffer = polygonPix.Buffer(0.0)
- polygonPixBufferList.append([polygonPixBuffer, geom])
-
- elif geom.GetGeometryName() == 'MULTIPOLYGON':
-
- for poly in geom:
- polygonPix = ogr.Geometry(ogr.wkbPolygon)
- for ring in geom:
- # GetPoint returns a tuple not a Geometry
- ringPix = ogr.Geometry(ogr.wkbLinearRing)
-
- for pIdx in xrange(ring.GetPointCount()):
- lon, lat, z = ring.GetPoint(pIdx)
- xPix, yPix = latLonToPixel(lat, lon, inputRaster, targetSR, geomTransform)
- ringPix.AddPoint(xPix, yPix)
-
- polygonPix.AddGeometry(ringPix)
- polygonPixBuffer = polygonPix.Buffer(0.0)
- polygonPixBufferList.append([polygonPixBuffer, geom])
-
- for polygonTest in polygonPixBufferList:
- if polygonTest[0].GetGeometryName() == 'POLYGON':
- polygonPixBufferWKTList.append([polygonTest[0].ExportToWkt(), polygonTest[1].ExportToWkt()])
- elif polygonTest[0].GetGeometryName() == 'MULTIPOLYGON':
- for polygonTest2 in polygonTest[0]:
- polygonPixBufferWKTList.append([polygonTest2.ExportToWkt(), polygonTest[1].ExportToWkt()])
-
- return polygonPixBufferWKTList
-
-
-def convert_wgs84geojson_to_pixgeojson(wgs84geojson, inputraster, image_id=[], pixelgeojson=[]):
- dataSource = ogr.Open(wgs84geojson, 0)
- layer = dataSource.GetLayer()
- print(layer.GetFeatureCount())
- building_id = 0
- # check if geoJsonisEmpty
- buildinglist = []
- if not image_id:
- image_id = inputraster.replace(".tif", "")
-
- if layer.GetFeatureCount() > 0:
- srcRaster = gdal.Open(inputraster)
- targetSR = osr.SpatialReference()
- targetSR.ImportFromWkt(srcRaster.GetProjectionRef())
- geomTransform = srcRaster.GetGeoTransform()
-
- for feature in layer:
-
- geom = feature.GetGeometryRef()
-
- ## Calculate 3 band
- polygonWKTList = geoPolygonToPixelPolygonWKT(geom, inputraster, targetSR, geomTransform)
-
- for polygonWKT in polygonWKTList:
- building_id += 1
- buildinglist.append({'ImageId': image_id,
- 'BuildingId': building_id,
- 'poly': ogr.CreateGeometryFromWkt(polygonWKT)})
-
- if pixelgeojson:
- exporttogeojson(pixelToLatLon, buildinglist=buildinglist)
-
- return buildinglist
-
-
-def create_rtreefromdict(buildinglist):
- # create index
- index = rtree.index.Index(interleaved=False)
- for idx, building in enumerate(buildinglist):
- index.insert(idx, building['poly'].GetEnvelope())
-
- return index
-
-
-def create_rtree_from_poly(poly_list):
- # create index
- index = rtree.index.Index(interleaved=False)
- for idx, building in enumerate(poly_list):
- index.insert(idx, building.GetEnvelope())
-
- return index
-
-
-def search_rtree(test_building, index):
- # input test poly ogr.Geometry and rtree index
- if test_building.GetGeometryName() == 'POLYGON' or \
- test_building.GetGeometryName() == 'MULTIPOLYGON':
- fidlist = index.intersection(test_building.GetEnvelope())
- else:
- fidlist = []
-
- return fidlist
-
-
-def get_envelope(poly):
- env = poly.GetEnvelope()
-
- # Get Envelope returns a tuple (minX, maxX, minY, maxY)
- # Create ring
- ring = ogr.Geometry(ogr.wkbLinearRing)
- ring.AddPoint(env[0], env[2])
- ring.AddPoint(env[0], env[3])
- ring.AddPoint(env[1], env[3])
- ring.AddPoint(env[1], env[2])
- ring.AddPoint(env[0], env[2])
-
- # Create polygon
- poly1 = ogr.Geometry(ogr.wkbPolygon)
- poly1.AddGeometry(ring)
-
- return poly1
-
-def utm_getZone(longitude):
- return (int(1+(longitude+180.0)/6.0))
-
-
-def utm_isNorthern(latitude):
- if (latitude < 0.0):
- return 0
- else:
- return 1
-
-
-def createUTMTransform(polyGeom):
- # pt = polyGeom.Boundary().GetPoint()
- utm_zone = utm_getZone(polyGeom.GetEnvelope()[0])
- is_northern = utm_isNorthern(polyGeom.GetEnvelope()[2])
- utm_cs = osr.SpatialReference()
- utm_cs.SetWellKnownGeogCS('WGS84')
- utm_cs.SetUTM(utm_zone, is_northern);
- wgs84_cs = osr.SpatialReference()
- wgs84_cs.ImportFromEPSG(4326)
-
- transform_WGS84_To_UTM = osr.CoordinateTransformation(wgs84_cs, utm_cs)
- transform_UTM_To_WGS84 = osr.CoordinateTransformation(utm_cs, wgs84_cs)
-
- return transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs
-
-
-def getRasterExtent(srcImage):
- geoTrans = srcImage.GetGeoTransform()
- ulX = geoTrans[0]
- ulY = geoTrans[3]
- xDist = geoTrans[1]
- yDist = geoTrans[5]
- rtnX = geoTrans[2]
- rtnY = geoTrans[4]
-
- cols = srcImage.RasterXSize
- rows = srcImage.RasterYSize
-
- lrX = ulX + xDist * cols
- lrY = ulY + yDist * rows
-
- # Create ring
- ring = ogr.Geometry(ogr.wkbLinearRing)
- ring.AddPoint(lrX, lrY)
- ring.AddPoint(lrX, ulY)
- ring.AddPoint(ulX, ulY)
- ring.AddPoint(ulX, lrY)
- ring.AddPoint(lrX, lrY)
-
- # Create polygon
- poly = ogr.Geometry(ogr.wkbPolygon)
- poly.AddGeometry(ring)
-
- return geoTrans, poly, ulX, ulY, lrX, lrY
-
-
-def createPolygonFromCorners(lrX,lrY,ulX, ulY):
- # Create ring
- ring = ogr.Geometry(ogr.wkbLinearRing)
- ring.AddPoint(lrX, lrY)
- ring.AddPoint(lrX, ulY)
- ring.AddPoint(ulX, ulY)
- ring.AddPoint(ulX, lrY)
- ring.AddPoint(lrX, lrY)
-
- # Create polygon
- poly = ogr.Geometry(ogr.wkbPolygon)
- poly.AddGeometry(ring)
-
- return poly
-
-
-def clipShapeFile(shapeSrc, outputFileName, polyToCut):
-
- source_layer = shapeSrc.GetLayer()
- source_srs = source_layer.GetSpatialRef()
- # Create the output Layer
- outGeoJSon = outputFileName.replace('.tif', '.geojson')
- outDriver = ogr.GetDriverByName("geojson")
- if os.path.exists(outGeoJSon):
- outDriver.DeleteDataSource(outGeoJSon)
-
- outDataSource = outDriver.CreateDataSource(outGeoJSon)
- outLayer = outDataSource.CreateLayer("groundTruth", source_srs, geom_type=ogr.wkbPolygon)
- # Add input Layer Fields to the output Layer
- inLayerDefn = source_layer.GetLayerDefn()
- for i in range(0, inLayerDefn.GetFieldCount()):
- fieldDefn = inLayerDefn.GetFieldDefn(i)
- outLayer.CreateField(fieldDefn)
- outLayer.CreateField(ogr.FieldDefn("partialBuilding", ogr.OFTInteger))
- outLayerDefn = outLayer.GetLayerDefn()
- source_layer.SetSpatialFilter(polyToCut)
- for inFeature in source_layer:
-
- outFeature = ogr.Feature(outLayerDefn)
-
- for i in range (0, inLayerDefn.GetFieldCount()):
- outFeature.SetField(inLayerDefn.GetFieldDefn(i).GetNameRef(), inFeature.GetField(i))
-
- geom = inFeature.GetGeometryRef()
- geomNew = geom.Intersection(polyToCut)
- if geomNew:
- if geom.GetArea() == geomNew.GetArea():
- outFeature.SetField("partialBuilding", 0)
- else:
- outFeature.SetField("partialBuilding", 1)
- else:
- outFeature.SetField("partialBuilding", 1)
-
-
- outFeature.SetGeometry(geomNew)
- outLayer.CreateFeature(outFeature)
-
-
-def cutChipFromMosaic(rasterFile, shapeFileSrc, outlineSrc,outputDirectory='', outputPrefix='clip_',
- clipSizeMX=100, clipSizeMY=100):
- #rasterFile = '/Users/dlindenbaum/dataStorage/spacenet/mosaic_8band/013022223103.tif'
- srcImage = gdal.Open(rasterFile)
- geoTrans, poly, ulX, ulY, lrX, lrY = getRasterExtent(srcImage)
- rasterFileBase = os.path.basename(rasterFile)
- if outputDirectory=="":
- outputDirectory=os.path.dirname(rasterFile)
- transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = createUTMTransform(poly)
- poly.Transform(transform_WGS84_To_UTM)
- env = poly.GetEnvelope()
- minX = env[0]
- minY = env[2]
- maxX = env[1]
- maxY = env[3]
-
- #return poly to WGS84
- poly.Transform(transform_UTM_To_WGS84)
-
- shapeSrc = ogr.Open(shapeFileSrc)
- outline = ogr.Open(outlineSrc)
- layer = outline.GetLayer()
- for feature in layer:
- geom = feature.GetGeometryRef()
-
- for llX in np.arange(minX, maxX, clipSizeMX):
- for llY in np.arange(minY, maxY, clipSizeMY):
- uRX = llX+clipSizeMX
- uRY = llY+clipSizeMY
-
- polyCut = createPolygonFromCorners(llX, llY, uRX, uRY)
- polyCut.Transform(transform_UTM_To_WGS84)
- if (polyCut).Intersection(geom):
- print "Do it."
- envCut = polyCut.GetEnvelope()
- minXCut = envCut[0]
- minYCut = envCut[2]
- maxXCut = envCut[1]
- maxYCut = envCut[3]
- outputFileName = os.path.join(outputDirectory, outputPrefix+rasterFileBase.replace('.tif', "_{}_{}.tif".format(minXCut,minYCut)))
- ## Clip Image
- subprocess.call(["gdalwarp", "-te", "{}".format(minXCut), "{}".format(minYCut),
- "{}".format(maxXCut), "{}".format(maxYCut), rasterFile, outputFileName])
- outGeoJSon = outputFileName.replace('.tif', '.geojson')
- ### Clip poly to cust to Raster Extent
- polyVectorCut=polyCut.Intersection(poly)
- clipShapeFile(shapeSrc, outputFileName, polyVectorCut)
- #subprocess.call(["ogr2ogr", "-f", "ESRI Shapefile",
- # "-spat", "{}".format(minXCut), "{}".format(minYCut),
- # "{}".format(maxXCut), "{}".format(maxYCut), "-clipsrc", outGeoJSon, shapeFileSrc])
- ## ClipShapeFile
- else:
- print "Ain't nobody got time for that!"
-
-
diff --git a/python/spaceNet/__init__.py b/python/spaceNetUtilities/__init__.py
similarity index 100%
rename from python/spaceNet/__init__.py
rename to python/spaceNetUtilities/__init__.py
diff --git a/python/spaceNetUtilities/__init__.pyc b/python/spaceNetUtilities/__init__.pyc
new file mode 100644
index 0000000..ab294b9
Binary files /dev/null and b/python/spaceNetUtilities/__init__.pyc differ
diff --git a/python/spaceNetUtilities/dataTools.py b/python/spaceNetUtilities/dataTools.py
new file mode 100644
index 0000000..9a408eb
--- /dev/null
+++ b/python/spaceNetUtilities/dataTools.py
@@ -0,0 +1,191 @@
+import geoTools as gT
+import numpy as np
+import os
+from osgeo import ogr
+import cv2
+from math import ceil
+import math
+
+
+
+def chipImage(rasterFileName, shapeFile, outputDirectory='', numPixelWidth=30,
+ finalImageSize = 256, rotationList=np.array([]), outputFormat='.png', rotateNorth=False,
+ numPixelSetByLine='', windowSize = 'static',
+ truthFile='', sortbyCount='Yes', outputFile=''
+ ):
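+ # Hedged usage sketch (the paths below are hypothetical, not part of this repo):
+ #   chipImage('/path/to/AOI_1_RIO_3band.tif', '/path/to/boatLabels.geojson',
+ #             outputDirectory='/path/to/chips', numPixelWidth=30,
+ #             finalImageSize=256, rotateNorth=True)
+ # Each feature in shapeFile is chipped around its centroid; the layer is assumed to carry
+ # a 'compassDeg' field, which drives the rotate-to-north option below.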
+ baseName, fileExt = os.path.splitext(rasterFileName)
+ baseDir = os.path.dirname(rasterFileName)
+ baseName = os.path.basename(baseName)
+
+ # Create output directories
+ if outputDirectory == '':
+ outputDirectory = os.path.join(baseDir, baseName)
+ if not os.path.isdir(outputDirectory):
+ os.mkdir(outputDirectory)
+
+ originalDirectory = os.path.join(outputDirectory, 'noRotation')
+ if not os.path.isdir(originalDirectory):
+ os.mkdir(originalDirectory)
+ if rotateNorth:
+ rotateNorthDirectory = os.path.join(outputDirectory, 'rotateNorth')
+ if not os.path.isdir(rotateNorthDirectory):
+ os.mkdir(rotateNorthDirectory)
+ if rotationList.size != 0:
+ rotateListDirectory = []
+ for idx, rotateNum in enumerate(rotationList):
+ rotateListDirectory.append(os.path.join(outputDirectory, 'rot_' + str(int(math.ceil(rotateNum)))))
+ if not os.path.isdir(rotateListDirectory[idx]):
+ os.mkdir(rotateListDirectory[idx])
+
+ fullimg = cv2.imread(rasterFileName)
+
+ # Import Source Shape File
+ shapef = ogr.Open(shapeFile, 0)
+ chipLayer = shapef.GetLayer()
+
+ # create Truth File
+ if truthFile != '':
+ shapeT = ogr.Open(truthFile)
+ truthLayer = shapeT.GetLayer()
+ else:
+ # create an output datasource in memory
+ outdriver = ogr.GetDriverByName('MEMORY')
+ source = outdriver.CreateDataSource('memData')
+
+ # open the memory datasource with write access
+ tmp = outdriver.Open('memData', 1)
+ pipes_mem = source.CopyLayer(chipLayer, 'truthLayer', ['OVERWRITE=YES'])
+ truthLayer = source.GetLayer('truthLayer')
+ chipLayer.ResetReading()
+
+ idx = 0
+ imgShape = fullimg.shape
+ for tmpFeature in chipLayer:
+ tmpGeom = tmpFeature.GetGeometryRef()
+ angFromNor = tmpFeature.GetField('compassDeg')
+ # label = tmpFeature.GetField("id")
+ idx = idx + 1
+ # if label is None:
+ # label = 0
+ # Convert the layer extent to image pixel coordinates
+ centroidX, centroidY, centroidZ = tmpGeom.Centroid().GetPoint()
+ envelope = tmpGeom.GetEnvelope() #minX, maxX, minY, maxY
+
+ # latlon2pixel expects (lat, lon); GetEnvelope() returns (minX, maxX, minY, maxY) = (lon, lon, lat, lat)
+ cX1, cY1 = gT.latlon2pixel(envelope[2], envelope[0], rasterFileName)
+ cX2, cY2 = gT.latlon2pixel(envelope[3], envelope[1], rasterFileName)
+
+ if windowSize == 'static':
+ numPixelWidth = numPixelWidth
+ elif windowSize == 'step4':
+ numPixelWidth = ceil((math.sqrt(abs(cY2 - cY1) ** 2 + abs(cX2 - cX1) ** 2) * 1.3) / 4) * 4
+
+ elif windowSize == 'step8':
+ numPixelWidth = ceil((math.sqrt(abs(cY2 - cY1) ** 2 + abs(cX2 - cX1) ** 2) * 1.3) / 8) * 8
+ elif windowSize == 'adjust':
+ numPixelWidth = ceil(math.sqrt(abs(cY2 - cY1) ** 2 + abs(cX2 - cX1) ** 2) * 1.3)
+
+ if numPixelWidth == 0.0:
+ continue
+ cX, cY = gT.latlon2pixel(centroidY, centroidX, rasterFileName)
+
+
+ if ((cX > numPixelWidth and cX < imgShape[1] - numPixelWidth) and
+ (cY > numPixelWidth and cY < imgShape[0] - numPixelWidth)):
+
+ ulX = cX + ceil(numPixelWidth)
+ lrX = cX - ceil(numPixelWidth)
+ ulY = cY + ceil(numPixelWidth)
+ lrY = cY - ceil(numPixelWidth)
+
+ # geoUtils.pixelToLonLat(xPi)
+ polyGeom = gT.returnBoundBox(cX, cY, numPixelWidth,
+ rasterFileName, targetSR='')
+ # calculate # of features within object
+
+ truthLayer.SetSpatialFilter(polyGeom)
+ countInBox = math.ceil(truthLayer.GetFeatureCount())
+ label = countInBox
+
+ # Calculate the pixel size of the new image
+ print(idx)
+ res = fullimg[lrY:ulY, lrX:ulX, :]
+ resImgShape = res.shape
+ ulXdst = resImgShape[1] / 2 + ceil(numPixelWidth / 2)
+ lrXdst = resImgShape[1] / 2 - ceil(numPixelWidth / 2)
+ ulYdst = resImgShape[0] / 2 + ceil(numPixelWidth / 2)
+ lrYdst = resImgShape[0] / 2 - ceil(numPixelWidth / 2)
+
+ # Save the original (un-rotated) chip
+ if finalImageSize == -1:
+ dstFinal = res[lrYdst:ulYdst, lrXdst:ulXdst, :]
+
+ else:
+ print(finalImageSize)
+ print(numPixelWidth)
+
+ dstFinal = cv2.resize(res[lrYdst:ulYdst, lrXdst:ulXdst, :], (finalImageSize, finalImageSize),
+ interpolation=cv2.INTER_CUBIC)
+
+
+ fileName = os.path.join(originalDirectory,
+ baseName + str(int(ulX)) + '_' +
+ str(int(lrY)) +
+ '_ROT_' + str(round(angFromNor, 1)) +
+ '_label_' + str(int(label)) +
+ outputFormat)
+ cv2.imwrite(fileName, dstFinal)
+ idstring = baseName + str(int(ulX)) + '_' + str(int(lrY)) + '_ROT_' + str(
+ round(angFromNor, 1)) + '_label_' + str(int(label)) + outputFormat
+
+
+ if rotateNorth:
+
+ M = cv2.getRotationMatrix2D((resImgShape[1] / 2, resImgShape[0] / 2), angFromNor, 1)
+ dst = cv2.warpAffine(res, M, (resImgShape[1], resImgShape[0]))
+ dstRotateNorth = dst[lrYdst:ulYdst, lrXdst:ulXdst, :]
+
+ if finalImageSize == -1:
+ dstFinal = dstRotateNorth
+ else:
+ dstFinal = cv2.resize(dstRotateNorth, (finalImageSize, finalImageSize),
+ interpolation=cv2.INTER_CUBIC)
+
+ fileName = os.path.join(rotateNorthDirectory,
+ baseName + str(int(ulX)) + '_' +
+ str(int(lrY)) +
+ '_ROT_' + str(int(angFromNor)) +
+ '_label_' + str(int(label)) +
+ outputFormat)
+ cv2.imwrite(fileName, dstFinal)
+
+ if rotationList.size != 0:
+ resImgShape = res.shape
+ for idx, rotationNum in enumerate(rotationList):
+ if rotateNorth:
+ M = cv2.getRotationMatrix2D((resImgShape[1] / 2, resImgShape[0] / 2), -rotationNum, 1)
+ dstRotate = cv2.warpAffine(dst, M, (resImgShape[1], resImgShape[0]))
+ dstRotate = dstRotate[lrYdst:ulYdst, lrXdst:ulXdst, :]
+
+ else:
+ M = cv2.getRotationMatrix2D((resImgShape[1] / 2, resImgShape[0] / 2), -rotationNum, 1)
+ dstRotate = cv2.warpAffine(res, M, (resImgShape[1], resImgShape[0]))
+ dstRotate = dstRotate[lrYdst:ulYdst, lrXdst:ulXdst, :]
+
+ if finalImageSize == -1:
+ dstFinal = dstRotate
+ else:
+ dstFinal = cv2.resize(dstRotate, (finalImageSize, finalImageSize),
+ interpolation=cv2.INTER_CUBIC)
+
+ fileName = os.path.join(rotateListDirectory[idx],
+ baseName + str(int(ulX)) + '_' +
+ str(int(lrY)) +
+ '_ROT_' + str(int(rotationNum)) +
+ '_label_' + str(int(label)) +
+ outputFormat)
+ cv2.imwrite(fileName, dstFinal)
+
+ return outputDirectory
+
+
diff --git a/python/spaceNet/evalTools.py b/python/spaceNetUtilities/evalTools.py
similarity index 56%
rename from python/spaceNet/evalTools.py
rename to python/spaceNetUtilities/evalTools.py
index ca0b0a8..d1e9920 100644
--- a/python/spaceNet/evalTools.py
+++ b/python/spaceNetUtilities/evalTools.py
@@ -1,6 +1,7 @@
import numpy as np
import geoTools as gT
from osgeo import ogr
+import os
def iou(test_poly, truth_polys, truth_index=[]):
fidlistArray = []
@@ -9,34 +10,43 @@ def iou(test_poly, truth_polys, truth_index=[]):
fidlist = gT.search_rtree(test_poly, truth_index)
for fid in fidlist:
+ if not test_poly.IsValid():
+ test_poly = test_poly.Buffer(0.0)
+
intersection_result = test_poly.Intersection(truth_polys[fid])
fidlistArray.append(fid)
+
if intersection_result.GetGeometryName() == 'POLYGON' or \
intersection_result.GetGeometryName() == 'MULTIPOLYGON':
intersection_area = intersection_result.GetArea()
union_area = test_poly.Union(truth_polys[fid]).GetArea()
iou_list.append(intersection_area / union_area)
+
else:
iou_list.append(0)
else:
-
for idx, truth_poly in enumerate(truth_polys):
intersection_result = test_poly.Intersection(truth_poly)
intersection_result.GetGeometryName()
+
if intersection_result.GetGeometryName() == 'POLYGON' or \
intersection_result.GetGeometryName() == 'MULTIPOLYGON':
intersection_area = intersection_result.GetArea()
union_area = test_poly.Union(truth_poly).GetArea()
iou_list.append(intersection_area / union_area)
+
else:
+
iou_list.append(0)
- return (iou_list, fidlistArray)
+ return iou_list, fidlistArray
-def score(test_polys, truth_polys, threshold=0.5, truth_index=[]):
+def score(test_polys, truth_polys, threshold=0.5, truth_index=[],
+ resultGeoJsonName = [],
+ imageId = []):
# Define internal functions
@@ -45,6 +55,22 @@ def score(test_polys, truth_polys, threshold=0.5, truth_index=[]):
false_pos_count = 0
truth_poly_count = len(truth_polys)
+ if resultGeoJsonName:
+ if not imageId:
+ imageId = os.path.basename(os.path.splitext(resultGeoJsonName)[0])
+
+ driver = ogr.GetDriverByName('geojson')
+ if os.path.exists(resultGeoJsonName):
+ driver.DeleteDataSource(resultGeoJsonName)
+ datasource = driver.CreateDataSource(resultGeoJsonName)
+ layer = datasource.CreateLayer('buildings', geom_type=ogr.wkbPolygon)
+ field_name = ogr.FieldDefn("ImageId", ogr.OFTString)
+ field_name.SetWidth(75)
+ layer.CreateField(field_name)
+ layer.CreateField(ogr.FieldDefn("BuildingId", ogr.OFTInteger))
+ layer.CreateField(ogr.FieldDefn("IOUScore", ogr.OFTReal))
+
+
for test_poly in test_polys:
if truth_polys:
iou_list, fidlist = iou(test_poly, truth_polys, truth_index)
@@ -57,16 +83,55 @@ def score(test_polys, truth_polys, threshold=0.5, truth_index=[]):
true_pos_count += 1
truth_index.delete(fidlist[np.argmax(iou_list)], truth_polys[fidlist[np.argmax(iou_list)]].GetEnvelope())
#del truth_polys[fidlist[np.argmax(iou_list)]]
+ if resultGeoJsonName:
+ feature = ogr.Feature(layer.GetLayerDefn())
+ feature.SetField('ImageId', imageId)
+ feature.SetField('BuildingId', fidlist[np.argmax(iou_list)])
+ feature.SetField('IOUScore', maxiou)
+ feature.SetGeometry(test_poly)
+
+
else:
false_pos_count += 1
+
+ if resultGeoJsonName:
+ feature = ogr.Feature(layer.GetLayerDefn())
+ feature.SetField('ImageId', imageId)
+ feature.SetField('BuildingId', 0)
+ feature.SetField('IOUScore', maxiou)
+ feature.SetGeometry(test_poly)
+
else:
false_pos_count += 1
+
+ if resultGeoJsonName:
+ feature = ogr.Feature(layer.GetLayerDefn())
+ feature.SetField('ImageId', imageId)
+ feature.SetField('BuildingId', 0)
+ feature.SetField('IOUScore', 0)
+ feature.SetGeometry(test_poly)
+
+ if resultGeoJsonName:
+ layer.CreateFeature(feature)
+ feature.Destroy()
+
+ if resultGeoJsonName:
+ datasource.Destroy()
+
+
false_neg_count = truth_poly_count - true_pos_count
return true_pos_count, false_pos_count, false_neg_count
-def evalfunction((image_id, test_polys, truth_polys, truth_index)):
+def evalfunction((image_id, test_polys, truth_polys, truth_index),
+ resultGeoJsonName = [],
+ threshold = 0.5):
if len(truth_polys)==0:
@@ -74,7 +139,12 @@ def evalfunction((image_id, test_polys, truth_polys, truth_index)):
false_pos_count = len(test_polys)
false_neg_count = 0
else:
- true_pos_count, false_pos_count, false_neg_count = score(test_polys, truth_polys.tolist(), truth_index=truth_index)
+ true_pos_count, false_pos_count, false_neg_count = score(test_polys, truth_polys.tolist(),
+ truth_index=truth_index,
+ resultGeoJsonName=resultGeoJsonName,
+ imageId=image_id,
+ threshold=threshold
+ )
if (true_pos_count > 0):
@@ -84,7 +154,7 @@ def evalfunction((image_id, test_polys, truth_polys, truth_index)):
F1score = 2.0 * precision * recall / (precision + recall)
else:
F1score = 0
- return (F1score, true_pos_count, false_pos_count, false_neg_count)
+ return ((F1score, true_pos_count, false_pos_count, false_neg_count), image_id)
def create_eval_function_input((image_ids, (prop_polysIdList, prop_polysPoly), (sol_polysIdsList, sol_polysPoly))):
diff --git a/python/spaceNetUtilities/evalTools.pyc b/python/spaceNetUtilities/evalTools.pyc
new file mode 100644
index 0000000..1c9bf52
Binary files /dev/null and b/python/spaceNetUtilities/evalTools.pyc differ
diff --git a/python/spaceNetUtilities/geoTools.py b/python/spaceNetUtilities/geoTools.py
new file mode 100644
index 0000000..4f2346b
--- /dev/null
+++ b/python/spaceNetUtilities/geoTools.py
@@ -0,0 +1,1151 @@
+from osgeo import gdal, osr, ogr
+import numpy as np
+import os
+import csv
+import subprocess
+import math
+try:
+ import rtree
+except:
+ print("rtree not installed, Will break evaluation code")
+
+
+def import_summary_geojson(geojsonfilename, removeNoBuildings=True):
+ # driver = ogr.GetDriverByName('geojson')
+ datasource = ogr.Open(geojsonfilename, 0)
+
+ layer = datasource.GetLayer()
+ print(layer.GetFeatureCount())
+
+ buildingList = []
+ for idx, feature in enumerate(layer):
+
+ poly = feature.GetGeometryRef()
+
+ if poly:
+ if removeNoBuildings:
+ if feature.GetField('BuildingId') != -1:
+ buildingList.append({'ImageId': feature.GetField('ImageId'), 'BuildingId': feature.GetField('BuildingId'),
+ 'poly': feature.GetGeometryRef().Clone()})
+ else:
+
+ buildingList.append({'ImageId': feature.GetField('ImageId'), 'BuildingId': feature.GetField('BuildingId'),
+ 'poly': feature.GetGeometryRef().Clone()})
+
+ return buildingList
+
+
+def import_chip_geojson(geojsonfilename, ImageId=''):
+ # driver = ogr.GetDriverByName('geojson')
+ datasource = ogr.Open(geojsonfilename, 0)
+ if ImageId=='':
+ ImageId = geojsonfilename
+ layer = datasource.GetLayer()
+ print(layer.GetFeatureCount())
+
+ polys = []
+ BuildingId = 0
+ for idx, feature in enumerate(layer):
+
+ poly = feature.GetGeometryRef()
+
+ if poly:
+ BuildingId = BuildingId + 1
+ polys.append({'ImageId': ImageId, 'BuildingId': BuildingId,
+ 'poly': feature.GetGeometryRef().Clone()})
+
+ return polys
+
+
+def mergePolyList(geojsonfilename):
+
+ multipolygon = ogr.Geometry(ogr.wkbMultiPolygon)
+ datasource = ogr.Open(geojsonfilename, 0)
+
+ layer = datasource.GetLayer()
+ print(layer.GetFeatureCount())
+
+
+ for idx, feature in enumerate(layer):
+
+ poly = feature.GetGeometryRef()
+
+ if poly:
+
+ multipolygon.AddGeometry(feature.GetGeometryRef().Clone())
+
+ return multipolygon
+
+def readwktcsv(csv_path,removeNoBuildings=True):
+ #
+ # Expected csv columns: ['ImageId', 'BuildingId', 'PolygonWKT_Pix', 'PolygonWKT_Geo']
+ # Returns a list of dictionaries: {'ImageId': image_id, 'BuildingId': building_id,
+ #                                  'polyPix': poly, 'polyGeo': poly}
+ # image_id is a string, BuildingId is an integer,
+ # polyPix and polyGeo are ogr.Geometry polygons (pixel and geographic coordinates respectively)
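+ # Hedged usage sketch (the csv path is hypothetical):
+ #   buildings = readwktcsv('/path/to/AOI_2_Vegas_Train_Building_Solutions.csv')
+ #   print(buildings[0]['ImageId'], buildings[0]['BuildingId'], buildings[0]['polyPix'].GetArea())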
+
+ buildinglist = []
+ with open(csv_path, 'rb') as csvfile:
+ building_reader = csv.reader(csvfile, delimiter=',', quotechar='"')
+ next(building_reader, None) # skip the headers
+ for row in building_reader:
+
+ if removeNoBuildings:
+ if int(row[1]) != -1:
+ polyPix = ogr.CreateGeometryFromWkt(row[2])
+ polyGeo = ogr.CreateGeometryFromWkt(row[3])
+ buildinglist.append({'ImageId': row[0], 'BuildingId': int(row[1]), 'polyPix': polyPix,
+ 'polyGeo': polyGeo,
+ })
+
+ else:
+
+ polyPix = ogr.CreateGeometryFromWkt(row[2])
+ polyGeo = ogr.CreateGeometryFromWkt(row[3])
+ buildinglist.append({'ImageId': row[0], 'BuildingId': int(row[1]), 'polyPix': polyPix,
+ 'polyGeo': polyGeo,
+ })
+
+ return buildinglist
+
+
+def exporttogeojson(geojsonfilename, buildinglist):
+ #
+ # geojsonname should end with .geojson
+ # building list should be list of dictionaries
+ # list of Dictionaries {'ImageId': image_id, 'BuildingId': building_id, 'polyPix': poly,
+ # 'polyGeo': poly}
+ # image_id is a string,
+ # BuildingId is an integer,
+ # polyPix and polyGeo are ogr.Geometry polygons
+ #
+ # returns geojsonfilename
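+ # Hedged example of the expected input (values are illustrative only):
+ #   buildinglist = [{'ImageId': 'AOI_2_Vegas_img1', 'BuildingId': 1,
+ #                    'polyPix': ogr.CreateGeometryFromWkt('POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))'),
+ #                    'polyGeo': ogr.CreateGeometryFromWkt('POLYGON EMPTY')}]
+ #   exporttogeojson('/tmp/buildings.geojson', buildinglist)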
+
+ driver = ogr.GetDriverByName('geojson')
+ if os.path.exists(geojsonfilename):
+ driver.DeleteDataSource(geojsonfilename)
+ datasource = driver.CreateDataSource(geojsonfilename)
+ layer = datasource.CreateLayer('buildings', geom_type=ogr.wkbPolygon)
+ field_name = ogr.FieldDefn("ImageId", ogr.OFTString)
+ field_name.SetWidth(75)
+ layer.CreateField(field_name)
+ layer.CreateField(ogr.FieldDefn("BuildingId", ogr.OFTInteger))
+
+ # loop through buildings
+ for building in buildinglist:
+ # create feature
+ feature = ogr.Feature(layer.GetLayerDefn())
+ feature.SetField("ImageId", building['ImageId'])
+ feature.SetField("BuildingId", building['BuildingId'])
+ feature.SetGeometry(building['polyPix'])
+
+ # Create the feature in the layer (geojson)
+ layer.CreateFeature(feature)
+ # Destroy the feature to free resources
+ feature.Destroy()
+
+ datasource.Destroy()
+
+ return geojsonfilename
+
+
+def createmaskfrompolygons(polygons):
+ pass
+ ## see labelTools createRasterFromGeoJson
+
+
+def latlon2pixel(lat, lon, input_raster='', targetsr='', geom_transform=''):
+ # type: (object, object, object, object, object) -> object
+
+ sourcesr = osr.SpatialReference()
+ sourcesr.ImportFromEPSG(4326)
+
+ geom = ogr.Geometry(ogr.wkbPoint)
+ geom.AddPoint(lon, lat)
+
+ if targetsr == '':
+ src_raster = gdal.Open(input_raster)
+ targetsr = osr.SpatialReference()
+ targetsr.ImportFromWkt(src_raster.GetProjectionRef())
+ coord_trans = osr.CoordinateTransformation(sourcesr, targetsr)
+ if geom_transform == '':
+ src_raster = gdal.Open(input_raster)
+ transform = src_raster.GetGeoTransform()
+ else:
+ transform = geom_transform
+
+ x_origin = transform[0]
+ # print(x_origin)
+ y_origin = transform[3]
+ # print(y_origin)
+ pixel_width = transform[1]
+ # print(pixel_width)
+ pixel_height = transform[5]
+ # print(pixel_height)
+ geom.Transform(coord_trans)
+ # print(geom.GetPoint())
+ x_pix = (geom.GetPoint()[0] - x_origin) / pixel_width
+ y_pix = (geom.GetPoint()[1] - y_origin) / pixel_height
+
+ return (x_pix, y_pix)
+
+
+def returnBoundBox(xOff, yOff, pixDim, inputRaster, targetSR='', pixelSpace=False):
+ # Returns a square polygon centered on pixel (xOff, yOff) with a side length of pixDim pixels.
+ # Leave targetSR as the empty string '' (defaults to EPSG:4326) or pass an osr.SpatialReference(), e.g.:
+ # targetSR = osr.SpatialReference()
+ # targetSR.ImportFromEPSG(4326)
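+ # Hedged usage sketch (raster path is hypothetical): a 100x100-pixel box centered on
+ # pixel (250, 250). Coordinates come back in the raster's native georeference
+ # (lon/lat for imagery stored in EPSG:4326); pass pixelSpace=True to stay in pixel space.
+ #   box = returnBoundBox(250, 250, 100, '/path/to/3band_AOI_1.tif')
+ #   boxPix = returnBoundBox(250, 250, 100, '', pixelSpace=True)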
+ if targetSR == '':
+ targetSR = osr.SpatialReference()
+ targetSR.ImportFromEPSG(4326)
+ xCord = [xOff - pixDim / 2, xOff - pixDim / 2, xOff + pixDim / 2,
+ xOff + pixDim / 2, xOff - pixDim / 2]
+ yCord = [yOff - pixDim / 2, yOff + pixDim / 2, yOff + pixDim / 2,
+ yOff - pixDim / 2, yOff - pixDim / 2]
+
+ ring = ogr.Geometry(ogr.wkbLinearRing)
+ for idx in xrange(len(xCord)):
+ if pixelSpace == False:
+ geom = pixelToGeoCoord(xCord[idx], yCord[idx], inputRaster)
+ ring.AddPoint(geom[0], geom[1], 0)
+ else:
+ ring.AddPoint(xCord[idx], yCord[idx], 0)
+
+ poly = ogr.Geometry(ogr.wkbPolygon)
+ if pixelSpace == False:
+ poly.AssignSpatialReference(targetSR)
+
+ poly.AddGeometry(ring)
+
+ return poly
+
+def createBoxFromLine(tmpGeom, ratio=1, halfWidth=-999, transformRequired=True, transform_WGS84_To_UTM='', transform_UTM_To_WGS84=''):
+ # create Polygon Box Oriented with the line
+
+ if transformRequired:
+ if transform_WGS84_To_UTM == '':
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = createUTMTransform(tmpGeom)
+
+ tmpGeom.Transform(transform_WGS84_To_UTM)
+
+
+ # calculate the centroid
+ centroidX, centroidY, centroidZ = tmpGeom.Centroid().GetPoint()
+ lengthM = tmpGeom.Length()
+ if halfWidth ==-999:
+ halfWidth = lengthM/(2*ratio)
+
+ envelope=tmpGeom.GetPoints()
+ cX1 = envelope[0][0]
+ cY1 = envelope[0][1]
+ cX2 = envelope[1][0]
+ cY2 = envelope[1][1]
+ angRad = math.atan2(cY2-cY1,cX2-cX1)
+
+ d_X = math.cos(angRad-math.pi/2)*halfWidth
+ d_Y = math.sin(angRad-math.pi/2)*halfWidth
+
+ e_X = math.cos(angRad+math.pi/2)*halfWidth
+ e_Y = math.sin(angRad+math.pi/2)*halfWidth
+
+ ring = ogr.Geometry(ogr.wkbLinearRing)
+
+ ring.AddPoint(cX1+d_X, cY1+d_Y)
+ ring.AddPoint(cX1+e_X, cY1+e_Y)
+ ring.AddPoint(cX2+e_X, cY2+e_Y)
+ ring.AddPoint(cX2+d_X, cY2+d_Y)
+ ring.AddPoint(cX1+d_X, cY1+d_Y)
+ polyGeom = ogr.Geometry(ogr.wkbPolygon)
+ polyGeom.AddGeometry(ring)
+ areaM = polyGeom.GetArea()
+
+ if transformRequired:
+ tmpGeom.Transform(transform_UTM_To_WGS84)
+ polyGeom.Transform(transform_UTM_To_WGS84)
+
+
+ return (polyGeom, areaM, angRad, lengthM)
+
+
+def pixelToGeoCoord(xPix, yPix, inputRaster, sourceSR='', geomTransform='', targetSR=''):
+ # To guarantee lon/lat output, pass a targetSR (e.g. EPSG:4326 as below); otherwise the
+ # returned coordinates are in the image's native georeference.
+ # targetSR = osr.SpatialReference()
+ # targetSR.ImportFromEPSG(4326)
+ # Transform can be performed at the polygon level instead of pixel level
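+ # Hedged usage sketch (raster path is hypothetical):
+ #   wgs84 = osr.SpatialReference()
+ #   wgs84.ImportFromEPSG(4326)
+ #   lon, lat = pixelToGeoCoord(100, 100, '/path/to/3band_AOI_1.tif', targetSR=wgs84)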
+
+ if targetSR =='':
+ performReprojection=False
+ targetSR = osr.SpatialReference()
+ targetSR.ImportFromEPSG(4326)
+ else:
+ performReprojection=True
+
+ if geomTransform=='':
+ srcRaster = gdal.Open(inputRaster)
+ geomTransform = srcRaster.GetGeoTransform()
+
+ source_sr = osr.SpatialReference()
+ source_sr.ImportFromWkt(srcRaster.GetProjectionRef())
+
+
+
+
+ geom = ogr.Geometry(ogr.wkbPoint)
+ xOrigin = geomTransform[0]
+ yOrigin = geomTransform[3]
+ pixelWidth = geomTransform[1]
+ pixelHeight = geomTransform[5]
+
+ xCoord = (xPix * pixelWidth) + xOrigin
+ yCoord = (yPix * pixelHeight) + yOrigin
+ geom.AddPoint(xCoord, yCoord)
+
+
+ if performReprojection:
+ if sourceSR=='':
+ srcRaster = gdal.Open(inputRaster)
+ sourceSR = osr.SpatialReference()
+ sourceSR.ImportFromWkt(srcRaster.GetProjectionRef())
+ coord_trans = osr.CoordinateTransformation(sourceSR, targetSR)
+ geom.Transform(coord_trans)
+
+ return (geom.GetX(), geom.GetY())
+
+
+def geoPolygonToPixelPolygonWKT(geom, inputRaster, targetSR, geomTransform, breakMultiPolygonGeo=True,
+ pixPrecision=2):
+ # Returns Pixel Coordinate List and GeoCoordinateList
+
+ polygonPixBufferList = []
+ polygonPixBufferWKTList = []
+ polygonGeoWKTList = []
+ if geom.GetGeometryName() == 'POLYGON':
+ polygonPix = ogr.Geometry(ogr.wkbPolygon)
+ for ring in geom:
+ # GetPoint returns a tuple not a Geometry
+ ringPix = ogr.Geometry(ogr.wkbLinearRing)
+
+ for pIdx in xrange(ring.GetPointCount()):
+ lon, lat, z = ring.GetPoint(pIdx)
+ xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+
+ xPix = round(xPix, pixPrecision)
+ yPix = round(yPix, pixPrecision)
+ ringPix.AddPoint(xPix, yPix)
+
+ polygonPix.AddGeometry(ringPix)
+ polygonPixBuffer = polygonPix.Buffer(0.0)
+ polygonPixBufferList.append([polygonPixBuffer, geom])
+
+ elif geom.GetGeometryName() == 'MULTIPOLYGON':
+
+ for poly in geom:
+ polygonPix = ogr.Geometry(ogr.wkbPolygon)
+ for ring in poly:
+ # GetPoint returns a tuple not a Geometry
+ ringPix = ogr.Geometry(ogr.wkbLinearRing)
+
+ for pIdx in xrange(ring.GetPointCount()):
+ lon, lat, z = ring.GetPoint(pIdx)
+ xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+
+ xPix = round(xPix, pixPrecision)
+ yPix = round(yPix, pixPrecision)
+ ringPix.AddPoint(xPix, yPix)
+
+ polygonPix.AddGeometry(ringPix)
+ polygonPixBuffer = polygonPix.Buffer(0.0)
+ if breakMultiPolygonGeo:
+ polygonPixBufferList.append([polygonPixBuffer, poly])
+ else:
+ polygonPixBufferList.append([polygonPixBuffer, geom])
+
+ for polygonTest in polygonPixBufferList:
+ if polygonTest[0].GetGeometryName() == 'POLYGON':
+ polygonPixBufferWKTList.append([polygonTest[0].ExportToWkt(), polygonTest[1].ExportToWkt()])
+ elif polygonTest[0].GetGeometryName() == 'MULTIPOLYGON':
+ for polygonTest2 in polygonTest[0]:
+ polygonPixBufferWKTList.append([polygonTest2.ExportToWkt(), polygonTest[1].ExportToWkt()])
+
+ return polygonPixBufferWKTList
+
+def pixelWKTToGeoWKT(geomWKT, inputRaster, targetSR='', geomTransform='', breakMultiPolygonPix=False):
+ # Returns GeoCoordinateList from PixelCoordinateList
+ if geomTransform=='':
+ targetRaster = gdal.Open(inputRaster)
+ geomTransform = targetRaster.GetGeoTransform()
+
+
+ geom = ogr.CreateGeometryFromWkt(geomWKT)
+
+ polygonGeoBufferWKTList = []
+ polygonGeoBufferList = []
+ if geom.GetGeometryName() == 'POLYGON':
+ polygonGeo = ogr.Geometry(ogr.wkbPolygon)
+ for ring in geom:
+ # GetPoint returns a tuple not a Geometry
+ ringGeo = ogr.Geometry(ogr.wkbLinearRing)
+
+ for pIdx in xrange(ring.GetPointCount()):
+ xPix, yPix, zPix = ring.GetPoint(pIdx)
+ #xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+ lon, lat = pixelToGeoCoord(xPix, yPix, inputRaster=inputRaster, targetSR=targetSR, geomTransform=geomTransform)
+ ringGeo.AddPoint(lon, lat)
+
+ polygonGeo.AddGeometry(ringGeo)
+ polygonGeoBuffer = polygonGeo.Buffer(0.0)
+ polygonGeoBufferList.append([polygonGeoBuffer, geom])
+
+ elif geom.GetGeometryName() == 'MULTIPOLYGON':
+
+ for poly in geom:
+ polygonGeo = ogr.Geometry(ogr.wkbPolygon)
+ for ring in poly:
+ # GetPoint returns a tuple not a Geometry
+ ringGeo = ogr.Geometry(ogr.wkbLinearRing)
+
+ for pIdx in xrange(ring.GetPointCount()):
+ xPix, yPix, zPix = ring.GetPoint(pIdx)
+ # xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+ lon, lat = pixelToGeoCoord(xPix, yPix, inputRaster=inputRaster, targetSR=targetSR,
+ geomTransform=geomTransform)
+ ringGeo.AddPoint(lon, lat)
+
+ polygonGeo.AddGeometry(ringGeo)
+ polygonGeoBuffer = polygonGeo.Buffer(0.0)
+ if breakMultiPolygonPix:
+ polygonGeoBufferList.append([polygonGeoBuffer, poly])
+ else:
+ polygonGeoBufferList.append([polygonGeoBuffer, geom])
+
+
+ elif geom.GetGeometryName() == 'LINESTRING':
+ lineGeo = ogr.Geometry(ogr.wkbLineString)
+ for pIdx in xrange(geom.GetPointCount()):
+ xPix, yPix, zPix = geom.GetPoint(pIdx)
+ lon, lat = pixelToGeoCoord(xPix, yPix, inputRaster=inputRaster, targetSR=targetSR,
+ geomTransform=geomTransform)
+ lineGeo.AddPoint(lon, lat)
+
+ polygonGeoBufferList.append([lineGeo, geom])
+
+ elif geom.GetGeometryName() == 'POINT':
+ pointGeo = ogr.Geometry(ogr.wkbPoint)
+
+ for pIdx in xrange(geom.GetPointCount()):
+ xPix, yPix, zPix = geom.GetPoint(pIdx)
+ lon, lat = pixelToGeoCoord(xPix, yPix, inputRaster=inputRaster, targetSR=targetSR,
+ geomTransform=geomTransform)
+ pointGeo.AddPoint(lon, lat)
+
+ polygonGeoBufferList.append([pointGeo, geom])
+
+ #for polygonTest in polygonGeoBufferList:
+ # if polygonTest[0].GetGeometryName() == 'POLYGON':
+ # polygonGeoBufferWKTList.append([polygonTest[0].ExportToWkt(), polygonTest[1].ExportToWkt()])
+ # elif polygonTest[0].GetGeometryName() == 'MULTIPOLYGON':
+ # for polygonTest2 in polygonTest[0]:
+ # polygonGeoBufferWKTList.append([polygonTest2.ExportToWkt(), polygonTest[1].ExportToWkt()])
+
+
+
+ # [GeoWKT, PixWKT])
+ return polygonGeoBufferList
+
+def geoWKTToPixelWKT(geom, inputRaster, targetSR, geomTransform, pixPrecision=2):
+ # Returns Pixel Coordinate List and GeoCoordinateList
+
+ geom_list = []
+ geom_pix_wkt_list = []
+ if geom.GetGeometryName() == 'POLYGON':
+ polygonPix = ogr.Geometry(ogr.wkbPolygon)
+ for ring in geom:
+ # GetPoint returns a tuple not a Geometry
+ ringPix = ogr.Geometry(ogr.wkbLinearRing)
+
+ for pIdx in xrange(ring.GetPointCount()):
+ lon, lat, z = ring.GetPoint(pIdx)
+ xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+
+ xPix = round(xPix, pixPrecision)
+ yPix = round(yPix, pixPrecision)
+ ringPix.AddPoint(xPix, yPix)
+
+ polygonPix.AddGeometry(ringPix)
+ polygonPixBuffer = polygonPix.Buffer(0.0)
+ geom_list.append([polygonPixBuffer, geom])
+
+ elif geom.GetGeometryName() == 'MULTIPOLYGON':
+
+ for poly in geom:
+ polygonPix = ogr.Geometry(ogr.wkbPolygon)
+ for ring in poly:
+ # GetPoint returns a tuple not a Geometry
+ ringPix = ogr.Geometry(ogr.wkbLinearRing)
+
+ for pIdx in xrange(ring.GetPointCount()):
+ lon, lat, z = ring.GetPoint(pIdx)
+ xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+
+ xPix = round(xPix, pixPrecision)
+ yPix = round(yPix, pixPrecision)
+ ringPix.AddPoint(xPix, yPix)
+
+ polygonPix.AddGeometry(ringPix)
+ polygonPixBuffer = polygonPix.Buffer(0.0)
+ geom_list.append([polygonPixBuffer, geom])
+ elif geom.GetGeometryName() == 'LINESTRING':
+ line = ogr.Geometry(ogr.wkbLineString)
+ for pIdx in xrange(geom.GetPointCount()):
+ lon, lat, z = geom.GetPoint(pIdx)
+ xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+
+ xPix = round(xPix, pixPrecision)
+ yPix = round(yPix, pixPrecision)
+ line.AddPoint(xPix, yPix)
+ geom_list.append([line, geom])
+
+ elif geom.GetGeometryName() == 'POINT':
+ point = ogr.Geometry(ogr.wkbPoint)
+ for pIdx in xrange(geom.GetPointCount()):
+ lon, lat, z = geom.GetPoint(pIdx)
+ xPix, yPix = latlon2pixel(lat, lon, inputRaster, targetSR, geomTransform)
+
+ xPix = round(xPix, pixPrecision)
+ yPix = round(yPix, pixPrecision)
+ point.AddPoint(xPix, yPix)
+ geom_list.append([point, geom])
+
+
+ for polygonTest in geom_list:
+ if polygonTest[0].GetGeometryName() == 'POLYGON' or \
+ polygonTest[0].GetGeometryName() == 'LINESTRING' or \
+ polygonTest[0].GetGeometryName() == 'POINT':
+ geom_pix_wkt_list.append([polygonTest[0].ExportToWkt(), polygonTest[1].ExportToWkt()])
+ elif polygonTest[0].GetGeometryName() == 'MULTIPOLYGON':
+ for polygonTest2 in polygonTest[0]:
+ geom_pix_wkt_list.append([polygonTest2.ExportToWkt(), polygonTest[1].ExportToWkt()])
+
+ return geom_pix_wkt_list
+
+
+def convert_wgs84geojson_to_pixgeojson(wgs84geojson, inputraster, image_id=[], pixelgeojson=[], only_polygons=True,
+ breakMultiPolygonGeo=True, pixPrecision=2):
+ dataSource = ogr.Open(wgs84geojson, 0)
+ layer = dataSource.GetLayer()
+ #print(layer.GetFeatureCount())
+ building_id = 0
+ # check if geoJsonisEmpty
+ buildinglist = []
+ if not image_id:
+ image_id = inputraster.replace(".tif", "")
+
+ if layer.GetFeatureCount() > 0:
+
+ if len(inputraster)>0:
+ if os.path.isfile(inputraster):
+ srcRaster = gdal.Open(inputraster)
+ targetSR = osr.SpatialReference()
+ targetSR.ImportFromWkt(srcRaster.GetProjectionRef())
+ geomTransform = srcRaster.GetGeoTransform()
+
+
+
+
+ for feature in layer:
+
+ geom = feature.GetGeometryRef()
+ if len(inputraster)>0:
+ ## Calculate 3 band
+ if only_polygons:
+ geom_wkt_list = geoPolygonToPixelPolygonWKT(geom, inputraster, targetSR, geomTransform,
+ breakMultiPolygonGeo=breakMultiPolygonGeo,
+ pixPrecision=pixPrecision)
+ else:
+ geom_wkt_list = geoWKTToPixelWKT(geom, inputraster, targetSR, geomTransform,
+ pixPrecision=pixPrecision)
+
+ for geom_wkt in geom_wkt_list:
+ building_id += 1
+ buildinglist.append({'ImageId': image_id,
+ 'BuildingId': building_id,
+ 'polyGeo': ogr.CreateGeometryFromWkt(geom_wkt[1]),
+ 'polyPix': ogr.CreateGeometryFromWkt(geom_wkt[0])
+ })
+ else:
+ building_id += 1
+ buildinglist.append({'ImageId': image_id,
+ 'BuildingId': building_id,
+ 'polyGeo': ogr.CreateGeometryFromWkt(geom.ExportToWkt()),
+ 'polyPix': ogr.CreateGeometryFromWkt('POLYGON EMPTY')
+ })
+ else:
+ #print("no File exists")
+ pass
+ if pixelgeojson:
+ exporttogeojson(pixelgeojson, buildinglist=buildinglist)
+
+
+
+ return buildinglist
+
+def convert_pixgwktList_to_wgs84wktList(inputRaster, wktPolygonPixList):
+ ## returns # [[GeoWKT, PixWKT], ...]
+ wgs84WKTList=[]
+ if os.path.isfile(inputRaster):
+ srcRaster = gdal.Open(inputRaster)
+ targetSR = osr.SpatialReference()
+ targetSR.ImportFromWkt(srcRaster.GetProjectionRef())
+ geomTransform = srcRaster.GetGeoTransform()
+
+ for wktPolygonPix in wktPolygonPixList:
+ geom_wkt_list = pixelWKTToGeoWKT(wktPolygonPix, inputRaster, targetSR='',
+ geomTransform=geomTransform,
+ breakMultiPolygonPix=False)
+
+ wgs84WKTList.extend(geom_wkt_list)
+
+ # [[GeoWKT, PixWKT], ...]
+ return wgs84WKTList
+
+def create_rtreefromdict(buildinglist):
+ # create index
+ index = rtree.index.Index(interleaved=False)
+ for idx, building in enumerate(buildinglist):
+ index.insert(idx, building['poly'].GetEnvelope())
+
+ return index
+
+
+def create_rtree_from_poly(poly_list):
+ # create index
+ index = rtree.index.Index(interleaved=False)
+ for idx, building in enumerate(poly_list):
+ index.insert(idx, building.GetEnvelope())
+
+ return index
+
+
+def search_rtree(test_building, index):
+ # input test poly ogr.Geometry and rtree index
+ if test_building.GetGeometryName() == 'POLYGON' or \
+ test_building.GetGeometryName() == 'MULTIPOLYGON':
+ fidlist = index.intersection(test_building.GetEnvelope())
+ else:
+ fidlist = []
+
+ return fidlist
+
+
+def get_envelope(poly):
+ env = poly.GetEnvelope()
+
+ # Get Envelope returns a tuple (minX, maxX, minY, maxY)
+ # Create ring
+ ring = ogr.Geometry(ogr.wkbLinearRing)
+ ring.AddPoint(env[0], env[2])
+ ring.AddPoint(env[0], env[3])
+ ring.AddPoint(env[1], env[3])
+ ring.AddPoint(env[1], env[2])
+ ring.AddPoint(env[0], env[2])
+
+ # Create polygon
+ poly1 = ogr.Geometry(ogr.wkbPolygon)
+ poly1.AddGeometry(ring)
+
+ return poly1
+
+def utm_getZone(longitude):
+ return (int(1+(longitude+180.0)/6.0))
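+ # Worked example: longitude -77.0 gives int(1 + (-77.0 + 180.0) / 6.0) = int(18.17) = 18,
+ # i.e. UTM zone 18.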
+
+
+def utm_isNorthern(latitude):
+ if (latitude < 0.0):
+ return 0
+ else:
+ return 1
+
+
+def createUTMTransform(polyGeom):
+ # pt = polyGeom.Boundary().GetPoint()
+ utm_zone = utm_getZone(polyGeom.GetEnvelope()[0])
+ is_northern = utm_isNorthern(polyGeom.GetEnvelope()[2])
+ utm_cs = osr.SpatialReference()
+ utm_cs.SetWellKnownGeogCS('WGS84')
+ utm_cs.SetUTM(utm_zone, is_northern);
+ wgs84_cs = osr.SpatialReference()
+ wgs84_cs.ImportFromEPSG(4326)
+
+ transform_WGS84_To_UTM = osr.CoordinateTransformation(wgs84_cs, utm_cs)
+ transform_UTM_To_WGS84 = osr.CoordinateTransformation(utm_cs, wgs84_cs)
+
+ return transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs
+
+
+def getRasterExtent(srcImage):
+ geoTrans = srcImage.GetGeoTransform()
+ ulX = geoTrans[0]
+ ulY = geoTrans[3]
+ xDist = geoTrans[1]
+ yDist = geoTrans[5]
+ rtnX = geoTrans[2]
+ rtnY = geoTrans[4]
+
+ cols = srcImage.RasterXSize
+ rows = srcImage.RasterYSize
+
+ lrX = ulX + xDist * cols
+ lrY = ulY + yDist * rows
+
+ # Create ring
+ ring = ogr.Geometry(ogr.wkbLinearRing)
+ ring.AddPoint(lrX, lrY)
+ ring.AddPoint(lrX, ulY)
+ ring.AddPoint(ulX, ulY)
+ ring.AddPoint(ulX, lrY)
+ ring.AddPoint(lrX, lrY)
+
+ # Create polygon
+ poly = ogr.Geometry(ogr.wkbPolygon)
+ poly.AddGeometry(ring)
+
+ return geoTrans, poly, ulX, ulY, lrX, lrY
+
+def createPolygonFromCenterPoint(cX,cY, radiusMeters, transform_WGS_To_UTM_Flag=True):
+
+ point = ogr.Geometry(ogr.wkbPoint)
+ point.AddPoint(cX, cY)
+
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = createUTMTransform(point)
+ if transform_WGS_To_UTM_Flag:
+ point.Transform(transform_WGS84_To_UTM)
+
+ poly = point.Buffer(radiusMeters)
+
+ if transform_WGS_To_UTM_Flag:
+ poly.Transform(transform_UTM_To_WGS84)
+
+ return poly
+
+
+def createPolygonFromCorners(lrX,lrY,ulX, ulY):
+ # Create ring
+ ring = ogr.Geometry(ogr.wkbLinearRing)
+ ring.AddPoint(lrX, lrY)
+ ring.AddPoint(lrX, ulY)
+ ring.AddPoint(ulX, ulY)
+ ring.AddPoint(ulX, lrY)
+ ring.AddPoint(lrX, lrY)
+
+ # Create polygon
+ poly = ogr.Geometry(ogr.wkbPolygon)
+ poly.AddGeometry(ring)
+
+ return poly
+
+
+def clipShapeFile(shapeSrc, outputFileName, polyToCut, minpartialPerc=0.0, shapeLabel='Geo', debug=False):
+
+ source_layer = shapeSrc.GetLayer()
+ source_srs = source_layer.GetSpatialRef()
+ # Create the output Layer
+
+ outGeoJSon = os.path.splitext(outputFileName)[0] + '.geojson'
+ if not os.path.exists(os.path.dirname(outGeoJSon)):
+ os.makedirs(os.path.dirname(outGeoJSon))
+
+ outDriver = ogr.GetDriverByName("geojson")
+ if os.path.exists(outGeoJSon):
+ outDriver.DeleteDataSource(outGeoJSon)
+
+ if debug:
+ outGeoJSonDebug = outputFileName.replace('.tif', 'outline.geojson')
+ outDriverDebug = ogr.GetDriverByName("geojson")
+ if os.path.exists(outGeoJSonDebug):
+ outDriverDebug.DeleteDataSource(outGeoJSonDebug)
+ outDataSourceDebug = outDriver.CreateDataSource(outGeoJSonDebug)
+ outLayerDebug = outDataSourceDebug.CreateLayer("groundTruth", source_srs, geom_type=ogr.wkbPolygon)
+
+ outFeatureDebug = ogr.Feature(source_layer.GetLayerDefn())
+ outFeatureDebug.SetGeometry(polyToCut)
+ outLayerDebug.CreateFeature(outFeatureDebug)
+
+
+ outDataSource = outDriver.CreateDataSource(outGeoJSon)
+ outLayer = outDataSource.CreateLayer("groundTruth", source_srs, geom_type=ogr.wkbPolygon)
+ # Add input Layer Fields to the output Layer
+ inLayerDefn = source_layer.GetLayerDefn()
+ for i in range(0, inLayerDefn.GetFieldCount()):
+ fieldDefn = inLayerDefn.GetFieldDefn(i)
+ outLayer.CreateField(fieldDefn)
+ outLayer.CreateField(ogr.FieldDefn("partialBuilding", ogr.OFTReal))
+ outLayer.CreateField(ogr.FieldDefn("partialDec", ogr.OFTReal))
+ outLayerDefn = outLayer.GetLayerDefn()
+ source_layer.SetSpatialFilter(polyToCut)
+ for inFeature in source_layer:
+
+ outFeature = ogr.Feature(outLayerDefn)
+
+ for i in range (0, inLayerDefn.GetFieldCount()):
+ outFeature.SetField(inLayerDefn.GetFieldDefn(i).GetNameRef(), inFeature.GetField(i))
+
+ geom = inFeature.GetGeometryRef()
+ geomNew = geom.Intersection(polyToCut)
+ partialDec = -1
+ if geomNew:
+
+ if geomNew.GetGeometryName()=='POINT':
+ outFeature.SetField("partialDec", 1)
+ outFeature.SetField("partialBuilding", 1)
+ else:
+
+ if geom.GetArea() > 0:
+ partialDec = geomNew.GetArea() / geom.GetArea()
+ else:
+ partialDec = 0
+
+ outFeature.SetField("partialDec", partialDec)
+
+ if geom.GetArea() == geomNew.GetArea():
+ outFeature.SetField("partialBuilding", 0)
+ else:
+ outFeature.SetField("partialBuilding", 1)
+
+
+ else:
+ outFeature.SetField("partialBuilding", 1)
+ outFeature.SetField("partialBuilding", 1)
+
+
+ outFeature.SetGeometry(geomNew)
+ if partialDec >= minpartialPerc:
+ outLayer.CreateFeature(outFeature)
+ #print ("AddFeature")
+
+
+def cutChipFromMosaic(rasterFileList, shapeFileSrcList, outlineSrc='',outputDirectory='', outputPrefix='clip_',
+ clipSizeMX=100, clipSizeMY=100, clipOverlap=0.0, minpartialPerc=0.0, createPix=False,
+ baseName='',
+ imgIdStart=-1,
+ parrallelProcess=False,
+ noBlackSpace=False,
+ randomClip=-1):
+ #rasterFileList = [['rasterLocation', 'rasterDescription']]
+ # e.g. rasterFileList = [['/path/to/3band_AOI_1.tif', '3band'],
+ #                        ['/path/to/8band_AOI_1.tif', '8band']]
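+ # Hedged usage sketch (paths are hypothetical):
+ #   cutChipFromMosaic([['/path/to/3band_AOI_1.tif', '3band'],
+ #                      ['/path/to/8band_AOI_1.tif', '8band']],
+ #                     [['/path/to/buildings_AOI_1.geojson', 'buildings']],
+ #                     outputDirectory='/path/to/chips',
+ #                     clipSizeMX=200, clipSizeMY=200, clipOverlap=0.0, createPix=False)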
+ # open Base Image
+ #print(rasterFileList[0][0])
+ srcImage = gdal.Open(rasterFileList[0][0])
+ geoTrans, poly, ulX, ulY, lrX, lrY = getRasterExtent(srcImage)
+ # geoTrans[1] w-e pixel resolution
+ # geoTrans[5] n-s pixel resolution
+ if outputDirectory=="":
+ outputDirectory=os.path.dirname(rasterFileList[0][0])
+
+ rasterFileBaseList = []
+ for rasterFile in rasterFileList:
+ rasterFileBaseList.append(os.path.basename(rasterFile[0]))
+
+ if not createPix:
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = createUTMTransform(poly)
+ poly.Transform(transform_WGS84_To_UTM)
+
+ env = poly.GetEnvelope()
+ minX = env[0]
+ minY = env[2]
+ maxX = env[1]
+ maxY = env[3]
+
+ #return poly to WGS84
+ if not createPix:
+ poly.Transform(transform_UTM_To_WGS84)
+
+ shapeSrcList = []
+ for shapeFileSrc in shapeFileSrcList:
+ print(shapeFileSrc[1])
+ shapeSrcList.append([ogr.Open(shapeFileSrc[0],0), shapeFileSrc[1]])
+
+
+ if outlineSrc == '':
+ geomOutline = poly
+ else:
+ outline = ogr.Open(outlineSrc)
+ layer = outline.GetLayer()
+ featureOutLine = layer.GetFeature(0)
+ geomOutlineBase = featureOutLine.GetGeometryRef()
+ geomOutline = geomOutlineBase.Intersection(poly)
+
+ chipSummaryList = []
+
+ for rasterFile in rasterFileList:
+ if not os.path.exists(os.path.join(outputDirectory, rasterFile[1])):
+ os.makedirs(os.path.join(outputDirectory, rasterFile[1]))
+ idx = 0
+ if createPix:
+ print(geoTrans)
+ clipSizeMX=clipSizeMX*geoTrans[1]
+ clipSizeMY=abs(clipSizeMY*geoTrans[5])
+
+ xInterval = np.arange(minX, maxX, clipSizeMX*(1.0-clipOverlap))
+ print('minY = {}'.format(minY))
+ print('maxY = {}'.format(maxY))
+ print('clipsizeMX ={}'.format(clipSizeMX))
+ print('clipsizeMY ={}'.format(clipSizeMY))
+
+ yInterval = np.arange(minY, maxY, clipSizeMY*(1.0-clipOverlap))
+ print(xInterval)
+ print(yInterval)
+ for llX in xInterval:
+ if parrallelProcess:
+ for llY in yInterval:
+ pass
+
+ else:
+ for llY in yInterval:
+ uRX = llX+clipSizeMX
+ uRY = llY+clipSizeMY
+
+ # check if uRX or uRY is outside image
+ if noBlackSpace:
+ if uRX > maxX:
+ uRX = maxX
+ llX = maxX - clipSizeMX
+ if uRY > maxY:
+ uRY = maxY
+ llY = maxY - clipSizeMY
+
+ polyCut = createPolygonFromCorners(llX, llY, uRX, uRY)
+
+ if not createPix:
+ polyCut.Transform(transform_UTM_To_WGS84)
+ ## add debug line do cuts
+ if (polyCut).Intersects(geomOutline):
+ print("Do it.")
+ #envCut = polyCut.GetEnvelope()
+ #minXCut = envCut[0]
+ #minYCut = envCut[2]
+ #maxXCut = envCut[1]
+ #maxYCut = envCut[3]
+
+ #debug for real
+ minXCut = llX
+ minYCut = llY
+ maxXCut = uRX
+ maxYCut = uRY
+
+ #print('minYCut = {}'.format(minYCut))
+ #print('maxYCut = {}'.format(maxYCut))
+ #print('minXCut = {}'.format(minXCut))
+ #print('maxXCut = {}'.format(maxXCut))
+
+ #print('clipsizeMX ={}'.format(clipSizeMX))
+ #print('clipsizeMY ={}'.format(clipSizeMY))
+
+
+
+ idx = idx+1
+ if imgIdStart == -1:
+ imgId = -1
+ else:
+ imgId = idx
+
+ chipSummary = createclip(outputDirectory, rasterFileList, shapeSrcList,
+ maxXCut, maxYCut, minYCut, minXCut,
+ rasterFileBaseList=rasterFileBaseList,
+ minpartialPerc=minpartialPerc,
+ outputPrefix=outputPrefix,
+ createPix=createPix,
+ rasterPolyEnvelope=poly,
+ baseName=baseName,
+ imgId=imgId)
+ chipSummaryList.append(chipSummary)
+
+ return chipSummaryList
+
+def createclip(outputDirectory, rasterFileList, shapeSrcList,
+ maxXCut, maxYCut, minYCut, minXCut,
+ rasterFileBaseList=[],
+ minpartialPerc=0,
+ outputPrefix='',
+ createPix=False,
+ rasterPolyEnvelope=ogr.CreateGeometryFromWkt("POLYGON EMPTY"),
+ className='',
+ baseName='',
+ imgId=-1):
+
+ #rasterFileList = [['rasterLocation', 'rasterDescription']]
+ # e.g. rasterFileList = [['/path/to/3band_AOI_1.tif', '3band'],
+ #                        ['/path/to/8band_AOI_1.tif', '8band']]
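+ # createclip is normally driven by cutChipFromMosaic above; a hedged standalone call with
+ # hypothetical bounds and paths would look like:
+ #   createclip('/path/to/chips', rasterFileList, shapeSrcList,
+ #              maxXCut, maxYCut, minYCut, minXCut, baseName='AOI_1', imgId=1)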
+
+ polyCutWGS = createPolygonFromCorners(minXCut, minYCut, maxXCut, maxYCut)
+
+
+ if not rasterFileBaseList:
+ rasterFileBaseList = []
+ for rasterFile in rasterFileList:
+ rasterFileBaseList.append(os.path.basename(rasterFile[0]))
+
+ if rasterPolyEnvelope == '':
+ pass
+
+ chipNameList = []
+ for rasterFile in rasterFileList:
+ if className == '':
+ if imgId==-1:
+ chipNameList.append(outputPrefix + rasterFile[1] +
+ "_" + baseName + "_{}_{}.tif".format(minXCut, minYCut))
+ else:
+ chipNameList.append(outputPrefix + rasterFile[1] +
+ "_" + baseName + "_img{}.tif".format(imgId))
+ else:
+ if imgId==-1:
+ chipNameList.append(outputPrefix + className + "_" +
+ rasterFile[1] + "_" + baseName + "_{}_{}.tif".format(minXCut, minYCut))
+ else:
+ chipNameList.append(outputPrefix + className + '_' +
+ rasterFile[1] + "_" + baseName + "_img{}.tif".format(imgId))
+
+ # clip raster
+
+ for chipName, rasterFile in zip(chipNameList, rasterFileList):
+ outputFileName = os.path.join(outputDirectory, rasterFile[1], className, chipName)
+ ## Clip Image
+ print(rasterFile)
+ print(outputFileName)
+ subprocess.call(["gdalwarp", "-te", "{}".format(minXCut), "{}".format(minYCut), "{}".format(maxXCut),
+ "{}".format(maxYCut),
+ rasterFile[0], outputFileName])
+
+ baseLayerRasterName = os.path.join(outputDirectory, rasterFileList[0][1], className, chipNameList[0])
+ outputFileName = os.path.join(outputDirectory, rasterFileList[0][1], chipNameList[0])
+
+
+ ### Clip poly to cust to Raster Extent
+ if rasterPolyEnvelope.GetArea() == 0:
+ srcImage = gdal.Open(rasterFileList[0][0])
+ geoTrans, rasterPolyEnvelope, ulX, ulY, lrX, lrY = getRasterExtent(srcImage)
+ polyVectorCut = polyCutWGS.Intersection(rasterPolyEnvelope)
+ else:
+ polyVectorCut = polyCutWGS.Intersection(rasterPolyEnvelope)
+
+ # Iterate through the vector source list
+ for shapeSrc in shapeSrcList:
+ if imgId == -1:
+ outGeoJson = outputPrefix + shapeSrc[1] + \
+ "_" + baseName + "_{}_{}.geojson".format(minXCut, minYCut)
+ else:
+ outGeoJson = outputPrefix + shapeSrc[1] + \
+ "_" + baseName + "_img{}.geojson".format(imgId)
+
+ outGeoJson = os.path.join(outputDirectory, 'geojson', shapeSrc[1], outGeoJson)
+
+ clipShapeFile(shapeSrc[0], outGeoJson, polyVectorCut, minpartialPerc=minpartialPerc)
+
+
+ chipSummary = {'rasterSource': baseLayerRasterName,
+ 'chipName': chipNameList[0],
+ 'geoVectorName': outGeoJson,
+ 'pixVectorName': ''
+ }
+
+ return chipSummary
+
+def cutChipFromRasterCenter(rasterFileList, shapeFileSrc, outlineSrc='',
+ outputDirectory='', outputPrefix='clip_',
+ clipSizeMeters=50, createPix=False,
+ classFieldName = 'TYPE',
+ minpartialPerc=0.1,
+ ):
+ #rasterFileList = [['rasterLocation', 'rasterDescription']]
+ # e.g. rasterFileList = [['/path/to/3band_AOI_1.tif', '3band'],
+ #                        ['/path/to/8band_AOI_1.tif', '8band']]
+ srcImage = gdal.Open(rasterFileList[0][0])
+ geoTrans, poly, ulX, ulY, lrX, lrY = getRasterExtent(srcImage)
+
+ if outputDirectory == "":
+ outputDirectory = os.path.dirname(rasterFileList[0][0])
+
+ rasterFileBaseList = []
+ for rasterFile in rasterFileList:
+ rasterFileBaseList.append(os.path.basename(rasterFile[0]))
+
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = createUTMTransform(poly)
+ poly.Transform(transform_WGS84_To_UTM)
+ env = poly.GetEnvelope()
+
+ # return poly to WGS84
+ poly.Transform(transform_UTM_To_WGS84)
+
+ shapeSrc = ogr.Open(shapeFileSrc, 0)
+ if outlineSrc == '':
+ geomOutline = poly
+ else:
+ outline = ogr.Open(outlineSrc)
+ layer = outline.GetLayer()
+ featureOutLine = layer.GetFeature(0)
+ geomOutlineBase = featureOutLine.GetGeometryRef()
+ geomOutline = geomOutlineBase.Intersection(poly)
+
+ shapeSrcBase = ogr.Open(shapeFileSrc, 0)
+ layerBase = shapeSrcBase.GetLayer()
+ layerBase.SetSpatialFilter(geomOutline)
+ for rasterFile in rasterFileList:
+ if not os.path.exists(os.path.join(outputDirectory, rasterFile[1])):
+ os.makedirs(os.path.join(outputDirectory, rasterFile[1]))
+ for feature in layerBase:
+ featureGeom = feature.GetGeometryRef()
+ cx, cy, cz = featureGeom.Centroid().GetPoint()
+ polyCut = createPolygonFromCenterPoint(cx, cy, radiusMeters=clipSizeMeters)
+ print(classFieldName)
+ classDescription = feature.GetField(classFieldName)
+ classDescription = classDescription.replace(" ","")
+ envCut = polyCut.GetEnvelope()
+ minXCut = envCut[0]
+ minYCut = envCut[2]
+ maxXCut = envCut[1]
+ maxYCut = envCut[3]
+ createclip(outputDirectory, rasterFileList, shapeSrc,
+ maxXCut, maxYCut, minYCut, minXCut,
+ rasterFileBaseList=rasterFileBaseList,
+ minpartialPerc=minpartialPerc,
+ outputPrefix=outputPrefix,
+ createPix=createPix,
+ rasterPolyEnvelope=poly,
+ className=classDescription)
+
+
+
+def rotateClip(clipFileName, sourceGeoJson, rotationList=[0, 90, 180, 275]):
+ # placeholder: will append "_{}.ext".format(rotationList[i]) to the clip file name
+ pass
+
+
+
+def createMaskedMosaic(input_raster, output_raster, outline_file):
+
+ subprocess.call(["gdalwarp", "-q", "-cutline", outline_file, "-of", "GTiff", input_raster, output_raster,
+ '-wo', 'OPTIMIZE_SIZE=YES',
+ '-co', 'COMPRESS=JPEG',
+ '-co', 'PHOTOMETRIC=YCBCR',
+ '-co', 'TILED=YES'])
+
+
+
diff --git a/python/spaceNetUtilities/geoTools.pyc b/python/spaceNetUtilities/geoTools.pyc
new file mode 100644
index 0000000..9ae04e8
Binary files /dev/null and b/python/spaceNetUtilities/geoTools.pyc differ
diff --git a/python/spaceNetUtilities/labelTools.py b/python/spaceNetUtilities/labelTools.py
new file mode 100644
index 0000000..e040e81
--- /dev/null
+++ b/python/spaceNetUtilities/labelTools.py
@@ -0,0 +1,1075 @@
+from osgeo import gdal, osr, ogr, gdalnumeric
+import numpy as np
+import os
+import geoTools as gT
+import math
+import cPickle as pickle
+import csv
+import glob
+from PIL import Image
+from xml.etree.ElementTree import Element, SubElement, Comment, tostring
+from xml.etree import ElementTree
+from xml.dom import minidom
+import subprocess
+import scipy.io
+from scipy.sparse import csr_matrix
+import json
+import re
+
+
+def evaluateLineStringPlane(geom, label='Airplane'):
+ ring = ogr.Geometry(ogr.wkbLinearRing)
+
+ for i in range(0, geom.GetPointCount()):
+ # GetPoint returns a tuple not a Geometry
+ pt = geom.GetPoint(i)
+ ring.AddPoint(pt[0], pt[1])
+ pt = geom.GetPoint(0)
+ ring.AddPoint(pt[0], pt[1])
+ poly = ogr.Geometry(ogr.wkbPolygon)
+ poly.AddGeometry(ring)
+
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = gT.createUTMTransform(geom)
+ geom.Transform(transform_WGS84_To_UTM)
+ pt0 = geom.GetPoint(0) # Tail
+ pt1 = geom.GetPoint(1) # Wing
+ pt2 = geom.GetPoint(2) # Nose
+ pt3 = geom.GetPoint(3) # Wing
+ Length = math.sqrt((pt2[0]-pt0[0])**2 + (pt2[1]-pt0[1])**2)
+ Width = math.sqrt((pt3[0] - pt1[0])**2 + (pt3[1] - pt1[1])**2)
+ Aspect = Length/Width
+ Direction = (math.atan2(pt2[0]-pt0[0], pt2[1]-pt0[1])*180/math.pi) % 360
+
+
+ geom.Transform(transform_UTM_To_WGS84)
+
+ return [poly, Length, Width, Aspect, Direction]
+
+def evaluateLineStringBoat(geom, label='Boat', aspectRatio=3):
+
+
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = gT.createUTMTransform(geom)
+
+ geom.Transform(transform_WGS84_To_UTM)
+ pt0 = geom.GetPoint(0) # Stern
+ pt1 = geom.GetPoint(1) # Bow
+ Length = math.sqrt((pt1[0]-pt0[0])**2 + (pt1[1]-pt0[1])**2)
+ Direction = (math.atan2(pt1[0]-pt0[0], pt1[1]-pt0[1])*180/math.pi) % 360
+ geom.Transform(transform_UTM_To_WGS84)
+
+ poly, areaM, angRad, lengthM = gT.createBoxFromLine(geom, aspectRatio,
+ transformRequired=True,
+ transform_WGS84_To_UTM=transform_WGS84_To_UTM,
+ transform_UTM_To_WGS84=transform_UTM_To_WGS84)
+
+ Width = Length/aspectRatio
+ Aspect = aspectRatio
+
+ return [poly, Length, Width, Aspect, Direction]
+
+
+def convertLabelStringToPoly(shapeFileSrc, outGeoJSon, labelType='Airplane'):
+
+ shapeSrc = ogr.Open(shapeFileSrc)
+ source_layer = shapeSrc.GetLayer()
+ source_srs = source_layer.GetSpatialRef()
+ # Create the output Layer
+ outDriver = ogr.GetDriverByName("geojson")
+ if os.path.exists(outGeoJSon):
+ outDriver.DeleteDataSource(outGeoJSon)
+
+
+ outDataSource = outDriver.CreateDataSource(outGeoJSon)
+ outLayer = outDataSource.CreateLayer("groundTruth", source_srs, geom_type=ogr.wkbPolygon)
+ # Add input Layer Fields to the output Layer
+ inLayerDefn = source_layer.GetLayerDefn()
+ for i in range(0, inLayerDefn.GetFieldCount()):
+ fieldDefn = inLayerDefn.GetFieldDefn(i)
+ outLayer.CreateField(fieldDefn)
+ outLayer.CreateField(ogr.FieldDefn("Length_m", ogr.OFTReal))
+ outLayer.CreateField(ogr.FieldDefn("Width_m", ogr.OFTReal))
+ outLayer.CreateField(ogr.FieldDefn("Aspect(L/W)", ogr.OFTReal))
+ outLayer.CreateField(ogr.FieldDefn("compassDeg", ogr.OFTReal))
+
+ outLayerDefn = outLayer.GetLayerDefn()
+ for inFeature in source_layer:
+
+ outFeature = ogr.Feature(outLayerDefn)
+
+ for i in range(0, inLayerDefn.GetFieldCount()):
+ outFeature.SetField(inLayerDefn.GetFieldDefn(i).GetNameRef(), inFeature.GetField(i))
+
+ geom = inFeature.GetGeometryRef()
+ if labelType == 'Airplane':
+ poly, Length, Width, Aspect, Direction = evaluateLineStringPlane(geom, label='Airplane')
+ elif labelType == 'Boat':
+ poly, Length, Width, Aspect, Direction = evaluateLineStringBoat(geom, label='Boat')
+
+ outFeature.SetGeometry(poly)
+ outFeature.SetField("Length_m", Length)
+ outFeature.SetField("Width_m", Width)
+ outFeature.SetField("Aspect(L/W)", Aspect)
+ outFeature.SetField("compassDeg", Direction)
+
+ outLayer.CreateFeature(outFeature)
+
+
+def createTruthPixelLinePickle(truthLineFile, pickleLocation=''):
+ if pickleLocation=='':
+ extension = os.path.splitext(truthLineFile)[1]
+ pickleLocation = truthLineFile.replace(extension, 'Pixline.p')
+ if truthLineFile != '':
+ # get Source Line File Information
+ shapef = ogr.Open(truthLineFile, 0)
+ truthLayer = shapef.GetLayer()
+ pt1X = []
+ pt1Y = []
+ pt2X = []
+ pt2Y = []
+ for tmpFeature in truthLayer:
+ tmpGeom = tmpFeature.GetGeometryRef()
+ for i in range(0, tmpGeom.GetPointCount()):
+ pt = tmpGeom.GetPoint(i)
+
+ if i == 0:
+ pt1X.append(pt[0])
+ pt1Y.append(pt[1])
+ elif i == 1:
+ pt2X.append(pt[0])
+ pt2Y.append(pt[1])
+
+ lineData = {'pt1X': np.asarray(pt1X),
+ 'pt1Y': np.asarray(pt1Y),
+ 'pt2X': np.asarray(pt2X),
+ 'pt2Y': np.asarray(pt2Y)
+ }
+
+ with open(pickleLocation, 'wb') as f:
+ pickle.dump(lineData, f)
+ # get Source Line File Information
+
+
+def createTruthPixelPolyPickle(truthPoly, pickleLocation=''):
+ # returns dictionary with list of minX, maxX, minY, maxY
+
+ if pickleLocation=='':
+ extension = os.path.splitext(truthPoly)[1]
+ pickleLocation = truthPoly.replace(extension, 'PixPoly.p')
+ if truthPoly != '':
+ # get Source Line File Information
+ shapef = ogr.Open(truthPoly, 0)
+ truthLayer = shapef.GetLayer()
+ envList = []
+
+ for tmpFeature in truthLayer:
+ tmpGeom = tmpFeature.GetGeometryRef()
+            env = tmpGeom.GetEnvelope()
+ envList.append(env)
+
+ envArray = np.asarray(envList)
+ envelopeData = {'minX': envArray[:,0],
+ 'maxX': envArray[:,1],
+ 'minY': envArray[:,2],
+ 'maxY': envArray[:,3]
+ }
+
+
+ with open(pickleLocation, 'wb') as f:
+ pickle.dump(envelopeData, f)
+ # get Source Line File Information
+
+
+def createNPPixArrayDist(rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
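+    """Return a signed per-pixel distance array for the features in vectorSrc:
+    positive inside rasterized features, negative outside, optionally saved to
+    npDistFileName and scaled to meters when units='meters' (otherwise pixels)."""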
+
+    ## open the source vector file that contains the truth data
+ source_ds = ogr.Open(vectorSrc)
+ source_layer = source_ds.GetLayer()
+
+ ## extract data from src Raster File to be emulated
+ ## open raster file that is to be emulated
+ srcRas_ds = gdal.Open(rasterSrc)
+ cols = srcRas_ds.RasterXSize
+ rows = srcRas_ds.RasterYSize
+ noDataValue = 0
+
+ if units=='meters':
+ geoTrans, poly, ulX, ulY, lrX, lrY = gT.getRasterExtent(srcRas_ds)
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = gT.createUTMTransform(poly)
+ line = ogr.Geometry(ogr.wkbLineString)
+ line.AddPoint(geoTrans[0], geoTrans[3])
+ line.AddPoint(geoTrans[0]+geoTrans[1], geoTrans[3])
+
+ line.Transform(transform_WGS84_To_UTM)
+ metersIndex = line.Length()
+ else:
+ metersIndex = 1
+
+ ## create First raster memory layer
+ memdrv = gdal.GetDriverByName('MEM')
+ dst_ds = memdrv.Create('', cols, rows, 1, gdal.GDT_Byte)
+ dst_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ dst_ds.SetProjection(srcRas_ds.GetProjection())
+ band = dst_ds.GetRasterBand(1)
+ band.SetNoDataValue(noDataValue)
+
+ gdal.RasterizeLayer(dst_ds, [1], source_layer, burn_values=[255])
+ srcBand = dst_ds.GetRasterBand(1)
+
+ memdrv2 = gdal.GetDriverByName('MEM')
+ prox_ds = memdrv2.Create('', cols, rows, 1, gdal.GDT_Int16)
+ prox_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ prox_ds.SetProjection(srcRas_ds.GetProjection())
+ proxBand = prox_ds.GetRasterBand(1)
+ proxBand.SetNoDataValue(noDataValue)
+
+ options = ['NODATA=0']
+
+ gdal.ComputeProximity(srcBand, proxBand, options)
+
+ memdrv3 = gdal.GetDriverByName('MEM')
+ proxIn_ds = memdrv3.Create('', cols, rows, 1, gdal.GDT_Int16)
+ proxIn_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ proxIn_ds.SetProjection(srcRas_ds.GetProjection())
+ proxInBand = proxIn_ds.GetRasterBand(1)
+ proxInBand.SetNoDataValue(noDataValue)
+ options = ['NODATA=0', 'VALUES=0']
+ gdal.ComputeProximity(srcBand, proxInBand, options)
+
+ proxIn = gdalnumeric.BandReadAsArray(proxInBand)
+ proxOut = gdalnumeric.BandReadAsArray(proxBand)
+
+ proxTotal = proxIn.astype(float) - proxOut.astype(float)
+ proxTotal = proxTotal*metersIndex
+
+ if npDistFileName != '':
+ np.save(npDistFileName, proxTotal)
+
+ return proxTotal
+
+
+def createGeoJSONFromRaster(geoJsonFileName, array2d, geom, proj,
+ layerName="BuildingID",
+ fieldName="BuildingID"):
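+    # Polygonize the 2D label array into a GeoJSON layer, writing each polygon's
+    # pixel value into the fieldName attribute.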
+
+ memdrv = gdal.GetDriverByName('MEM')
+ src_ds = memdrv.Create('', array2d.shape[1], array2d.shape[0], 1)
+ src_ds.SetGeoTransform(geom)
+ src_ds.SetProjection(proj)
+ band = src_ds.GetRasterBand(1)
+ band.WriteArray(array2d)
+
+ dst_layername = "BuildingID"
+ drv = ogr.GetDriverByName("geojson")
+ dst_ds = drv.CreateDataSource(geoJsonFileName)
+ dst_layer = dst_ds.CreateLayer(layerName, srs=None)
+
+ fd = ogr.FieldDefn(fieldName, ogr.OFTInteger)
+ dst_layer.CreateField(fd)
+    dst_field = dst_layer.GetLayerDefn().GetFieldIndex(fieldName)
+
+ gdal.Polygonize(band, None, dst_layer, dst_field, [], callback=None)
+
+ return
+
+
+def createCSVSummaryFile(chipSummaryList, outputFileName, rasterChipDirectory='', replaceImageID='',
+ createProposalsFile=False,
+ pixPrecision=2):
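+    # Writes one CSV row per building polygon. Columns are ImageId, BuildingId,
+    # PolygonWKT_Pix and either Confidence (proposals file) or PolygonWKT_Geo (truth file);
+    # chips with no buildings get a single POLYGON EMPTY row with BuildingId -1.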
+
+
+ with open(outputFileName, 'wb') as csvfile:
+ writerTotal = csv.writer(csvfile, delimiter=',', lineterminator='\n')
+ if createProposalsFile:
+ writerTotal.writerow(['ImageId', 'BuildingId', 'PolygonWKT_Pix', 'Confidence'])
+ else:
+ writerTotal.writerow(['ImageId', 'BuildingId', 'PolygonWKT_Pix', 'PolygonWKT_Geo'])
+
+ for chipSummary in chipSummaryList:
+ chipName = chipSummary['chipName']
+ print(chipName)
+ geoVectorName = chipSummary['geoVectorName']
+ #pixVectorName = chipSummary['pixVectorName']
+ buildingList = gT.convert_wgs84geojson_to_pixgeojson(geoVectorName,
+ os.path.join(rasterChipDirectory, chipName),
+ pixPrecision=pixPrecision)
+
+ if len(buildingList) > 0:
+ for building in buildingList:
+ imageId = os.path.basename(building['ImageId']).replace(replaceImageID, "")
+ if createProposalsFile:
+ writerTotal.writerow([imageId, building['BuildingId'],
+ building['polyPix'], 1])
+ else:
+ writerTotal.writerow([imageId, building['BuildingId'],
+ building['polyPix'], building['polyGeo']])
+ else:
+ imageId = os.path.splitext(os.path.basename(chipName))[0].replace(replaceImageID, "")
+ if createProposalsFile:
+ writerTotal.writerow([imageId, -1,
+ 'POLYGON EMPTY', 1])
+ else:
+ writerTotal.writerow([imageId, -1,
+ 'POLYGON EMPTY', 'POLYGON EMPTY'])
+
+
+def createCSVSummaryFileFromJsonList(geoJsonList, outputFileName, chipnameList=[],
+ input='Geo',
+ replaceImageID=''):
+
+    # Note: this will not create pixel inputs; there is currently no raster input.
+ if chipnameList:
+ pass
+ else:
+ for geoJson in geoJsonList:
+ chipnameList.append(os.path.basename(os.path.splitext(geoJson)[0]))
+
+
+
+ with open(outputFileName, 'wb') as csvfile:
+ writerTotal = csv.writer(csvfile, delimiter=',', lineterminator='\n')
+
+ writerTotal.writerow(['ImageId', 'BuildingId', 'PolygonWKT_Pix', 'PolygonWKT_Geo'])
+
+ for geoVectorName, chipName in zip(geoJsonList, chipnameList):
+ try:
+ buildingList = gT.convert_wgs84geojson_to_pixgeojson(geoVectorName, '',
+ image_id=chipName)
+ if len(buildingList) > 0:
+ for building in buildingList:
+ imageId = os.path.basename(building['ImageId']).replace(replaceImageID,"")
+ writerTotal.writerow([imageId, building['BuildingId'],
+ building['polyPix'], building['polyGeo']])
+ else:
+ imageId = os.path.splitext(os.path.basename(chipName))[0].replace(replaceImageID,"")
+                    writerTotal.writerow([imageId, -1, 'POLYGON EMPTY', 'POLYGON EMPTY'])
+            except Exception:
+                # skip geojson files that fail to convert
+                pass
+
+
+
+def createCSVSummaryFromDirectory(geoJsonDirectory, rasterFileDirectoryList,
+ aoi_num=0,
+ aoi_name='TEST',
+ outputDirectory=''):
+ if outputDirectory == '':
+        outputDirectory = geoJsonDirectory
+ outputbaseName = "AOI_{}_{}_polygons_solution".format(aoi_num, aoi_name)
+ #rasterFileDirectoryList = [
+ # ['/usr/local/share/data/AOI_1_RIO/processed2/3band', '3band', '*.tif'],
+ # ['/usr/local/share/data/AOI_1_RIO/processed2/8band', '8band', '*.tif']
+ # ]
+
+ geoJsonList = glob.glob(os.path.join(geoJsonDirectory, '*.geojson'))
+
+
+ jsonList = []
+ chipSummaryList8band = []
+ chipSummaryList3band = []
+ # AOI_2_RIO_3Band_img997.tif
+ # AOI_2_RIO_img635.geojson
+ chipsSummaryList = []
+ for idx, rasterFile in enumerate(rasterFileDirectoryList):
+        chipsSummaryList.append([])
+
+
+ for imageId in geoJsonList:
+ imageId = os.path.basename(imageId)
+
+ for idx, rasterFile in enumerate(rasterFileDirectoryList):
+ bandName = imageId.replace('.geojson', '.tif')
+ bandName = bandName.replace('Geo_', rasterFile[1]+'_')
+            print(imageId)
+            print(os.path.join(rasterFile[0], bandName))
+ chipSummaryBand = {'chipName': os.path.join(rasterFile[0], bandName),
+ 'geoVectorName': os.path.join(geoJsonDirectory, imageId),
+ 'imageId': os.path.splitext(imageId)[0]}
+
+ chipsSummaryList[idx].append(chipSummaryBand)
+
+
+    print("starting")
+ for idx, rasterFile in enumerate(rasterFileDirectoryList):
+ createCSVSummaryFile(chipsSummaryList[idx], os.path.join(outputDirectory,
+ outputbaseName+'_'+rasterFile[1]+'.csv'),
+ replaceImageID=rasterFile[1]+'_')
+
+
+    print("finished")
+
+
+
+def createRasterFromGeoJson(srcGeoJson, srcRasterFileName, outRasterFileName):
+ NoData_value = 0
+ source_ds = ogr.Open(srcGeoJson)
+ source_layer = source_ds.GetLayer()
+
+ srcRaster = gdal.Open(srcRasterFileName)
+
+
+ # Create the destination data source
+ target_ds = gdal.GetDriverByName('GTiff').Create(outRasterFileName, srcRaster.RasterXSize, srcRaster.RasterYSize, 1, gdal.GDT_Byte)
+ target_ds.SetGeoTransform(srcRaster.GetGeoTransform())
+ target_ds.SetProjection(srcRaster.GetProjection())
+ band = target_ds.GetRasterBand(1)
+ band.SetNoDataValue(NoData_value)
+
+ # Rasterize
+ gdal.RasterizeLayer(target_ds, [1], source_layer, burn_values=[1])
+ band.FlushCache()
+
+
+
+
+def createAOIName(AOI_Name, AOI_Num,
+ srcImageryListOrig,
+ srcVectorAOIFile,
+ srcVectorFileList,
+ outputDirectory,
+ clipImageryToAOI=True,
+ windowSizeMeters=200,
+ clipOverlap=0.0,
+ minpartialPerc=0.0,
+ vrtMosaic=True,
+ createPix=False,
+ createSummaryCSVChallenge=True,
+ csvLabel='All',
+ featureName='Buildings'):
+
+ srcImageryList = []
+ if clipImageryToAOI:
+
+
+ for srcImagery in srcImageryListOrig:
+
+ print(srcImagery)
+ AOI_HighResMosaicName = os.path.join(outputDirectory, 'AOI_{}_{}_{}.vrt'.format(AOI_Num, AOI_Name, srcImagery[1]))
+ if vrtMosaic:
+ AOI_HighResMosaicClipName = AOI_HighResMosaicName.replace('.vrt', 'clipped.vrt')
+ else:
+ AOI_HighResMosaicClipName = AOI_HighResMosaicName.replace('.vrt', 'clipped.TIF')
+
+ if os.path.isfile(AOI_HighResMosaicClipName):
+ os.remove(AOI_HighResMosaicClipName)
+
+ if vrtMosaic:
+ command = 'gdalwarp -of VRT ' + \
+ '-cutline ' + srcVectorAOIFile + ' ' + \
+ srcImagery[0] + ' ' + \
+ AOI_HighResMosaicClipName
+ else:
+ command = 'gdalwarp ' + \
+ '-cutline ' + srcVectorAOIFile + ' ' + \
+ srcImagery[0] + ' ' + \
+ AOI_HighResMosaicClipName
+ print(command)
+ os.system(command)
+ srcImageryList.append([AOI_HighResMosaicClipName, srcImagery[1]])
+
+
+ else:
+ srcImageryList = srcImageryListOrig
+
+
+ # rasterFileList = [['rasterLocation', 'rasterDescription']]
+ # i.e rasterFileList = [['/path/to/3band_AOI_1.tif, '3band'],
+ # ['/path/to/8band_AOI_1.tif, '8band']
+ # ]
+ chipSummaryList = gT.cutChipFromMosaic(srcImageryList, srcVectorFileList, outlineSrc=srcVectorAOIFile,
+ outputDirectory=outputDirectory, outputPrefix='',
+ clipSizeMX=windowSizeMeters, clipSizeMY=windowSizeMeters, clipOverlap=clipOverlap,
+ minpartialPerc=minpartialPerc, createPix=createPix,
+ baseName='AOI_{}_{}'.format(AOI_Num, AOI_Name),
+ imgIdStart=1)
+
+
+ outputCSVSummaryName = 'AOI_{}_{}_{}_{}_solutions.csv'.format(AOI_Num, AOI_Name, csvLabel,featureName)
+ createCSVSummaryFile(chipSummaryList, outputCSVSummaryName, rasterChipDirectory='', replaceImageID='',
+ createProposalsFile=False,
+ pixPrecision=2)
+
+
+
+
+def prettify(elem):
+ """Return a pretty-printed XML string for the Element.
+ """
+ rough_string = ElementTree.tostring(elem, 'utf-8')
+ reparsed = minidom.parseString(rough_string)
+ return reparsed.toprettyxml(indent=" ")
+
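+# Example (hypothetical): prettify(Element('annotation')) yields a string along the
+# lines of '<?xml version="1.0" ?>\n<annotation/>\n'.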
+
+
+def geoJsonToPASCALVOC2012(xmlFileName, geoJson, rasterImageName, im_id='',
+ dataset ='SpaceNet',
+ folder_name='spacenet',
+ annotationStyle = 'PASCAL VOC2012',
+ segment=True,
+ bufferSizePix=2.5,
+ convertTo8Bit=True,
+ outputPixType='Byte',
+ outputFormat='GTiff'):
+
+ print("creating {}".format(xmlFileName))
+ buildingList = gT.convert_wgs84geojson_to_pixgeojson(geoJson, rasterImageName, image_id=[], pixelgeojson=[], only_polygons=True,
+ breakMultiPolygonGeo=True, pixPrecision=2)
+ # buildinglist.append({'ImageId': image_id,
+ #'BuildingId': building_id,
+ #'polyGeo': ogr.CreateGeometryFromWkt(geom.ExportToWkt()),
+ #'polyPix': ogr.CreateGeometryFromWkt('POLYGON EMPTY')
+ #})
+
+
+
+
+
+ srcRaster = gdal.Open(rasterImageName)
+ outputRaster = rasterImageName
+ if convertTo8Bit:
+ cmd = ['gdal_translate', '-ot', outputPixType, '-of', outputFormat]
+ scaleList = []
+ for bandId in range(srcRaster.RasterCount):
+ bandId = bandId+1
+            band = srcRaster.GetRasterBand(bandId)
+            bandMin = band.GetMinimum()
+            bandMax = band.GetMaximum()
+
+            # compute statistics if the band has no precomputed min/max
+            if bandMin is None or bandMax is None:
+                (bandMin, bandMax) = band.ComputeRasterMinMax(1)
+            # scale each band from [0, bandMax] to [0, 255]
+            cmd.append('-scale_{}'.format(bandId))
+            cmd.append('{}'.format(0))
+            cmd.append('{}'.format(bandMax))
+            cmd.append('{}'.format(0))
+            cmd.append('{}'.format(255))
+
+ cmd.append(rasterImageName)
+
+ if outputFormat == 'JPEG':
+ outputRaster = xmlFileName.replace('.xml', '.jpg')
+ else:
+ outputRaster = xmlFileName.replace('.xml', '.tif')
+
+ outputRaster = outputRaster.replace('_img', '_8bit_img')
+ cmd.append(outputRaster)
+ print(cmd)
+ subprocess.call(cmd)
+
+
+ if segment:
+ segmented = 1 # 1=True, 0 = False
+ else:
+ segmented = 0
+
+ top = Element('annotation')
+ ## write header
+ childFolder = SubElement(top, 'folder')
+ childFolder.text = dataset
+ childFilename = SubElement(top, 'filename')
+ childFilename.text = rasterImageName
+
+ # write source block
+ childSource = SubElement(top, 'source')
+ SubElement(childSource, 'database').text = dataset
+ SubElement(childSource, 'annotation').text = annotationStyle
+
+ # write size block
+ childSize = SubElement(top, 'size')
+ SubElement(childSize, 'width').text = str(srcRaster.RasterXSize)
+ SubElement(childSize, 'height').text = str(srcRaster.RasterYSize)
+ SubElement(childSize, 'depth').text = str(srcRaster.RasterCount)
+
+ SubElement(top, 'segmented').text = str(segmented)
+
+ # start object segment
+ for building in buildingList:
+ objectType = 'building'
+ objectPose = 'Left'
+ objectTruncated = 0 # 1=True, 0 = False
+ objectDifficulty = 0 # 0 Easy - 3 Hard
+ # Get Envelope returns a tuple (minX, maxX, minY, maxY)
+ env = building['polyPix'].GetEnvelope()
+
+ childObject = SubElement(top, 'object')
+ SubElement(childObject, 'name').text = objectType
+ SubElement(childObject, 'pose').text = objectPose
+ SubElement(childObject, 'truncated').text = str(objectTruncated)
+ SubElement(childObject, 'difficult').text = str(objectDifficulty)
+ # write bounding box
+ childBoundBox = SubElement(childObject, 'bndbox')
+ SubElement(childBoundBox, 'xmin').text = str(int(round(env[0])))
+ SubElement(childBoundBox, 'ymin').text = str(int(round(env[2])))
+ SubElement(childBoundBox, 'xmax').text = str(int(round(env[1])))
+ SubElement(childBoundBox, 'ymax').text = str(int(round(env[3])))
+
+ with open(xmlFileName, 'w') as f:
+ f.write(prettify(top))
+
+
+ print('creating segmentation')
+ if segment:
+ NoData_value = -9999
+
+ source_ds = ogr.Open(geoJson)
+ source_layer = source_ds.GetLayer()
+ srs = source_layer.GetSpatialRef()
+ memDriver = ogr.GetDriverByName('MEMORY')
+ outerBuffer=memDriver.CreateDataSource('outer')
+ outerBufferLayer = outerBuffer.CreateLayer("test", srs, geom_type=ogr.wkbPolygon)
+ innerBuffer = memDriver.CreateDataSource('inner')
+ innerBufferLayer = innerBuffer.CreateLayer("test2", srs, geom_type=ogr.wkbPolygon)
+
+ idField = ogr.FieldDefn("objid", ogr.OFTInteger)
+ innerBufferLayer.CreateField(idField)
+
+ featureDefn = innerBufferLayer.GetLayerDefn()
+ bufferDist = srcRaster.GetGeoTransform()[1]*bufferSizePix
+ for idx, feature in enumerate(source_layer):
+ ingeom = feature.GetGeometryRef()
+ geomBufferOut = ingeom.Buffer(bufferDist)
+ geomBufferIn = ingeom.Buffer(-bufferDist)
+ print(geomBufferIn.ExportToWkt())
+ print(geomBufferIn.IsEmpty())
+ print(geomBufferIn.IsSimple())
+
+ if not geomBufferIn.IsEmpty():
+ outBufFeature = ogr.Feature(featureDefn)
+ outBufFeature.SetGeometry(geomBufferOut)
+
+ outerBufferLayer.CreateFeature(outBufFeature)
+
+ inBufFeature = ogr.Feature(featureDefn)
+ inBufFeature.SetGeometry(geomBufferIn)
+ inBufFeature.SetField('objid', idx)
+ innerBufferLayer.CreateFeature(inBufFeature)
+
+ outBufFeature = None
+ inBufFeature = None
+
+
+
+ print('writing GTIFF sgcls')
+ print('rasterToWrite = {}'.format(xmlFileName.replace('.xml', 'segcls.tif')))
+ target_ds = gdal.GetDriverByName('GTiff').Create(xmlFileName.replace('.xml', 'segcls.tif'), srcRaster.RasterXSize, srcRaster.RasterYSize, 1, gdal.GDT_Byte)
+ print('setTransform')
+ target_ds.SetGeoTransform(srcRaster.GetGeoTransform())
+ print('setProjection')
+ target_ds.SetProjection(srcRaster.GetProjection())
+ print('getBand')
+ band = target_ds.GetRasterBand(1)
+ print('setnodata')
+ band.SetNoDataValue(NoData_value)
+
+ # Rasterize
+ print('rasterize outer buffer')
+ gdal.RasterizeLayer(target_ds, [1], outerBufferLayer, burn_values=[255])
+ print('rasterize inner buffer')
+ gdal.RasterizeLayer(target_ds, [1], innerBufferLayer, burn_values=[100])
+ print('writing png sgcls')
+ # write to .png
+ imageArray = np.array(target_ds.GetRasterBand(1).ReadAsArray())
+ im = Image.fromarray(imageArray)
+ im.save(xmlFileName.replace('.xml', 'segcls.png'))
+
+ print('writing GTIFF sgobj')
+ ## create objectSegment
+ target_ds = gdal.GetDriverByName('GTiff').Create(xmlFileName.replace('.xml', 'segobj.tif'),
+ srcRaster.RasterXSize, srcRaster.RasterYSize, 1, gdal.GDT_Byte)
+ target_ds.SetGeoTransform(srcRaster.GetGeoTransform())
+ target_ds.SetProjection(srcRaster.GetProjection())
+ band = target_ds.GetRasterBand(1)
+
+ band.SetNoDataValue(NoData_value)
+
+ # Rasterize
+ gdal.RasterizeLayer(target_ds, [1], outerBufferLayer, burn_values=[255])
+ gdal.RasterizeLayer(target_ds, [1], innerBufferLayer, burn_values=[100], options=['ATTRIBUTE=objid'])
+ print('writing png sgobj')
+ # write to .png
+ imageArray = np.array(target_ds.GetRasterBand(1).ReadAsArray())
+ im = Image.fromarray(imageArray)
+ im.save(xmlFileName.replace('.xml', 'segobj.png'))
+
+ entry = {'rasterFileName': outputRaster,
+ 'geoJsonFileName': geoJson,
+ 'annotationName': xmlFileName,
+ 'width': srcRaster.RasterXSize,
+ 'height': srcRaster.RasterYSize,
+ 'depth' : srcRaster.RasterCount,
+ 'basename': os.path.splitext(os.path.basename(rasterImageName))[0]
+ }
+
+ return entry
+
+def convert(size, box):
+ '''Input = image size: (w,h), box: [x0, x1, y0, y1]'''
+ dw = 1./size[0]
+ dh = 1./size[1]
+ xmid = (box[0] + box[1])/2.0
+ ymid = (box[2] + box[3])/2.0
+ w0 = box[1] - box[0]
+ h0 = box[3] - box[2]
+ x = xmid*dw
+ y = ymid*dh
+ w = w0*dw
+ h = h0*dh
+
+ return (x,y,w,h)
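+
+# Worked example (hypothetical numbers): convert((400, 400), (100, 200, 50, 150))
+# returns (0.375, 0.25, 0.25, 0.25) -- the Darknet-style normalized
+# (x_center, y_center, width, height) of that bounding box.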
+
+def geoJsonToDARKNET(xmlFileName, geoJson, rasterImageName, im_id='',
+ dataset ='SpaceNet',
+ folder_name='spacenet',
+ annotationStyle = 'DARKNET',
+ segment=True,
+ bufferSizePix=2.5,
+ convertTo8Bit=True,
+ outputPixType='Byte',
+ outputFormat='GTiff'):
+ xmlFileName = xmlFileName.replace(".xml", ".txt")
+ print("creating {}".format(xmlFileName))
+
+ buildingList = gT.convert_wgs84geojson_to_pixgeojson(geoJson, rasterImageName, image_id=[], pixelgeojson=[], only_polygons=True,
+ breakMultiPolygonGeo=True, pixPrecision=2)
+ # buildinglist.append({'ImageId': image_id,
+ #'BuildingId': building_id,
+ #'polyGeo': ogr.CreateGeometryFromWkt(geom.ExportToWkt()),
+ #'polyPix': ogr.CreateGeometryFromWkt('POLYGON EMPTY')
+ #})
+
+
+ srcRaster = gdal.Open(rasterImageName)
+ outputRaster = rasterImageName
+ if convertTo8Bit:
+ cmd = ['gdal_translate', '-ot', outputPixType, '-of', outputFormat]
+ scaleList = []
+ for bandId in range(srcRaster.RasterCount):
+ bandId = bandId+1
+            band = srcRaster.GetRasterBand(bandId)
+            bandMin = band.GetMinimum()
+            bandMax = band.GetMaximum()
+
+            # compute statistics if the band has no precomputed min/max
+            if bandMin is None or bandMax is None:
+                (bandMin, bandMax) = band.ComputeRasterMinMax(1)
+            # scale each band from [0, bandMax] to [0, 255]
+            cmd.append('-scale_{}'.format(bandId))
+            cmd.append('{}'.format(0))
+            cmd.append('{}'.format(bandMax))
+            cmd.append('{}'.format(0))
+            cmd.append('{}'.format(255))
+
+ cmd.append(rasterImageName)
+ if outputFormat == 'JPEG':
+ outputRaster = xmlFileName.replace('.txt', '.jpg')
+ else:
+ outputRaster = xmlFileName.replace('.txt', '.tif')
+
+ outputRaster = outputRaster.replace('_img', '_8bit_img')
+ cmd.append(outputRaster)
+ print(cmd)
+ subprocess.call(cmd)
+
+ with open(xmlFileName, 'w') as f:
+
+ for building in buildingList:
+
+
+ # Get Envelope returns a tuple (minX, maxX, minY, maxY)
+
+ boxDim = building['polyPix'].GetEnvelope()
+ rasterSize = (srcRaster.RasterXSize, srcRaster.RasterYSize)
+
+ lineOutput = convert(rasterSize, boxDim)
+ classNum=0
+ f.write('{} {} {} {} {}\n'.format(classNum,
+ lineOutput[0],
+ lineOutput[1],
+ lineOutput[2],
+ lineOutput[3])
+ )
+
+ entry = {'rasterFileName': outputRaster,
+ 'geoJsonFileName': geoJson,
+ 'annotationName': xmlFileName,
+ 'width': srcRaster.RasterXSize,
+ 'height': srcRaster.RasterYSize,
+ 'depth' : srcRaster.RasterCount,
+ 'basename': os.path.splitext(os.path.basename(rasterImageName))[0]
+ }
+
+ return entry
+
+def createDistanceTransform(rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
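+    """Signed distance transform to feature edges (positive inside, negative outside);
+    mirrors createNPPixArrayDist above, with the intermediate steps commented."""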
+
+    ## open the source vector file that contains the truth data
+ source_ds = ogr.Open(vectorSrc)
+ source_layer = source_ds.GetLayer()
+
+ ## extract data from src Raster File to be emulated
+ ## open raster file that is to be emulated
+ srcRas_ds = gdal.Open(rasterSrc)
+ cols = srcRas_ds.RasterXSize
+ rows = srcRas_ds.RasterYSize
+ noDataValue = 0
+
+ if units=='meters':
+ geoTrans, poly, ulX, ulY, lrX, lrY = gT.getRasterExtent(srcRas_ds)
+ transform_WGS84_To_UTM, transform_UTM_To_WGS84, utm_cs = gT.createUTMTransform(poly)
+ line = ogr.Geometry(ogr.wkbLineString)
+ line.AddPoint(geoTrans[0], geoTrans[3])
+ line.AddPoint(geoTrans[0]+geoTrans[1], geoTrans[3])
+
+ line.Transform(transform_WGS84_To_UTM)
+ metersIndex = line.Length()
+ else:
+ metersIndex = 1
+
+ ## create First raster memory layer
+ memdrv = gdal.GetDriverByName('MEM')
+ dst_ds = memdrv.Create('', cols, rows, 1, gdal.GDT_Byte)
+ dst_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ dst_ds.SetProjection(srcRas_ds.GetProjection())
+ band = dst_ds.GetRasterBand(1)
+ band.SetNoDataValue(noDataValue)
+
+ gdal.RasterizeLayer(dst_ds, [1], source_layer, burn_values=[255])
+ srcBand = dst_ds.GetRasterBand(1)
+
+ memdrv2 = gdal.GetDriverByName('MEM')
+ prox_ds = memdrv2.Create('', cols, rows, 1, gdal.GDT_Int16)
+ prox_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ prox_ds.SetProjection(srcRas_ds.GetProjection())
+ proxBand = prox_ds.GetRasterBand(1)
+ proxBand.SetNoDataValue(noDataValue)
+ options = ['NODATA=0']
+
+    ## compute distance to the nearest non-zero pixel in srcBand and store it in proxBand
+ gdal.ComputeProximity(srcBand, proxBand, options)
+
+ memdrv3 = gdal.GetDriverByName('MEM')
+ proxIn_ds = memdrv3.Create('', cols, rows, 1, gdal.GDT_Int16)
+ proxIn_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ proxIn_ds.SetProjection(srcRas_ds.GetProjection())
+ proxInBand = proxIn_ds.GetRasterBand(1)
+ proxInBand.SetNoDataValue(noDataValue)
+ options = ['NODATA=0', 'VALUES=0']
+
+    ## compute distance to the nearest zero pixel in srcBand and store it in proxInBand
+ gdal.ComputeProximity(srcBand, proxInBand, options)
+
+ proxIn = gdalnumeric.BandReadAsArray(proxInBand)
+ proxOut = gdalnumeric.BandReadAsArray(proxBand)
+
+    ## the distance transform is the distance to zero pixels minus the distance to non-zero pixels
+ proxTotal = proxIn.astype(float) - proxOut.astype(float)
+ proxTotal = proxTotal*metersIndex
+
+ if npDistFileName != '':
+ np.save(npDistFileName, proxTotal)
+
+ return proxTotal
+
+
+def createClassSegmentation(rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
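+    # Binary class mask derived from the sign of the distance transform:
+    # 1 inside building footprints, 0 elsewhere.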
+ dist_trans = createDistanceTransform(rasterSrc, vectorSrc, npDistFileName='', units='pixels')
+ dist_trans[dist_trans > 0] = 1
+ dist_trans[dist_trans < 0] = 0
+ return dist_trans
+
+
+def createClassBoundaries(rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
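+    # Boundary mask: pixels within roughly +/-1 unit of a footprint edge become 1,
+    # everything else 0, returned as a sparse uint8 matrix.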
+ dist_trans = createDistanceTransform(rasterSrc, vectorSrc, npDistFileName='', units='pixels')
+ #From distance transform to boundary
+ dist_trans[dist_trans > 1.0] = 255
+ dist_trans[dist_trans < -1.0] = 255
+ dist_trans[dist_trans != 255] = 1
+ dist_trans[dist_trans == 255] = 0
+ sparse_total = csr_matrix(dist_trans)
+ return sparse_total.astype(np.uint8)
+
+
+def createClassCategoriesPresent(vectorSrc):
+ with open(vectorSrc) as my_file:
+ data = json.load(my_file)
+ if(len(data['features']) == 0):
+ return np.array([],dtype=np.uint8)
+ else:
+ return np.array([1],dtype=np.uint8)
+
+def createDistanceTransformByFeatureIndex(feature_index, rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
+    ## open the source vector file that contains the truth data
+ source_ds = ogr.Open(vectorSrc)
+ source_layer = source_ds.GetLayer()
+
+ #Define feature
+ my_feature = source_layer[feature_index]
+
+ #Spatial Reference
+ srs = source_layer.GetSpatialRef()
+
+ #Create feature Layer
+ outDriver = ogr.GetDriverByName('MEMORY')
+ outDataSource = outDriver.CreateDataSource('memData')
+ Feature_Layer = outDataSource.CreateLayer("this_feature", srs, geom_type=ogr.wkbPolygon)
+
+ #Add feature to layer
+ Feature_Layer.CreateFeature(my_feature)
+
+
+ ## extract data from src Raster File to be emulated
+ ## open raster file that is to be emulated
+ srcRas_ds = gdal.Open(rasterSrc)
+ cols = srcRas_ds.RasterXSize
+ rows = srcRas_ds.RasterYSize
+ noDataValue = 0
+ metersIndex = 1
+
+ ## create First raster memory layer
+ memdrv = gdal.GetDriverByName('MEM')
+ dst_ds = memdrv.Create('', cols, rows, 1, gdal.GDT_Byte)
+ dst_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ dst_ds.SetProjection(srcRas_ds.GetProjection())
+ band = dst_ds.GetRasterBand(1)
+ band.SetNoDataValue(noDataValue)
+
+ gdal.RasterizeLayer(dst_ds, [1], Feature_Layer, burn_values=[255])
+ srcBand = dst_ds.GetRasterBand(1)
+
+ memdrv2 = gdal.GetDriverByName('MEM')
+ prox_ds = memdrv2.Create('', cols, rows, 1, gdal.GDT_Int16)
+ prox_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ prox_ds.SetProjection(srcRas_ds.GetProjection())
+ proxBand = prox_ds.GetRasterBand(1)
+ proxBand.SetNoDataValue(noDataValue)
+
+ options = ['NODATA=0']
+
+ gdal.ComputeProximity(srcBand, proxBand, options)
+
+ memdrv3 = gdal.GetDriverByName('MEM')
+ proxIn_ds = memdrv3.Create('', cols, rows, 1, gdal.GDT_Int16)
+ proxIn_ds.SetGeoTransform(srcRas_ds.GetGeoTransform())
+ proxIn_ds.SetProjection(srcRas_ds.GetProjection())
+ proxInBand = proxIn_ds.GetRasterBand(1)
+ proxInBand.SetNoDataValue(noDataValue)
+ options = ['NODATA=0', 'VALUES=0']
+ gdal.ComputeProximity(srcBand, proxInBand, options)
+
+ proxIn = gdalnumeric.BandReadAsArray(proxInBand)
+ proxOut = gdalnumeric.BandReadAsArray(proxBand)
+
+ proxTotal = proxIn.astype(float) - proxOut.astype(float)
+ proxTotal = proxTotal*metersIndex
+
+ if npDistFileName != '':
+ np.save(npDistFileName, proxTotal)
+
+ return proxTotal
+
+def createSegmentationByFeatureIndex(feature_index, rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
+ dist_trans_by_feature = createDistanceTransformByFeatureIndex(feature_index, rasterSrc, vectorSrc, npDistFileName='', units='pixels')
+ dist_trans_by_feature[dist_trans_by_feature > 0] = feature_index + 1
+ dist_trans_by_feature[dist_trans_by_feature < 0] = 0
+ return dist_trans_by_feature.astype(np.uint8)
+
+def createInstanceSegmentation(rasterSrc, vectorSrc):
+    with open(vectorSrc) as json_data:
+        data = json.load(json_data)
+ num_features = len(data['features'])
+
+ cell_array = np.zeros((num_features,), dtype=np.object)
+ for i in range(num_features):
+ cell_array[i] = createSegmentationByFeatureIndex(i, rasterSrc, vectorSrc, npDistFileName='', units='pixels')
+ return cell_array
+
+def createBoundariesByFeatureIndex(feature_index, rasterSrc, vectorSrc, npDistFileName='', units='pixels'):
+ dist_trans_by_feature = createDistanceTransformByFeatureIndex(feature_index, rasterSrc, vectorSrc, npDistFileName='', units='pixels')
+ dist_trans_by_feature[dist_trans_by_feature > 1.0] = 255
+ dist_trans_by_feature[dist_trans_by_feature < -1.0] = 255
+ dist_trans_by_feature[dist_trans_by_feature != 255] = 1
+ dist_trans_by_feature[dist_trans_by_feature == 255] = 0
+ return dist_trans_by_feature.astype(np.uint8)
+
+def createInstanceBoundaries(rasterSrc, vectorSrc):
+    with open(vectorSrc) as json_data:
+        data = json.load(json_data)
+ num_features = len(data['features'])
+
+ cell_array = np.zeros((num_features,), dtype=np.object)
+ for i in range(num_features):
+ full_boundary_matrix = createBoundariesByFeatureIndex(i, rasterSrc, vectorSrc, npDistFileName='', units='pixels')
+ cell_array[i] = csr_matrix(full_boundary_matrix)
+ return cell_array
+
+def createInstanceCategories(vectorSrc):
+ with open(vectorSrc) as my_file:
+ data = json.load(my_file)
+ if(len(data['features']) == 0):
+ return np.array([],dtype=np.uint8)
+ else:
+ return np.ones(len(data['features']),dtype=np.uint8).reshape((len(data['features']), 1))
+
+
+
+def geoJsonToSBD(annotationName_cls, annotationName_inst, geoJson, rasterSource,
+ dataset='spacenetV2',
+ folder_name='spacenetV2',
+ annotationStyle='SBD',
+ segment=True,
+ convertTo8Bit='',
+ outputPixType='',
+ outputFormat=''
+ ):
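+    # Writes SBD-style .mat annotations: 'GTcls' holds the class-level Segmentation,
+    # Boundaries and CategoriesPresent arrays, while 'GTinst' holds the per-instance
+    # Segmentation, Boundaries and Categories.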
+
+ #Print raster file name
+ my_raster_source = rasterSource
+    print("Raster directory : {}".format(my_raster_source))
+ srcRaster = gdal.Open(my_raster_source)
+
+
+ my_vector_source = geoJson
+    print("Vector directory : {}".format(my_vector_source))
+
+
+    # Call main functions to create label data for cls
+ my_cls_segmentation = createClassSegmentation(my_raster_source, my_vector_source, npDistFileName='', units='pixels')
+ my_cls_boundaries = createClassBoundaries(my_raster_source, my_vector_source, npDistFileName='', units='pixels')
+ my_cls_categories = createClassCategoriesPresent(my_vector_source)
+
+    # Call main functions to create label data for inst
+ my_inst_segmentation = createInstanceSegmentation(my_raster_source, my_vector_source)
+ my_inst_boundaries = createInstanceBoundaries(my_raster_source, my_vector_source)
+ my_inst_categories = createInstanceCategories(my_vector_source)
+
+ #Wraps for cls struct
+ cls_boundaries_wrap = np.array([my_cls_boundaries])
+ cls_categories_wrap = my_cls_categories
+
+ #Wraps for inst struct
+ inst_boundaries_wrap = np.array([my_inst_boundaries])
+ inst_categories_wrap = my_inst_categories
+
+ #Create a class struct
+ GTcls = {'Segmentation': my_cls_segmentation , 'Boundaries': cls_boundaries_wrap, 'CategoriesPresent': cls_categories_wrap}
+
+
+ #Create the instance struct
+ GTinst = {'Segmentation': my_inst_segmentation , 'Boundaries': inst_boundaries_wrap, 'Categories': inst_categories_wrap}
+
+ #Save the files
+ scipy.io.savemat(annotationName_cls,{'GTcls': GTcls})
+ scipy.io.savemat(annotationName_inst,{'GTinst': GTinst})
+
+    print("Done with " + annotationName_cls)
+
+ entry = {'rasterFileName': my_raster_source,
+ 'geoJsonFileName': geoJson,
+ 'annotationName' : annotationName_cls,
+ 'annotationName_cls': annotationName_cls,
+ 'annotationName_inst':annotationName_inst,
+ 'width': srcRaster.RasterXSize,
+ 'height': srcRaster.RasterYSize,
+ 'depth' : srcRaster.RasterCount,
+ 'basename': os.path.splitext(os.path.basename(my_raster_source))[0]
+ }
+
+ return entry
+
diff --git a/python/spaceNetUtilities/labelTools.pyc b/python/spaceNetUtilities/labelTools.pyc
new file mode 100644
index 0000000..a5d7411
Binary files /dev/null and b/python/spaceNetUtilities/labelTools.pyc differ
diff --git a/python/splitAOI_Train_Test_Val.py b/python/splitAOI_Train_Test_Val.py
new file mode 100644
index 0000000..5dd2e75
--- /dev/null
+++ b/python/splitAOI_Train_Test_Val.py
@@ -0,0 +1,136 @@
+from spaceNetUtilities import labelTools as lT
+import glob
+import argparse
+import random
+import os
+import errno
+import shutil
+
+
+def mkdir_p(path):
+ try:
+ os.makedirs(path)
+ except OSError as exc: # Python >2.5
+ if exc.errno == errno.EEXIST and os.path.isdir(path):
+ pass
+ else:
+ raise
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument("-imgDir", "--imgDir", action='append', type=str,
+ help="Directory of Raster Images BaseLayer")
+ parser.add_argument("-geoDir", "--geojsonDir", type=str,
+ help="Directory of geojson files")
+
+ parser.add_argument("-o", "--outputCSV", type=str,
+ help="Output File Name and Location for CSV")
+ parser.add_argument("-pixPrecision", "--pixelPrecision", type=int,
+                        help="Number of decimal places to include for pixel coordinates, "
+                             "uses round(xPix, pixPrecision). Default = 2",
+ default=2)
+ parser.add_argument("-outputDir", "--outputDir", type=str,
+                        help="Base directory in which to place the train/test/validate output")
+
+ parser.add_argument("--CreateProposalFile", help="Create ProposalsFile",
+ action="store_true")
+ parser.add_argument("-strip", "--stripOutFromGeoJson", type=str,
+                        help="Space-delimited list of substrings to strip from the geojson-derived raster names")
+ parser.add_argument("--DontstripFirstUnderScore", action="store_false")
+ args = parser.parse_args()
+
+ rasterDirectory = args.imgDir
+ geoJsonDirectory = args.geojsonDir
+ outputCSVFileName = args.outputCSV
+ createProposalFile = args.CreateProposalFile
+ if args.stripOutFromGeoJson:
+ stripList = args.stripOutFromGeoJson.split(' ')
+ else:
+ stripList = []
+
+ imageDirList = args.imgDir
+
+ jsonList = []
+ chipSummaryList = []
+
+ rasterDirList = []
+ secondaryImagePrefix = []
+ for imageDir in imageDirList:
+ rasListTemp = glob.glob(os.path.join(imageDir, '*.tif'))
+ rasterDirList.append([imageDir, os.path.basename(rasListTemp[0]).split("_")[0], rasListTemp
+ ])
+
+ geoJsonList = glob.glob(os.path.join(geoJsonDirectory, '*.geojson'))
+ for imageId in geoJsonList:
+ imageId = os.path.basename(imageId)
+ rasterName = imageId.replace('.geojson', '.tif')
+
+ for stripItem in stripList:
+ rasterName = rasterName.replace(stripItem, '')
+
+ chipNameList = []
+ for rastDir in rasterDirList:
+ if args.DontstripFirstUnderScore:
+
+ rasterName = rastDir[1] + "_" + rasterName.split('_', 1)[1]
+
+ else:
+ rasterName = rastDir[1] + "_" + rasterName
+ chipName = [rastDir[1], os.path.join(rastDir[0], rasterName)]
+ chipNameList.append(chipName)
+ print(imageId)
+
+ chipSummary = {'chipName': chipNameList,
+ 'geoVectorName': os.path.join(geoJsonDirectory, imageId),
+ 'imageId': os.path.splitext(imageId)[0]}
+
+ chipSummaryList.append(chipSummary)
+
+ random.shuffle(chipSummaryList)
+ trainSplitPoint = int(round(len(chipSummaryList)*0.6))
+ valSplitPoint = int(round(len(chipSummaryList)*0.8))
+
+ splitInformationList = [['train', chipSummaryList[0:trainSplitPoint]],
+                            ['test', chipSummaryList[trainSplitPoint:valSplitPoint]],
+                            ['validate', chipSummaryList[valSplitPoint:]]
+ ]
+
+ outputDir = args.outputDir
+
+ # Perform Split
+ for splitInformation in splitInformationList:
+
+ for rastDir in rasterDirList:
+ print(outputDir)
+ print(splitInformationList)
+
+ mkdir_p(os.path.join(outputDir, splitInformation[0], rastDir[1]))
+
+ mkdir_p(os.path.join(outputDir, splitInformation[0], 'geojson', 'buildings'))
+
+ for chipSummary in splitInformation[1]:
+ for chip in chipSummary['chipName']:
+ shutil.copy(chip[1], os.path.join(outputDir, splitInformation[0], chip[0]))
+
+ shutil.copy(chipSummary['geoVectorName'], os.path.join(outputDir,
+ splitInformation[0],
+ 'geojson',
+ 'buildings'))
+
+ rasterPrefix = 'PAN'
+ chipSummaryListTmp = []
+
+ for chipSummary in splitInformation[1]:
+
+ chipSummaryTmp = chipSummary
+ chipSummaryTmp['chipName'] = chipSummary['chipName'][0][1]
+ chipSummaryListTmp.append(chipSummaryTmp)
+
+ print("starting")
+ outputCSVFileName = os.path.join(outputDir, args.outputCSV + splitInformation[0] + ".csv")
+
+ lT.createCSVSummaryFile(chipSummaryListTmp, outputCSVFileName,
+ replaceImageID=rasterPrefix+"_",
+ pixPrecision=args.pixelPrecision)
+
+ print("finished")