Commit 98da9ef: Import open-source version
alexisakers committed Apr 2, 2018 (0 parents)
Showing 119 changed files with 11,854 additions and 0 deletions.
Binary file added .github/VideoThumbnail.png
77 changes: 77 additions & 0 deletions .gitignore
# Xcode
#
# gitignore contributors: remember to update Global/Xcode.gitignore, Objective-C.gitignore & Swift.gitignore

## Build generated
build/
DerivedData/

## Various settings
*.pbxuser
!default.pbxuser
*.mode1v3
!default.mode1v3
*.mode2v3
!default.mode2v3
*.perspectivev3
!default.perspectivev3
xcuserdata/

## Other
*.moved-aside
*.xccheckout
*.xcscmblueprint

## Obj-C/Swift specific
*.hmap
*.ipa
*.dSYM.zip
*.dSYM

## Playgrounds
timeline.xctimeline
playground.xcworkspace

# Swift Package Manager
#
# Add this line if you want to avoid checking in source code from Swift Package Manager dependencies.
# Packages/
# Package.pins
.build/

# CocoaPods
#
# We recommend against adding the Pods directory to your .gitignore. However
# you should judge for yourself, the pros and cons are mentioned at:
# https://guides.cocoapods.org/using/using-cocoapods.html#should-i-check-the-pods-directory-into-source-control
#
# Pods/

# Carthage
#
# Add this line if you want to avoid checking in source code from Carthage dependencies.
# Carthage/Checkouts

Carthage/Build

# fastlane
#
# It is recommended to not store the screenshots in the git repo. Instead, use fastlane to re-generate the
# screenshots whenever they are needed.
# For more information about the recommended setup visit:
# https://docs.fastlane.tools/best-practices/source-control/#source-control

fastlane/report.xml
fastlane/Preview.html
fastlane/screenshots
fastlane/test_output

**/.DS_Store
**/*.pyc
Data-Model/Training/input/augmented
Data-Model/Training/input/normalized
Data-Model/Training/input/training-data/**/*.jpg
Data-Model/Training/input/training-data/**/*.JPG
Data-Model/Training/input/training-data/**/*.jpeg
Data-Model/Training/input/training-data/**/*.JPEG
Data-Model/Training/output/tf_files
Binary file added Data-Model/EmojiSketches.mlmodel
32 changes: 32 additions & 0 deletions Data-Model/README.md
# Data Model

The Core ML model was built using transfer learning. To perform this task, I used:

- A data set of hand-drawn emojis I created
- TensorFlow and Docker
- A pre-trained [MobileNet](https://arxiv.org/abs/1704.04861) snapshot
- Data augmentation

## Classes

The data model can recognize seven emojis:

- 😊 `smile`
- 😂 `laugh`
- ☀️ `sun`
- ☁️ `cloud`
- ❤️ `heart`
- ✔️ `checkmark`
- 🥐 `croissant`

## Training with your own data set

To build your own data set, you can use the `SampleCollection` iOS app. When you are ready to train your model, export the saved samples using iTunes or Xcode and put them in the `Training/input/training-data` folder.

Open a shell in the `Training` folder.

Run the `prepare.sh` script to download the dependencies and prepare the images. If you do not want to perform data augmentation (which is CPU-intensive), you can edit the `input/normalize` script to use the `input/training-data` folder instead of `input/augmented`.

Once the `input/normalized` folder is filled with the normalized images, you can run the `retrain.sh` script to create the Core ML model. This can take around half an hour, depending on your computer's capabilities.

The `EmojiSketches.mlmodel` model file in this directory will be updated when training has completed.
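The steps above can be condensed into a short shell session. This is a dry-run sketch that only echoes the commands (the `augment.sh`, `prepare.sh`, and `retrain.sh` script names come from this repository; the dry-run wrapper itself is illustrative):

```shell
# Dry-run sketch of the training workflow: the commands are assembled
# into strings and echoed, so nothing executes until you run them
# yourself from the Data-Model/Training folder.
AUGMENT_CMD="(cd input && ./augment.sh)"  # optional data augmentation
PREPARE_CMD="./prepare.sh"                # download dependencies, normalize images
RETRAIN_CMD="./retrain.sh"                # retrain MobileNet, update EmojiSketches.mlmodel

echo "$AUGMENT_CMD"
echo "$PREPARE_CMD"
echo "$RETRAIN_CMD"
```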
20 changes: 20 additions & 0 deletions Data-Model/Training/Dockerfile
FROM gcr.io/tensorflow/tensorflow:1.6.0

ENV IMAGE_SIZE 224
ENV OUTPUT_GRAPH tf_files/retrained_graph.pb
ENV OUTPUT_LABELS tf_files/retrained_labels.txt
ENV ARCHITECTURE mobilenet_1.0_${IMAGE_SIZE}
ENV TRAINING_STEPS 1000

VOLUME /output
VOLUME /input

RUN curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/image_retraining/retrain.py

ENTRYPOINT python -m retrain \
--how_many_training_steps="${TRAINING_STEPS}" \
--model_dir=/output/tf_files/models/ \
--output_graph=/output/"${OUTPUT_GRAPH}" \
--output_labels=/output/"${OUTPUT_LABELS}" \
--architecture="${ARCHITECTURE}" \
--image_dir=/input/
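The Dockerfile declares `/input` and `/output` volumes, so both must be mounted when the container runs. A hedged sketch of the build and run commands, echoed rather than executed (the `mlmoji-train` tag is an arbitrary name, not from the repository):

```shell
# Dry-run sketch: assemble the docker commands and print them.
IMAGE_TAG="mlmoji-train"  # hypothetical image name
BUILD_CMD="docker build -t $IMAGE_TAG ."
# Mount the normalized images at /input, collect tf_files under /output,
# and optionally override the TRAINING_STEPS environment variable.
RUN_CMD="docker run --rm -v \$PWD/input/normalized:/input -v \$PWD/output:/output -e TRAINING_STEPS=1000 $IMAGE_TAG"
echo "$BUILD_CMD"
echo "$RUN_CMD"
```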
17 changes: 17 additions & 0 deletions Data-Model/Training/export.py
#
# MLMOJI
#
# This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
#
# Copyright (c) 2018 Alexis Aubry. Available under the terms of the MIT License.
#

import tfcoreml as tf_converter

tf_converter.convert(tf_model_path='output/tf_files/retrained_graph.pb',
                     mlmodel_path='../EmojiSketches.mlmodel',
                     output_feature_names=['final_result:0'],
                     class_labels='output/tf_files/retrained_labels.txt',
                     input_name_shape_dict={'input:0': [1, 224, 224, 3]},
                     image_input_names=['input:0'])
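Running the conversion is a two-command affair once `output/tf_files/` exists. A sketch, echoed rather than executed (`tfcoreml` is the converter package the script imports; version requirements are not pinned here):

```shell
# Dry-run sketch: print the commands needed to produce the .mlmodel.
INSTALL_CMD="pip install tfcoreml"  # assumption: installed from PyPI
CONVERT_CMD="python export.py"      # run from the Training folder
echo "$INSTALL_CMD"
echo "$CONVERT_CMD"
```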

30 changes: 30 additions & 0 deletions Data-Model/Training/input/augment.sh
#!/bin/bash

#
# MLMOJI
#
# This file is part of Alexis Aubry's WWDC18 scholarship submission open source project.
#
# Copyright © 2018 Alexis Aubry. Available under the terms of the MIT License.
#

# Prepare directory
rm -rf augmented
cp -R training-data augmented

# Generate Data
python augmentor/main.py augmented/ fliph
python augmentor/main.py augmented/ noise_0.01
python augmentor/main.py augmented/ fliph,noise_0.01
python augmentor/main.py augmented/ fliph,rot_-30
python augmentor/main.py augmented/ fliph,rot_30
python augmentor/main.py augmented/ rot_15,trans_20_20
python augmentor/main.py augmented/ rot_33,trans_-20_50
python augmentor/main.py augmented/ trans_0_20,zoom_100_50_300_300
python augmentor/main.py augmented/ fliph,trans_50_20,zoom_60_50_200_200
python augmentor/main.py augmented/ rot_-15,zoom_75_50_300_300
python augmentor/main.py augmented/ rot_30
python augmentor/main.py augmented/ blur_4.0
python augmentor/main.py augmented/ fliph,blur_4.0
python augmentor/main.py augmented/ fliph,rot_30,blur_4.0
python augmentor/main.py augmented/ zoom_50_50_250_250
123 changes: 123 additions & 0 deletions Data-Model/Training/input/augmentor/README.md
## Image Augmentor

> Original repository: [codebox/image_augmentor](https://github.com/codebox/image_augmentor)

This is a simple data augmentation tool for image files, intended for use with machine learning data sets.
The tool scans a directory containing image files, and generates new images by performing a specified set of
augmentation operations on each file that it finds. This process multiplies the number of training examples that can
be used when developing a neural network, and should significantly improve the resulting network's performance,
particularly when the number of training examples is relatively small.

Run the utility from the command-line as follows:

python main.py <image dir> <transform1> <transform2> ...

The `<image dir>` argument should be the path to a directory containing the image files to be augmented.
The utility will search the directory recursively for files with any of the following extensions:
`jpg, jpeg, bmp, png`.

The `transform` arguments determine what types of augmentation operations will be performed,
using the codes listed in the table below:

|Code|Description|Example Values|
|---|---|------|
|`fliph`|Horizontal Flip|`fliph`|
|`flipv`|Vertical Flip|`flipv`|
|`noise`|Adds random noise to the image|`noise_0.01`,`noise_0.5`|
|`rot`|Rotates the image by the specified amount|`rot_90`,`rot_-45`|
|`trans`|Shifts the pixels of the image by the specified amounts in the x and y directions|`trans_20_10`,`trans_-10_0`|
|`zoom`|Zooms into the specified region of the image, performing stretching/shrinking as necessary|`zoom_0_0_20_20`,`zoom_-10_-20_10_10`|
|`blur`|Blurs the image by the specified amount|`blur_1.5`|


Each transform argument results in one additional output image being generated for each input image.
An argument may consist of one or more augmentation operations. Multiple operations within a single argument
must be separated by commas, and the order in which the operations are performed will match the order in which they
are specified within the argument.

### Examples
Produce 2 output images for each input image, one of which is flipped horizontally, and one of which is flipped vertically:

python main.py ./my_images fliph flipv

Produce 1 output image for each input image, by first rotating the image by 90&deg; and then flipping it horizontally:

python main.py ./my_images rot_90,fliph

### Operations

#### Horizontal Flip
Mirrors the image around a vertical line running through its center

python main.py ./my_images fliph

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__fliph.png" alt="Flipped Image" width="150" height="150"/>

#### Vertical Flip
Mirrors the image around a horizontal line running through its center

python main.py ./my_images flipv

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__flipv.png" alt="Flipped Image" width="150" height="150"/>

#### Noise
Adds random noise to the image. The amount of noise to be added is specified by a floating-point numeric value that is included
in the transform argument; the numeric value must be greater than 0.

python main.py ./my_images noise_0.01 noise_0.02 noise_0.05

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__noise0.01.png" alt="Noisy Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__noise0.02.png" alt="Noisy Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__noise0.05.png" alt="Noisy Image" width="150" height="150"/>

#### Rotate
Rotates the image. The angle of rotation is specified by an integer value that is included in the transform argument.

python main.py ./my_images rot_90 rot_180 rot_-90

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__rot90.png" alt="Rotated Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__rot180.png" alt="Rotated Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__rot-90.png" alt="Rotated Image" width="150" height="150"/>

#### Translate
Performs a translation on the image. The sizes of the translation in the x and y directions are specified by integer values that
are included in the transform argument.

python main.py ./my_images trans_20_20 trans_0_100

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__trans20_20.png" alt="Translated Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__trans0_100.png" alt="Translated Image" width="150" height="150"/>

#### Zoom/Stretch
Zooms in (or out) to a particular area of the image. The top-left and bottom-right coordinates of the target region are
specified by integer values included in the transform argument. By specifying a target region with an aspect ratio that
differs from that of the source image, stretching transformations can be performed.

python main.py ./my_images zoom_150_0_300_150 zoom_0_50_300_150 zoom_200_0_300_300

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__zoom150_0_300_150.png" alt="Zoomed Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__zoom0_50_300_150.png" alt="Stretched Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__zoom200_0_300_300.png" alt="Stretched Image" width="150" height="150"/>

#### Blur
Blurs the image. The amount of blurring is specified by a floating-point value included in the transform argument.

python main.py ./my_images blur_1.0 blur_2.0 blur_4.0

<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw.png" alt="Original Image" width="150" height="150"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__blur1.0.png" alt="Blurred Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__blur2.0.png" alt="Blurred Image" width="150" height="150"/>
<img style="border: 1px solid grey" src="http://codebox.net/graphics/image_augmentor/macaw__blur4.0.png" alt="Blurred Image" width="150" height="150"/>
29 changes: 29 additions & 0 deletions Data-Model/Training/input/augmentor/counter.py
from multiprocessing.dummy import Lock

class Counter:
    def __init__(self):
        self.lock = Lock()
        self._processed = 0
        self._error = 0
        self._skipped_no_match = 0
        self._skipped_augmented = 0

    def processed(self):
        with self.lock:
            self._processed += 1

    def error(self):
        with self.lock:
            self._error += 1

    def skipped_no_match(self):
        with self.lock:
            self._skipped_no_match += 1

    def skipped_augmented(self):
        with self.lock:
            self._skipped_augmented += 1

    def get(self):
        with self.lock:
            return {'processed': self._processed,
                    'error': self._error,
                    'skipped_no_match': self._skipped_no_match,
                    'skipped_augmented': self._skipped_augmented}