diff --git a/README.md b/README.md
index 01df3236fd2..50f5cebd324 100644
--- a/README.md
+++ b/README.md
@@ -11,8 +11,8 @@ running TensorFlow 0.12 or earlier, please
 
 ## Models
 
-- [adversarial_text](adversarial_text): semi-supervised sequence learning with
-  adversarial training.
+- [adversarial_crypto](adversarial_crypto): protecting communications with adversarial neural cryptography.
+- [adversarial_text](adversarial_text): semi-supervised sequence learning with adversarial training.
 - [attention_ocr](attention_ocr): a model for real-world image text extraction.
 - [autoencoder](autoencoder): various autoencoders.
 - [cognitive_mapping_and_planning](cognitive_mapping_and_planning): implementation of a spatial memory based mapping and planning architecture for visual navigation.
diff --git a/adversarial_crypto/README.md b/adversarial_crypto/README.md
index 431a9d41c49..504ca234beb 100644
--- a/adversarial_crypto/README.md
+++ b/adversarial_crypto/README.md
@@ -4,15 +4,15 @@ This is a slightly-updated model used for the paper
 ["Learning to Protect Communications with Adversarial Neural
 Cryptography"](https://arxiv.org/abs/1610.06918).
 
-> We ask whether neural networks can learn to use secret keys to protect 
-> information from other neural networks. Specifically, we focus on ensuring 
-> confidentiality properties in a multiagent system, and we specify those 
-> properties in terms of an adversary. Thus, a system may consist of neural 
-> networks named Alice and Bob, and we aim to limit what a third neural 
-> network named Eve learns from eavesdropping on the communication between 
-> Alice and Bob. We do not prescribe specific cryptographic algorithms to 
-> these neural networks; instead, we train end-to-end, adversarially. 
-> We demonstrate that the neural networks can learn how to perform forms of 
+> We ask whether neural networks can learn to use secret keys to protect
+> information from other neural networks. Specifically, we focus on ensuring
+> confidentiality properties in a multiagent system, and we specify those
+> properties in terms of an adversary. Thus, a system may consist of neural
+> networks named Alice and Bob, and we aim to limit what a third neural
+> network named Eve learns from eavesdropping on the communication between
+> Alice and Bob. We do not prescribe specific cryptographic algorithms to
+> these neural networks; instead, we train end-to-end, adversarially.
+> We demonstrate that the neural networks can learn how to perform forms of
 > encryption and decryption, and also how to apply these operations
 > selectively in order to meet confidentiality goals.
 
@@ -22,7 +22,7 @@ pairs.
 
 ## Prerequisites
 
-The only software requirements for running the encoder and decoder is having 
+The only software requirements for running the encoder and decoder is having
 Tensorflow installed.
 
 Requires Tensorflow r0.12 or later.
@@ -32,8 +32,10 @@ Requires Tensorflow r0.12 or later.
 After installing TensorFlow and ensuring that your paths are configured
 appropriately:
 
-    python train_eval.py
-
+```
+python train_eval.py
+```
+
 This will begin training a fresh model. If and when the model becomes
 sufficiently well-trained, it will reset the Eve model multiple times
 and retrain it from scratch, outputting the accuracy thus obtained
@@ -46,7 +48,7 @@ the paper - the convolutional layer width was reduced by a factor of two.
 In the version in the paper, there was a nonlinear unit after the
 fully-connected layer; that nonlinear has been removed here.
 These changes improve the robustness of training. The
-initializer for the convolution layers has switched to the 
+initializer for the convolution layers has switched to the
 tf.contrib.layers default of xavier_initializer instead of
 a simpler truncated_normal.
 
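For context on the initializer change described in the last hunk, here is a minimal sketch contrasting an explicitly passed truncated_normal initializer with the tf.contrib.layers default of xavier_initializer, using TensorFlow r0.12-era APIs. This is only an illustration: the layer shapes, kernel size, and scope names below are assumptions, not the actual model code in train_eval.py.

```python
import tensorflow as tf

# Illustrative input: a batch of length-16 "bit" vectors, reshaped to rank 4
# so it can be fed to a 2-D convolution. The real model's shapes are defined
# in train_eval.py; these values are placeholders for the example.
inputs = tf.placeholder(tf.float32, [None, 16, 1, 1])

# Old style: explicitly pass a simple truncated_normal initializer.
old_conv = tf.contrib.layers.conv2d(
    inputs, num_outputs=2, kernel_size=[4, 1], stride=1,
    weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
    scope='old_conv')

# New style: rely on the tf.contrib.layers default, xavier_initializer
# (shown explicitly here; omitting weights_initializer has the same effect).
new_conv = tf.contrib.layers.conv2d(
    inputs, num_outputs=2, kernel_size=[4, 1], stride=1,
    weights_initializer=tf.contrib.layers.xavier_initializer(),
    scope='new_conv')
```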