keras.md
Title

  • Easy and fully managed distributed training with Keras and Cloud ML Engine
  • KerasとCloud ML Engineでかんたん分散学習 ("Easy distributed training with Keras and Cloud ML Engine")

Time

45 min

Target audience and level

Data analysts, data scientists, and database engineers. Entry level.

Agenda

Keras is a popular deep learning framework that provides easy-to-learn APIs for quick data analytics, making it best suited for prototyping, PoCs, and educational use cases. The challenge is that it is not easy to extend your Keras code to production-scale distributed training across multiple GPUs and nodes: you would need to wrestle with modifying your Keras code and building a GPU cluster to distribute the training workload across multiple GPU nodes. Google's Cloud ML Engine is the solution to this. With ML Engine, you can easily convert your Keras model into a scalable TensorFlow graph and train it in a scalable, fully managed training environment with NVIDIA K80 and P100 GPUs, without modifying your code to dispatch the workload to specific GPU devices. In this session, we will learn how you can empower Keras with multiple GPUs and the cloud, without the burden of building your own GPU cluster for large-scale distributed training.
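As a rough sketch of the workflow described above, a training job can be submitted to Cloud ML Engine with the `gcloud` CLI; the trainer package contains the Keras/TensorFlow code, and the `--scale-tier` flag selects a GPU-backed managed environment. The job name, bucket, package path, and region below are hypothetical placeholders, not values from this session:

```shell
# Hypothetical example: submit a Keras training package to Cloud ML Engine.
# Replace the job name, bucket, paths, and region with your own values.
gcloud ml-engine jobs submit training keras_job_001 \
  --module-name trainer.task \
  --package-path ./trainer \
  --job-dir gs://my-bucket/keras-job-output \
  --region us-central1 \
  --runtime-version 1.8 \
  --scale-tier BASIC_GPU
```

ML Engine provisions the GPU machines, runs the trainer module, writes outputs to the `--job-dir` bucket, and tears the cluster down when the job finishes, which is what removes the need to build and operate your own GPU cluster.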