Journey through the theory and practice of modern deep learning, and apply innovative techniques to solve everyday data problems.
In Inside Deep Learning, you will learn how to:
Implement deep learning with PyTorch
Select the right deep learning components
Train and evaluate a deep learning model
Fine-tune deep learning models to maximize performance
Understand deep learning terminology
Adapt existing PyTorch code to solve new problems
Inside Deep Learning is an accessible guide to implementing deep learning with the PyTorch framework. It demystifies complex deep learning concepts and teaches you to understand the vocabulary of deep learning so you can keep pace in a rapidly evolving field. No detail is skipped—you’ll dive into math, theory, and practical applications. Everything is clearly explained in plain English.
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
Deep learning doesn’t have to be a black box! Knowing how your models and algorithms actually work gives you greater control over your results. And you don’t have to be a mathematics expert or a senior data scientist to grasp what’s going on inside a deep learning system. This book gives you the practical insight you need to understand and explain your work with confidence.
About the book
Inside Deep Learning illuminates the inner workings of deep learning algorithms in a way that even machine learning novices can understand. You’ll explore deep learning concepts and tools through plain language explanations, annotated code, and dozens of instantly useful PyTorch examples. Each type of neural network is clearly presented without complex math, and every solution in this book can run using readily available GPU hardware!
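To give a flavor of the annotated PyTorch code the book relies on, here is a minimal sketch (not taken from the book) of a single training step that moves a small model onto a GPU when one is available; the layer sizes and the random stand-in data are illustrative assumptions only.

```python
import torch
from torch import nn

# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny fully connected network for a toy regression problem.
model = nn.Sequential(
    nn.Linear(10, 32),  # 10 input features -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 1),   # one output value
).to(device)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Random stand-in data; the book works with real datasets instead.
X = torch.randn(64, 10, device=device)
y = torch.randn(64, 1, device=device)

# One training step: forward pass, loss, backward pass, parameter update.
optimizer.zero_grad()
loss = loss_fn(model(X), y)
loss.backward()
optimizer.step()
print(f"Training loss: {loss.item():.4f}")
```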
What's inside
Select the right deep learning components
Train and evaluate a deep learning model
Fine-tune deep learning models to maximize performance
Understand deep learning terminology
About the reader
For Python programmers with basic machine learning skills.
About the author
Edward Raff is a Chief Scientist at Booz Allen Hamilton and the author of the JSAT machine learning library.
Table of Contents
PART 1 FOUNDATIONAL METHODS
1 The mechanics of learning
2 Fully connected networks
3 Convolutional neural networks
4 Recurrent neural networks
5 Modern training techniques
6 Common design building blocks
PART 2 BUILDING ADVANCED NETWORKS
7 Autoencoding and self-supervision
8 Object detection
9 Generative adversarial networks
10 Attention mechanisms
11 Sequence-to-sequence
12 Network design alternatives to RNNs
13 Transfer learning
14 Advanced building blocks