On this page you can find more information about the book, its content, and its philosophy.


Download the code

In the Apress repository you can find the code I used for the book, plus additional material that will help you understand the concepts explained in it. The repository is free and can be accessed by anyone. I am continuously adding material: notebooks with exercises and content that expands on the book.

Book Description

Why write a book on applied deep learning? After all, try a Google search on the subject and you will be overwhelmed by the huge number of results. The problem is that no course, blog, or book teaches, in a consolidated and beginner-friendly way, advanced subjects such as regularization, advanced optimizers like Adam or RMSProp, mini-batch gradient descent, dynamic learning rate decay, dropout, hyperparameter search, Bayesian optimization, metric analysis, and so on.

I found material (typically of very bad quality) only for implementing very basic models on very simple datasets. If you want to learn how to classify the MNIST dataset of 10 handwritten digits, you are in luck: almost everyone with a blog has done that, mostly by copying the code found on the TensorFlow website. Searching for something that teaches how logistic regression actually works? Not so easy. How to prepare a dataset for an interesting binary classification? Even more difficult.

I felt the need to fill this gap. I spent hours trying to debug models for reasons as dumb as having the labels wrong: instead of 0 and 1 I had 1 and 2, and no blog warned me about that. It is important to perform a proper metric analysis when developing your models, but nobody teaches you how (at least not in easily accessible material). I find that covering complete examples, from data preparation to error analysis, is a very efficient and fun way to learn the right techniques. In this book, I always work through complete and complex examples to explain concepts that are hard to grasp in any other way. It is not possible to understand why it is important to choose the right learning rate, for example, if you don't see what can happen when you select the wrong value.

Note that the goal of this book is not to make you a Python or TensorFlow expert, or someone who can develop new complex algorithms. Python and TensorFlow are simply tools that are very well suited to developing models and getting results quickly, and therefore I use them. I could have used other tools, but these are the ones most used by practitioners, so it makes sense to choose them.
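To give a flavor of the learning-rate point (a minimal sketch of my own, not taken from the book), here is plain gradient descent on the one-dimensional function f(w) = w², whose gradient is 2w. With a small learning rate the iterate shrinks toward the minimum at 0; with a learning rate that is too large, each step overshoots and the value grows without bound.

```python
import numpy as np

def gradient_descent(learning_rate, steps=20, w0=1.0):
    """Minimize f(w) = w**2 with plain gradient descent."""
    w = w0
    for _ in range(steps):
        w = w - learning_rate * 2 * w  # gradient of w**2 is 2*w
    return w

print(gradient_descent(0.1))  # shrinks toward the minimum at 0
print(gradient_descent(1.5))  # diverges: |w| doubles at every step
```

Each update multiplies w by (1 − 2·learning_rate), so any learning rate above 1 makes that factor larger than 1 in absolute value, and the loss explodes instead of decreasing. Seeing this happen on a real model is far more instructive than being told about it.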

The topics are extremely well explained with great detail. Content is very rich and the examples are also easy to understand.

I really appreciate the extent to which the author has added content to help readers easily consume the math.

Jojo Moolayil

AI and Machine Learning Expert

The depth for each topic in this chapter is truly commendable. […]

The chapter is rich in content, very lucid and maintains an excellent flow. Examples are great and the codes are also easy to understand

Jojo Moolayil

AI and Machine Learning Expert


  • Computational graphs; introduction to TensorFlow (the "construction" and "evaluation" phases)
  • Linear regression with TensorFlow
  • Python environment setup; development of a linear regression example in TensorFlow
  • A network with one neuron
  • Logistic and linear regression with one neuron
  • Preparation of a real dataset
  • Neural networks with many layers
  • Explanation of the concept of overfitting
  • Weight initialization (Xavier and He)
  • The gradient descent algorithm
  • Dynamic learning rate decay
  • Optimizers (Momentum, RMSProp, Adam)
  • Regularization: L1, L2, and dropout
  • Metric analysis
  • Explanation of why we need train, dev and test datasets
  • How to split datasets in the deep learning context
  • Strategies to solve and identify different dataset problems (overfitting, data from different sources or distributions, etc.)
  • Hyperparameter Tuning
  • Grid Search
  • Random Search
  • Bayesian Optimization
  • Coarse to fine optimization
  • Parameter search on a logarithmic scale
  • A complete research project, described end to end
  • Logistic regression with one neuron, developed completely from scratch (using no library except NumPy)
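The last item on the list, a single-neuron logistic regression built with nothing but NumPy, can be sketched roughly as follows. This is my own minimal illustration, not the book's code; the toy dataset and variable names are invented for the example. A single neuron is just a weight vector, a bias, and a sigmoid activation, trained by gradient descent on the cross-entropy loss (and note the labels are 0 and 1, not 1 and 2!):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification dataset: one feature, labels 0 and 1.
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One neuron: weight vector w, bias b, trained with gradient descent
# on the cross-entropy loss.
w = np.zeros(X.shape[1])
b = 0.0
learning_rate = 0.5

for epoch in range(200):
    p = sigmoid(X @ w + b)            # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)   # gradient of cross-entropy w.r.t. w
    grad_b = np.mean(p - y)           # gradient w.r.t. the bias
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The book develops this kind of model in full, including where the gradient formulas come from; the sketch above only shows the overall shape of the computation.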