A mathematical guide on the theory behind Deep Reinforcement Learning

This is the first article of a multi-part series on self-learning AI agents, or, to call it more precisely, Deep Reinforcement Learning. The aim of the series isn’t just to give you an intuition for these topics. Rather, I want to provide you with a more in-depth comprehension of the theory, mathematics, and implementation behind the most popular and effective methods of Deep Reinforcement Learning.

Self-Learning AI Agents Series — Table of Contents

Fig. 1: An AI agent that learned how to run and overcome obstacles.

Markov Decision Processes — Table of Contents


Not sure if good model… or just overfitting?

Source: Author’s own image.

In applied Deep Learning, we very often face the problems of overfitting and underfitting. This is a detailed guide that answers the questions of what Overfitting and Underfitting in Deep Learning are and how to prevent these phenomena.

In Short: Overfitting means that the neural network performs very well on training data but fails as soon as it sees new data from the problem domain. Underfitting, on the other hand, means that the model performs poorly on both datasets.
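The rule of thumb above can be sketched in a few lines of code. This is a minimal illustration, not part of the article's implementation, and the loss values and thresholds are made-up examples:

```python
# Diagnose a model from its final training and validation losses.
# The thresholds here are illustrative assumptions, not universal constants.

def diagnose(train_loss, val_loss, gap_threshold=0.1, high_loss=0.5):
    """Label a model as over-, under-, or well-fitted from its two losses."""
    if train_loss > high_loss and val_loss > high_loss:
        return "underfitting"   # model performs poorly on both datasets
    if val_loss - train_loss > gap_threshold:
        return "overfitting"    # great on training data, poor on new data
    return "good fit"

print(diagnose(train_loss=0.05, val_loss=0.60))  # overfitting
print(diagnose(train_loss=0.70, val_loss=0.75))  # underfitting
print(diagnose(train_loss=0.08, val_loss=0.12))  # good fit
```

The key signal is the gap between the two losses: a large gap with a low training loss points to overfitting, while two high losses point to underfitting.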

Table of Contents

  1. Overfitting
  2. Underfitting
  3. Variance Bias Tradeoff
  4. Identifying Overfitting and Underfitting during Training
  5. How to avoid Overfitting?
  6. How…


A Guide on the Concept of Loss Functions in Deep Learning — What they are, Why we need them…

Source: https://unsplash.com/photos/yNvVnPcurD8

This in-depth article addresses the questions of why we need loss functions in deep learning and which loss functions should be used for which tasks.

In Short: Loss functions in deep learning are used to measure how well a neural network model performs a certain task.
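Two of the loss functions covered in the article can be written out directly. The following is a hand-rolled sketch for illustration (not a framework API), using made-up target and prediction values:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average squared distance between targets and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy for a one-hot target and predicted class probabilities."""
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

# Regression example: predictions close to the targets give a small MSE.
print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))        # ~0.4167

# Classification example: the loss is -log of the probability
# assigned to the correct class (here class 1, with p = 0.7).
print(cross_entropy([0, 1, 0], [0.1, 0.7, 0.2]))    # ~0.357
```

In both cases, a lower value means the network performs the task better, which is exactly what makes these functions usable as training objectives.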

Table of Contents

  1. Why do we need Loss Functions in Deep Learning?
  2. Mean Squared Error Loss Function
  3. Cross-Entropy Loss Function
  4. Mean Absolute Percentage Error
  5. Take-Home-Message

1. Why do we need Loss Functions in Deep Learning?

Before we discuss different kinds of loss functions used in Deep Learning, it would be a good idea to address the question of why we need loss functions in…


Getting Started

A Guide on the Theory of Activation Functions in Neural Networks and why we need them in the first place.

Source: https://unsplash.com/photos/dQejX2ucPBs

In this detailed guide, I will explain everything there is to know about activation functions in deep learning. In particular, I will cover what activation functions are and why we must use them when implementing neural networks.

Short answer: We must use activation functions such as ReLU, sigmoid, and tanh in order to add a non-linear property to the neural network. In this way, the network can model more complex relationships and patterns in the data.
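The three functions named above are simple enough to define by hand. This is an illustrative sketch, not the article's implementation:

```python
import math

def relu(x):
    """Rectified linear unit: passes positive values, zeroes out negatives."""
    return max(0.0, x)

def sigmoid(x):
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any real number into the range (-1, 1)."""
    return math.tanh(x)

# Without a non-linearity between layers, a stack of linear layers
# collapses into a single linear map; applying e.g. ReLU breaks that.
print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(sigmoid(0.0))            # 0.5
print(tanh(0.0))               # 0.0
```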

But let us discuss this in more detail in the following.

Table of Contents

  1. Neural Network is a Function
  2. Why do we need Activation Functions?
  3. Different Kinds of…


Analysis of the Technology of our Future — Trends, Projections, Opportunities

Artificial Intelligence is on the rise. The pace of growth for artificial intelligence within the consumer, enterprise, government, and defense sectors continues. In this article, we will analyze the current size of the AI market and make forecasts for the future.

1. Artificial Intelligence in the Corporate Sector

Let’s first take a look at the current state of the usage of artificial intelligence in the corporate sector. In the following, I refer to the results of a survey conducted by the technology research company Vanson Bourne.

The company was commissioned by the software company Teradata to ask executive decision-makers about the topic of artificial intelligence for the…


Why do we need Stochastic, Batch, and Mini Batch Gradient Descent when implementing Deep Neural Networks?

This is a detailed guide that should answer the questions of why and when we need Stochastic-, Batch-, and Mini-Batch Gradient Descent when implementing Deep Neural Networks.

In Short: We need these different ways of implementing gradient descent to address several issues we will almost certainly encounter when training neural networks: local minima and saddle points of the loss function, as well as noisy gradients.
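The three variants differ only in how many training examples each parameter update averages over. As a framework-free sketch (a toy one-parameter model on synthetic data, not the article's code), the batch size alone selects the variant:

```python
import random

# Fit w in y = w * x by minimizing squared error.
# batch_size = len(data) -> batch GD; 1 -> stochastic GD; in between -> mini-batch.

def train(data, batch_size, lr=0.05, epochs=200, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Gradient of the mean squared error w.r.t. w over this batch.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

data = [(x, 3.0 * x) for x in [-2, -1, 1, 2]]   # true slope is 3.0
print(round(train(list(data), batch_size=len(data)), 2))  # batch GD
print(round(train(list(data), batch_size=1), 2))          # stochastic GD
print(round(train(list(data), batch_size=2), 2))          # mini-batch GD
```

All three converge here because the toy problem is convex; the article's point is that on real, non-convex loss surfaces the gradient noise introduced by smaller batches can help escape saddle points and poor local minima.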

More on that will be explained in the following article — nice ;)

Table of Contents

  2. Common Problems when Training Neural Networks (local minima, saddle points, noisy gradients)
  3. Batch-Gradient Descent


A Guide on the Theory and Practicality of the most important Regularization Techniques in Deep Learning

Source: https://www.spacetelescope.org/images/heic0611b/

Regularization is a set of techniques that can prevent overfitting in neural networks and thus improve the accuracy of a Deep Learning model when facing completely new data from the problem domain. In this article, we will address the most popular regularization techniques which are called L1, L2, and dropout.
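L1 and L2 regularization both work by adding a penalty on the weights to the base loss. The following sketch uses made-up weights and an arbitrary penalty strength purely for illustration, not any framework's API:

```python
def l2_penalty(weights, lam):
    """L2 regularization: penalize the sum of squared weights."""
    return lam * sum(w ** 2 for w in weights)

def l1_penalty(weights, lam):
    """L1 regularization: penalize the sum of absolute weights."""
    return lam * sum(abs(w) for w in weights)

weights = [0.5, -1.0, 2.0]   # illustrative model weights
base_loss = 0.30             # illustrative data loss (e.g. MSE)

# The optimizer minimizes loss + penalty, which pushes weights
# toward smaller values and thus toward simpler models.
print(round(base_loss + l2_penalty(weights, lam=0.01), 4))  # 0.3525
print(round(base_loss + l1_penalty(weights, lam=0.01), 4))  # 0.335
```

Dropout works differently: instead of penalizing weights, it randomly deactivates a fraction of the neurons during each training step.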

Table of Contents

  1. What is Regularization?
  2. L2 Regularization
  3. L1 Regularization
  4. Why do L1 and L2 Regularizations work?
  5. Dropout
  6. Take-Home-Message

1. Recap: Overfitting

One of the most important aspects when training neural networks is avoiding overfitting. We have addressed the issue of overfitting in more detail in this article.

However, let us do a quick…


A Guide on how to implement Neural Networks in TensorFlow 2.0 to detect anomalies.

In this detailed guide, I will explain how Deep Learning can be used in the field of Anomaly Detection. Furthermore, I will explain how to implement a Deep Neural Network model for Anomaly Detection in TensorFlow 2.0. All source code and the corresponding dataset are, of course, available for you to download — nice ;)
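The article builds the detector as an autoencoder in TensorFlow 2.0; the core idea can be previewed framework-free. In this sketch the "reconstruction" is simply the feature-wise mean of the normal data, a stand-in for a trained autoencoder's output, and all values are made up:

```python
def fit_mean(points):
    """Feature-wise mean of the normal training points (our toy 'autoencoder')."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def reconstruction_error(point, mean):
    """Squared distance between a point and its 'reconstruction'."""
    return sum((a - b) ** 2 for a, b in zip(point, mean))

normal = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]  # illustrative normal transactions
mean = fit_mean(normal)
threshold = 1.0                                 # illustrative cutoff

# Points resembling the normal data reconstruct well; outliers do not.
for p in [[1.1, 2.0], [9.0, -3.0]]:
    err = reconstruction_error(p, mean)
    print("anomaly" if err > threshold else "normal")
```

A real autoencoder replaces the mean with a learned encode-decode mapping, but the decision rule is the same: flag instances whose reconstruction error exceeds a threshold.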

Table of Contents

  1. Anomaly Detection
  2. Use Cases for Anomaly Detection Systems
  3. Anomaly Case Study: Financial Fraud
  4. How does an Autoencoder work?
  5. Anomaly Detection with an Autoencoder
  6. Fraud Detection in TensorFlow 2.0

1. Introduction

An anomaly refers to a data instance that is significantly different from other instances in the dataset. Often…


Learn the most important Basics of Deep Learning and Neural Networks in this detailed Tutorial.

This is a beginner’s guide to Deep Learning and Neural Networks. In this article, we are going to discuss what Deep Learning and Neural Networks mean. In particular, we will focus on how Deep Learning works in practice.

If you liked the article and want to share your thoughts, ask questions or stay in touch feel free to connect with me via LinkedIn.

Table of Contents

  1. Why is Deep Learning so popular these Days?
  2. Biological Neural Networks
  3. Artificial Neural Networks
  4. Neural Network Architecture
  5. Layer Connections
  6. Learning Process in a Neural Network
  7. Loss Functions
  8. Gradient Descent

Have you ever wondered how Google’s Translate app is able to translate entire paragraphs from one language into another in a matter of milliseconds?

How Netflix…


Learn the Difference between the most popular Buzzwords in Today’s Tech World — AI, Machine Learning, and Deep Learning

Evolution of Artificial Intelligence

In this article, we are going to discuss the difference between Artificial Intelligence, Machine Learning, and Deep Learning.

Furthermore, we will address the question of why Deep Learning as a young emerging field is far superior to traditional Machine Learning.

Artificial Intelligence, Machine Learning, and Deep Learning are popular buzzwords that everyone seems to use nowadays.

But still, there is a big misconception among many people about the meaning of these terms.

In the worst case, one may think that these terms describe the same thing — which is simply false.

A large number of companies claim nowadays to incorporate…

Artem Oppermann

Deep Learning & AI Software Developer | MSc. Physics | Educator
