A mathematical guide to the theory behind Deep Reinforcement Learning

This is the first article of a multi-part series on self-learning AI agents or, to put it more precisely, Deep Reinforcement Learning. The aim of the series isn't just to give you an intuition for these topics. Rather, I want to provide you with a more in-depth comprehension of the theory, mathematics, and implementation behind the most popular and effective methods of Deep Reinforcement Learning.

Self-Learning AI Agents Series — Table of Contents

Fig. 1: An AI agent that learned how to run and overcome obstacles.

Markov Decision Processes — Table of Contents


A Guide on the Concept of Loss Functions in Deep Learning — What they are, Why we need them…

Source: https://unsplash.com/photos/yNvVnPcurD8

This in-depth article addresses the questions of why we need loss functions in deep learning and which loss functions should be used for which tasks.

In Short: Loss functions in deep learning are used to measure how well a neural network model performs a certain task.
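To make this concrete, the mean squared error, one of the most common loss functions for regression tasks, can be sketched in plain NumPy (a minimal illustration, not code from the article itself):

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error: the average squared difference
    between the model's predictions and the true targets."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

# A perfect prediction yields zero loss; worse predictions yield a larger loss.
print(mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ≈ 1.333
```

The smaller this number, the better the network performs on the task, which is exactly the sense in which a loss function "measures" performance.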

Table of Contents

1. Why do we need Loss Functions in Deep Learning?

Before we discuss different kinds of loss functions used in Deep Learning, it would be a good idea to address the question of why we need loss functions in…


Getting Started

A Guide on the Theory of Activation Functions in Neural Networks and why we need them in the first place.

Source: https://unsplash.com/photos/dQejX2ucPBs

In this detailed guide, I will explain everything there is to know about activation functions in deep learning: in particular, what activation functions are and why we must use them when implementing neural networks.

Short answer: We must use activation functions such as ReLU, sigmoid, and tanh to add a non-linear property to the neural network. In this way, the network can model more complex relationships and patterns in the data.

But let us discuss this in more detail in the following.
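As a small illustration of the three functions named above, here is a plain-NumPy sketch, independent of any particular framework:

```python
import numpy as np

def relu(x):
    """ReLU: passes positive values through, zeroes out negatives."""
    return np.maximum(0.0, x)

def sigmoid(x):
    """Sigmoid: squashes inputs into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Tanh: squashes inputs into the range (-1, 1)."""
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))     # [0. 0. 2.]
print(sigmoid(x))  # ≈ [0.119 0.5   0.881]
print(tanh(x))     # ≈ [-0.964 0.    0.964]
```

Each of these is non-linear, which is what lets a stack of layers represent more than a single linear transformation.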

Table of Contents


Analysis of the Technology of our Future — Trends, Projections, Opportunities

Artificial Intelligence is on the rise. The pace of growth for artificial intelligence within the consumer, enterprise, government, and defense sectors continues. In this article, we will analyze the current size of the AI market and make forecasts for the future.

1. Artificial Intelligence in the Corporate Sector

Let’s first take a look at the current state of the usage of artificial intelligence in the corporate sector. In the following, I refer to the results of a survey conducted by the technology research company Vanson Bourne.

The company was commissioned by the software company Teradata to ask executive decision-makers about the topic of artificial intelligence for the…


Why do we need Stochastic, Batch, and Mini Batch Gradient Descent when implementing Deep Neural Networks?

This is a detailed guide that should answer the questions of why and when we need Stochastic, Batch, and Mini-Batch Gradient Descent when implementing Deep Neural Networks.

In Short: We need these different ways of implementing gradient descent to address several issues we will almost certainly encounter when training neural networks: local minima and saddle points of the loss function, as well as noisy gradients.

More on that will be explained in the following article — nice ;)
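The three variants differ only in how many samples contribute to each update. A compact NumPy sketch on a one-parameter linear model makes this visible; the toy data, learning rate, and batch sizes below are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise; we fit a single weight w with squared-error loss.
X = rng.normal(size=200)
y = 3.0 * X + 0.1 * rng.normal(size=200)

def train(batch_size, lr=0.1, epochs=20):
    """Gradient descent on the MSE loss. batch_size=1 is stochastic,
    batch_size=len(X) is (full-)batch, anything in between is mini-batch."""
    w = 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))  # reshuffle the data each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            grad = 2.0 * np.mean((w * X[b] - y[b]) * X[b])  # dL/dw on this batch
            w -= lr * grad
    return w

for bs in (1, 32, len(X)):  # stochastic, mini-batch, full batch
    print(bs, train(bs))    # each run should end up near w = 3
```

Smaller batches give noisier gradients, and that noise is precisely what can help the optimizer escape saddle points and shallow local minima.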

Table of Contents


A Guide on the Theory and Practicality of the most important Regularization Techniques in Deep Learning

Source: https://www.spacetelescope.org/images/heic0611b/

Regularization is a set of techniques that can prevent overfitting in neural networks and thus improve the accuracy of a Deep Learning model when facing completely new data from the problem domain. In this article, we will address the most popular regularization techniques, which are called L1, L2, and dropout.
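Before diving in, the core mechanics of all three can be sketched in a few lines of plain NumPy (the weight values and the lambda below are purely illustrative):

```python
import numpy as np

def l1_penalty(w, lam):
    """L1 regularization: lambda times the sum of absolute weights."""
    return lam * np.sum(np.abs(w))

def l2_penalty(w, lam):
    """L2 regularization: lambda times the sum of squared weights."""
    return lam * np.sum(w ** 2)

def dropout(a, p, rng):
    """Inverted dropout: during training, zero each activation with
    probability p and scale the survivors so the expected value stays fixed."""
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

w = np.array([0.5, -1.0, 2.0])
data_loss = 0.8  # some unregularized loss value (illustrative)
total_l1 = data_loss + l1_penalty(w, lam=0.01)  # 0.8 + 0.01 * 3.5
total_l2 = data_loss + l2_penalty(w, lam=0.01)  # 0.8 + 0.01 * 5.25
```

L1 and L2 add a penalty on the weights to the loss, discouraging overly large (and thus overly specialized) weights, while dropout randomly disables activations so the network cannot rely on any single unit.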

Table of Contents

1. Recap: Overfitting

One of the most important aspects when training neural networks is avoiding overfitting. We have addressed the issue of overfitting in more detail in this article.

However, let us do a quick…


A Guide on how to implement Neural Networks in TensorFlow 2.0 to detect anomalies.

In this detailed guide, I will explain how Deep Learning can be used in the field of Anomaly Detection. Furthermore, I will explain how to implement a Deep Neural Network model for Anomaly Detection in TensorFlow 2.0. All source code and the corresponding dataset are, of course, available for you to download — nice ;)

Table of Contents

1. Introduction

An anomaly refers to a data instance that is significantly different from other instances in the dataset. Often…
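The article builds a neural network in TensorFlow 2.0 for this; as a framework-free warm-up, the same notion of "significantly different from other instances" can be sketched with a simple z-score threshold (an illustrative baseline, not the article's model):

```python
import numpy as np

def zscore_anomalies(x, threshold=3.0):
    """Flag indices of points whose distance from the mean exceeds
    `threshold` standard deviations -- a simple statistical reading
    of 'significantly different from other instances'."""
    x = np.asarray(x, float)
    z = np.abs(x - x.mean()) / x.std()
    return np.where(z > threshold)[0]

data = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 25.0])  # last point is the anomaly
print(zscore_anomalies(data, threshold=2.0))  # [5]
```

A deep model generalizes this idea: instead of distance from the mean, it learns what "normal" data looks like and flags instances it cannot represent well.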


Learn the most important Basics of Deep Learning and Neural Networks in this detailed Tutorial.

This is a beginner’s guide to Deep Learning and Neural networks. In the following article, we are going to discuss the meaning of Deep Learning and Neural Networks. In particular, we will focus on how Deep Learning works in practice.

If you liked the article and want to share your thoughts, ask questions or stay in touch feel free to connect with me via LinkedIn.

Table of Contents

Have you ever wondered how Google’s translator App is able to translate entire paragraphs from one language into another in a matter of milliseconds?

How Netflix…


Learn the Difference Between the Most Popular Buzzwords in Today's Tech World — AI, Machine Learning, and Deep Learning

Evolution of Artificial Intelligence

In this article, we are going to discuss the difference between Artificial Intelligence, Machine Learning, and Deep Learning.

Furthermore, we will address the question of why Deep Learning as a young emerging field is far superior to traditional Machine Learning.

Originally published at https://www.deeplearning-academy.com.

Artificial Intelligence, Machine Learning, and Deep Learning are popular buzzwords that everyone seems to use nowadays.

But still, there is a big misconception among many people about the meaning of these terms.

In the worst case, one may think that these terms describe the same thing — which is simply false.

A large number of companies…


AdaGrad, RMSProp, Gradient Descent with Momentum & Adam Optimizer demystified

In this article, I will present to you the most sophisticated optimization algorithms in Deep Learning that allow neural networks to learn faster and achieve better performance.

These algorithms are Stochastic Gradient Descent with Momentum, AdaGrad, RMSProp, and Adam Optimizer.



Table of Contents

1. Why do we need better optimization Algorithms?

To train a neural network model, we must define a loss function in order to measure the difference between our model predictions and the label that we want to predict. What…
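As a preview of what follows, the update rules for two of these methods, momentum and Adam, can be sketched in plain NumPy (the hyperparameter values are common defaults, not values taken from the article):

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.1, beta=0.9):
    """SGD with momentum: accumulate an exponentially decaying
    sum of past gradients and step along that velocity."""
    v = beta * v + grad
    return w - lr * v, v

def adam_step(w, grad, m, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: a momentum-style first moment plus an RMSProp-style
    second moment, both bias-corrected by the step counter t."""
    m = b1 * m + (1 - b1) * grad
    s = b2 * s + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    s_hat = s / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

# Minimize f(w) = (w - 4)^2 with Adam; the gradient is 2 * (w - 4).
w, m, s = 0.0, 0.0, 0.0
for t in range(1, 201):
    w, m, s = adam_step(w, 2.0 * (w - 4.0), m, s, t)
print(w)  # should end up near the minimum at w = 4
```

Both rules adapt the raw gradient step: momentum smooths its direction over time, while Adam additionally rescales each parameter's step by an estimate of the gradient's magnitude.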

Artem Oppermann

Deep Learning & AI Software Developer | MSc. Physics | Educator
