**This is the first article of a multi-part series on self-learning AI agents, or, more precisely, Deep Reinforcement Learning. The aim of the series is not just to give you an intuition for these topics. Rather, I want to provide you with a more in-depth comprehension of the theory, mathematics, and implementation behind the most popular and effective methods of Deep Reinforcement Learning.**

- Part I: Markov Decision Processes (**this article**)
- Part II: Deep Q-Learning
- Part III: Deep (Double) Q-Learning
- Part IV: Policy Gradients for Continuous Action Spaces
- Part V: Dueling Networks
- Part VI: Asynchronous Actor-Critic Agents
- …

**…**

**This in-depth article addresses the questions of why we need loss functions in deep learning and which loss functions should be used for which tasks.**

**In Short:** *Loss functions in deep learning are used to measure how well a neural network model performs a certain task.*

**Table of Contents**

- Why do we need Loss Functions in Deep Learning?
- Mean Squared Error Loss Function
- Cross-Entropy Loss Function
- Mean Absolute Percentage Error
- Take-Home-Message

Before we discuss different kinds of loss functions used in Deep Learning, it would be a good idea to address the question of why we need loss functions in…
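To make the two main losses from the table of contents above concrete, here is a minimal sketch in plain Python (no framework assumed; the function names and toy values are my own):

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: average of the squared differences,
    # typically used for regression tasks
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Categorical cross-entropy for a one-hot target and predicted
    # class probabilities, typically used for classification tasks
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

print(mse([1.0, 2.0], [1.5, 1.5]))                 # → 0.25
print(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))   # ≈ 0.223
```

The closer the predicted probability of the correct class is to 1, the smaller the cross-entropy; a perfect regression prediction drives the MSE to 0.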

**In this detailed guide, I will explain everything there is to know about activation functions in deep learning. Especially what activation functions are and why we must use them when implementing neural networks.**

**Short answer**: *We must use activation functions such as ReLU, sigmoid, and tanh in order to add a non-linear property to the neural network. In this way, the network can model more complex relationships and patterns in the data.*

But let us discuss this in more detail in the following.

- Recap: Forward Propagation
- Neural Network is a Function
- Why do we need Activation Functions?
- Different Kinds of…
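As a quick sketch of the short answer above, the three activation functions mentioned can be written in a few lines of plain Python (the helper names are my own):

```python
import math

def relu(x):
    # ReLU: passes positive inputs through unchanged, zeroes out negatives
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real input into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Tanh: squashes any real input into the interval (-1, 1)
    return math.tanh(x)

for x in (-2.0, 0.0, 2.0):
    print(x, relu(x), round(sigmoid(x), 3), round(tanh(x), 3))
```

All three are non-linear, which is exactly what lets a stack of layers represent more than a single linear transformation.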

**Artificial Intelligence is on the rise. The pace of growth for artificial intelligence within the consumer, enterprise, government, and defense sectors continues. In this article, we will analyze the current size of the AI market and make forecasts for the future.**

Let’s first take a look at the current state of the usage of artificial intelligence in the corporate sector. In the following, I refer to the results of a survey conducted by the technology research company *Vanson Bourne*.

The company was commissioned by the software company *Teradata* to ask executive decision-makers about the topic of artificial intelligence for the…

**This is a detailed guide that should answer the questions of why and when we need Stochastic-, Batch-, and Mini-Batch Gradient Descent when implementing Deep Neural Networks.**

**In Short**: *We need these different ways of implementing gradient descent to address several issues we will almost certainly encounter when training neural networks: local minima and saddle points of the loss function, as well as noisy gradients.*

More on that will be explained in the following article — nice ;)

- 1. Introduction: Let’s recap Gradient Descent
- 2. Common Problems when Training Neural Networks (local minima, saddle points, noisy gradients)
- 3. Batch-Gradient Descent
- …
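The difference between the three variants boils down to how many training examples are used per update step. Here is a minimal sketch on a one-parameter least-squares problem (the toy data, learning rate, and step count are arbitrary choices of mine):

```python
import random

# Toy data sampled from y = 3 * x, so the optimal weight is w = 3
data = [(x, 3.0 * x) for x in range(1, 11)]

def gradient(w, batch):
    # d/dw of mean((w*x - y)^2) over the given batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(w, batch_size, lr=0.005, steps=200, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        # batch_size=1 -> stochastic GD; =len(data) -> (full) batch GD;
        # anything in between -> mini-batch GD
        batch = rng.sample(data, batch_size)
        w -= lr * gradient(w, batch)
    return w

print(train(0.0, 1))          # stochastic: noisy but cheap updates
print(train(0.0, 4))          # mini-batch: the usual compromise
print(train(0.0, len(data)))  # batch: exact but expensive gradient
```

The noise in the stochastic and mini-batch gradients is precisely what can help the optimizer escape shallow local minima and saddle points.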

**Regularization is a set of techniques that can prevent overfitting in neural networks and thus improve the accuracy of a Deep Learning model when facing completely new data from the problem domain. In this article, we will address the most popular regularization techniques, which are called L1, L2, and dropout.**

- Recap: Overfitting
- What is Regularization?
- L2 Regularization
- L1 Regularization
- Why do L1 and L2 Regularizations work?
- Dropout
- Take-Home-Message

*One of the most important aspects of training neural networks is avoiding overfitting.* We have addressed the issue of overfitting in more detail in this article.

**However, let us do a quick…**
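The core of L1 and L2 regularization is simply a penalty term added to the loss. A minimal sketch in plain Python (the function names and the lambda value are toy choices of mine):

```python
def l2_regularized_loss(weights, data_loss, lam=0.01):
    # L2: total loss = data loss + lambda * sum of squared weights,
    # penalizing large weights quadratically
    return data_loss + lam * sum(w ** 2 for w in weights)

def l1_regularized_loss(weights, data_loss, lam=0.01):
    # L1: total loss = data loss + lambda * sum of absolute weights,
    # pushing individual weights toward exactly zero (sparsity)
    return data_loss + lam * sum(abs(w) for w in weights)

weights = [0.5, -2.0, 3.0]
print(l2_regularized_loss(weights, 1.0))  # → 1.1325
print(l1_regularized_loss(weights, 1.0))  # → 1.055
```

Because the penalty grows with the weights, minimizing the total loss trades a slightly worse fit on the training data for smaller, simpler weight configurations.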

**In this detailed guide, I will explain how Deep Learning can be used in the field of Anomaly Detection. Furthermore, I will explain how to implement a Deep Neural Network Model for Anomaly Detection in TensorFlow 2.0. All source code and the corresponding dataset are, of course, available for you to download. Nice ;)**

- Introduction
- Anomaly Detection
- Use Cases for Anomaly Detection Systems
- Anomaly Case Study: Financial Fraud
- How does an Autoencoder work?
- Anomaly Detection with AutoEncoder
- Fraud Detection in TensorFlow 2.0

An anomaly refers to a data instance that is significantly different from other instances in the dataset. Often…
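The core idea behind autoencoder-based anomaly detection is the reconstruction error: instances unlike the training data reconstruct poorly. As a toy sketch (not the article's TensorFlow model), the projection below hard-codes the 1-D compression that a linear autoencoder would learn for 2-D data lying near the line y = x; all names and values are my own illustrative choices:

```python
def reconstruct(point):
    # "Encode": project the 2-D point onto the 1-D latent direction (1,1)/sqrt(2);
    # "decode": map the latent code back into the 2-D input space
    code = (point[0] + point[1]) / 2.0
    return (code, code)

def reconstruction_error(point):
    # Squared distance between the input and its reconstruction
    x_hat = reconstruct(point)
    return sum((a - b) ** 2 for a, b in zip(point, x_hat))

normal = [(1.0, 1.0), (2.0, 2.1), (3.0, 2.9)]
anomaly = (1.0, -1.0)  # far off the manifold the "model" represents

threshold = 0.1
for p in normal + [anomaly]:
    err = reconstruction_error(p)
    print(p, round(err, 4), "ANOMALY" if err > threshold else "ok")
```

A trained autoencoder generalizes this: anything it cannot compress and rebuild well is flagged, with the threshold chosen from the error distribution on normal data.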

**This is a beginner’s guide to Deep Learning and Neural networks. In the following article, we are going to discuss the meaning of Deep Learning and Neural Networks. In particular, we will focus on how Deep Learning works in practice.**

- What exactly is Deep Learning?
- Why is Deep Learning so popular these days?
- Biological Neural Networks
- Artificial Neural Networks
- Neural Network Architecture
- Layer Connections
- Learning Process in a Neural Network
- Loss Functions
- Gradient Descent

Have you ever wondered how Google’s Translate app is able to translate entire paragraphs from one language into another in a matter of milliseconds?

How Netflix…

**In this article, we are going to discuss the difference between Artificial Intelligence, Machine Learning, and Deep Learning.**

**Furthermore, we will address the question of why Deep Learning as a young emerging field is far superior to traditional Machine Learning.**

*Originally published at **https://www.deeplearning-academy.com**.*

Artificial Intelligence, Machine Learning, and Deep Learning are popular buzzwords that everyone seems to use nowadays.

But still, there is a big misconception among many people about the meaning of these terms.

In the worst case, one may think that these terms describe the same thing — which is simply false.

A large number of companies…

**In this article, I will present to you the most sophisticated optimization algorithms in Deep Learning that allow neural networks to learn faster and achieve better performance.**

**These algorithms are Stochastic Gradient Descent with Momentum, AdaGrad, RMSProp, and Adam Optimizer.**

*Originally published at **https://www.deeplearning-academy.com**.*

- Why do we need better optimization Algorithms?
- Stochastic Gradient Descent with Momentum
- AdaGrad
- RMSProp
- Adam Optimizer
- What is the best Optimization Algorithm for Deep Learning?

To train a neural network model, we must define a loss function in order to measure the difference between our model predictions and the label that we want to predict. What…
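As a small taste of the first algorithm on the list, here is a minimal sketch of gradient descent with momentum on a toy one-dimensional quadratic (the hyperparameters and the example function are arbitrary choices of mine, not tuned values):

```python
def sgd_momentum(grad, w=5.0, lr=0.1, beta=0.9, steps=200):
    # Momentum keeps an exponentially decaying accumulation of past
    # gradients, which damps oscillations and speeds up progress
    # along directions where the gradient is consistent.
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(w)
        w -= lr * v
    return w

# Minimize f(w) = (w - 2)^2, whose gradient is 2 * (w - 2);
# the minimum lies at w = 2
print(sgd_momentum(lambda w: 2.0 * (w - 2.0)))
```

AdaGrad, RMSProp, and Adam extend this idea by additionally adapting the effective learning rate per parameter from the history of squared gradients.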

Deep Learning & AI Software Developer | MSc. Physics | Educator