Loss Functions in Deep Learning

A Guide on the Concept of Loss Functions in Deep Learning — What they are, Why we need them…

Artem Oppermann
10 min read · Mar 7, 2021

This in-depth article addresses the questions of why we need loss functions in deep learning and which loss functions should be used for which tasks.

In Short: Loss functions in deep learning are used to measure how well a neural network model performs a certain task.

Table of Contents

  1. Why do we need Loss Functions in Deep Learning?
  2. Mean Squared Error Loss Function
  3. Cross-Entropy Loss Function
  4. Mean Absolute Percentage Error
  5. Take-Home Message

1. Why do we need Loss Functions in Deep Learning?

Before we discuss the different kinds of loss functions used in deep learning, it is worth addressing why we need loss functions in the first place.

By now you are probably familiar with the mathematical operations that happen inside a neural network. Essentially, there are just two (sketched in code right after this list):

  • Forward Propagation
  • Backpropagation with Gradient Descent
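To make these two operations concrete, here is a minimal sketch (not from the original article) of a single linear layer trained on a toy regression problem with NumPy: the forward pass produces predictions, a mean squared error loss measures how well the model performs, and gradient descent updates the parameters. The toy data, variable names, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Toy regression data (illustrative assumption, not from the article)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# A single linear layer: weights and bias
w = np.zeros(3)
b = 0.0
learning_rate = 0.1

for step in range(200):
    # Forward propagation: compute predictions from the current parameters
    y_pred = X @ w + b

    # Loss function: mean squared error quantifies how far off the predictions are
    loss = np.mean((y_pred - y) ** 2)

    # Backpropagation: gradients of the loss with respect to the parameters
    grad_w = 2 * X.T @ (y_pred - y) / len(y)
    grad_b = 2 * np.mean(y_pred - y)

    # Gradient descent: step the parameters in the direction that lowers the loss
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print("learned weights:", w)   # should end up close to true_w
print("final loss:", loss)
```

The loss is the single number that backpropagation differentiates; without it, gradient descent would have no signal telling it in which direction to move the weights.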
