Loss Functions in Deep Learning
A Guide on the Concept of Loss Functions in Deep Learning — What they are, Why we need them…
This in-depth article addresses two questions: why we need loss functions in deep learning, and which loss function to use for which task.
In Short: Loss functions in deep learning are used to measure how well a neural network model performs a certain task.
Table of Contents
- Why do we need Loss Functions in Deep Learning?
- Mean Squared Error Loss Function
- Cross-Entropy Loss Function
- Mean Absolute Percentage Error
- Take-Home-Message
1. Why do we need Loss Functions in Deep Learning?
Before we discuss the different kinds of loss functions used in deep learning, it is worth addressing the question of why we need loss functions in the first place.
By now you are probably familiar with the mathematical operations happening inside a neural network. Essentially, there are just two:
- Forward Propagation
- Backpropagation with Gradient Descent
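
To make these two operations concrete, here is a minimal sketch of a single training step. It assumes PyTorch, a tiny two-layer regression network, and a random dummy batch, none of which come from the article itself: the forward pass produces predictions, the loss function condenses them into one number that measures how well the model did, and backpropagation with gradient descent uses that number to update the weights.

```python
import torch
import torch.nn as nn

# A tiny regression network: 3 input features -> 1 output (illustrative only)
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()  # the loss function scores the predictions
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch of 8 samples (random data, just to keep the sketch runnable)
x = torch.randn(8, 3)
y = torch.randn(8, 1)

# 1. Forward propagation: compute predictions and the loss value
predictions = model(x)
loss = loss_fn(predictions, y)

# 2. Backpropagation with gradient descent: compute gradients of the loss
#    with respect to the weights and take one optimization step
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(f"Loss after one step: {loss.item():.4f}")
```

Notice that everything the optimizer does is driven by that single scalar loss value, which is exactly why the choice of loss function matters.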