🔴 Problem: no hierarchical representation

One reason MLPs have been proposed and studied for decades is the belief that adding more layers would yield increasingly sophisticated levels of abstraction.
It was thought, for example, that an MLP trained on handwritten digit recognition could learn concepts built from combinations of object parts: the digit nine as a loop at the top completed by a vertical stroke.

Note

The human visual system works in exactly this way: it builds progressively more abstract representations from raw sensory data.

Important

This is the behaviour we would like MLPs to exhibit, but in practice it does not emerge.
Inspecting the weights of a trained MLP reveals no evidence of such human-like, part-based reasoning, as the sketch below illustrates.
MLPs behave like a black box: their internal representations are neither interpretable nor easily understandable.
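A minimal sketch of what "inspecting the weights" might look like, assuming scikit-learn and matplotlib are available: train a small MLP on the scikit-learn digits dataset and plot each first-layer weight vector as an image. The layer sizes and training settings are illustrative choices, not part of the original text.

```python
# Hypothetical sketch: look at first-layer MLP weights to see whether they
# resemble interpretable "part detectors" (strokes, loops). Typically they don't.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
import matplotlib.pyplot as plt

digits = load_digits()            # 8x8 grayscale digit images
X = digits.data / 16.0            # scale pixel values to [0, 1]
y = digits.target

# Two hidden layers: if the hierarchical-abstraction story held, the first
# layer might learn strokes and the second layer combinations of strokes.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X, y)

# Plot each of the 32 first-layer weight vectors as an 8x8 image.
# In practice the patterns look like noisy blobs rather than object parts.
fig, axes = plt.subplots(4, 8, figsize=(8, 4))
for ax, w in zip(axes.ravel(), mlp.coefs_[0].T):
    ax.imshow(w.reshape(8, 8), cmap="gray")
    ax.axis("off")
plt.show()
```

Running this typically shows diffuse, noise-like weight patterns, which is what the black-box observation above refers to.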

✅ Solution