In artificial neural networks (ANNs), a hidden layer is a layer of artificial neurons that is neither the input layer nor the output layer, but is positioned between the two. An example of an ANN that uses hidden layers is the feedforward neural network.[1]

[Figure: Example of a hidden layer in a deep neural network]

The hidden layers transform inputs from the input layer into a representation the output layer can use. This is accomplished by applying weights to the inputs and passing the weighted sums through an activation function, which computes each neuron's output from its inputs and weights. This allows the artificial neural network to learn non-linear relationships between the input and output data.
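The transformation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the layer sizes, the sigmoid activation, and the random weights are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Non-linear activation applied to each neuron's weighted sum.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: weighted sum of the inputs, then the activation.
    h = sigmoid(w_hidden @ x + b_hidden)
    # Output layer: weighted sum of the hidden activations.
    return sigmoid(w_out @ h + b_out)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])            # 2 input features (illustrative)
w_hidden = rng.normal(size=(3, 2))   # 3 hidden neurons
b_hidden = np.zeros(3)
w_out = rng.normal(size=(1, 3))      # 1 output neuron
b_out = np.zeros(1)

y = forward(x, w_hidden, b_hidden, w_out, b_out)
print(y.shape)  # (1,)
```

Because the sigmoid is non-linear, stacking such layers lets the network represent functions that a single weighted sum cannot.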

The weights can initially be assigned at random. They are then fine-tuned and calibrated through a process called backpropagation.[2]

Limitations

Too many hidden layers relative to the complexity of the problem at hand can cause what is called overfitting, where the network fits the training data so closely that its ability to generalize is limited. Conversely, too few hidden layers relative to the problem's complexity can cause underfitting, and the network may struggle to learn the problem given to it at all.[3]
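One reason depth matters is that every added hidden layer adds trainable parameters, and parameter count is a rough proxy for model capacity. The sketch below (with illustrative layer sizes) counts weights and biases for fully connected networks of different depths; the deeper network has far more parameters to fit, and hence more room to overfit small datasets.

```python
def param_count(layer_sizes):
    """Weights + biases of a fully connected network.

    layer_sizes lists the neuron count per layer, e.g. [2, 8, 1]
    means 2 inputs, one hidden layer of 8 neurons, 1 output.
    """
    return sum(n_in * n_out + n_out  # weight matrix plus bias vector
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(param_count([2, 8, 1]))        # one hidden layer  -> 33
print(param_count([2, 8, 8, 8, 1]))  # three hidden layers -> 177
```

Matching this capacity to the problem's complexity is what avoids both failure modes described above.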

References

  1. ^ Antoniadis, Panagiotis (March 18, 2024). "Hidden Layers in a Neural Network | Baeldung on Computer Science". Baeldung. Retrieved May 2, 2024.
  2. ^ Rouse, Margaret (2018-09-05). "Hidden Layer". Techopedia. Retrieved May 2, 2024.
  3. ^ Uzair, Muhammad; Jamil, Noreen. "Effects of Hidden Layers on the Efficiency of Neural Networks". IEEE 23rd Multitopic Conference.