What is LeNet and its Architecture
In this blog, we discuss what LeNet is and its architecture. LeCun et al. introduced the LeNet convolutional neural network structure in 1998. LeNet usually refers to LeNet-5, a straightforward convolutional neural network. Convolutional neural networks are feed-forward neural networks whose artificial neurons respond to a portion of the surrounding cells in their coverage area, a property that makes them well suited to large-scale image processing.
One of the first convolutional neural networks, LeNet-5 helped deep learning advance. The groundbreaking work has been known as LeNet-5 since 1998, following years of study and numerous successful revisions. The model's success was largely due to its simple, uncomplicated architecture. It is a multi-layer convolutional neural network for image classification.
Background of LeNet
The backpropagation approach was initially used in 1989 by Yann LeCun and colleagues at Bell Labs. They hypothesized that network generalization could be significantly improved by supplying constraints from the task's domain. To read handwritten numbers, they trained a convolutional neural network with backpropagation and successfully applied it to recognizing handwritten zip code digits provided by the US Postal Service. This was the original version of the LeNet system.
After several more years of research, Yann LeCun, Leon Bottou, and colleagues analyzed numerous techniques for reading handwritten characters on paper in 1998, using a common handwritten-digit benchmark to compare them. The comparison made clear that the convolutional network performed better than every other model. They also gave examples of neural networks deployed in real-world scenarios, including two systems for online handwriting recognition and models that could read millions of checks every day.
Architecture of LeNet
Now, we shall discuss the architecture of LeNet. LeNet, a prototype of the first convolutional neural networks, possesses the fundamental components of a convolutional neural network, including the convolutional layer, pooling layer, and fully connected layer, providing the groundwork for its future advancement. LeNet-5 consists of seven layers, each of which (excluding the input) contains trainable parameters.
Layer C1 is a convolution layer with six 5×5 convolution kernels, producing six 28×28 feature maps. The 32×32 input is slightly larger than the characters themselves, which prevents strokes in the input picture from falling outside the convolution kernel's bounds.
Layer S2 is a subsampling/pooling layer that produces six 14×14 feature maps. Each unit in a feature map is connected to a 2×2 neighborhood in the corresponding feature map of C1.
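The C1 and S2 shape transitions above can be sketched with PyTorch (used here purely for illustration; the original LeNet-5 used trainable subsampling units rather than plain average pooling):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)        # one 32x32 grayscale input image
c1 = nn.Conv2d(1, 6, kernel_size=5)  # C1: six 5x5 kernels, (32 - 5) + 1 = 28
s2 = nn.AvgPool2d(kernel_size=2)     # S2: 2x2 subsampling, 28 / 2 = 14

out_c1 = c1(x)
out_s2 = s2(out_c1)
print(out_c1.shape)  # torch.Size([1, 6, 28, 28])
print(out_s2.shape)  # torch.Size([1, 6, 14, 14])
```

The arithmetic in the comments is the standard valid-convolution size formula: with a 5×5 kernel and no padding, a 32×32 input shrinks to 28×28, and 2×2 pooling halves each spatial dimension to 14×14.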
Layer C3 is a convolution layer with sixteen 5×5 convolution kernels. The first six C3 feature maps take their input from every contiguous subset of three feature maps in S2, the next six from every contiguous subset of four, and the following three from discontinuous subsets of four. The final feature map takes all six S2 feature maps as input.
Layer S4 is similar to S2: it performs 2×2 pooling and outputs sixteen 5×5 feature maps.
Layer C5 is a convolution layer with 120 convolution kernels of size 5×5. Each unit is connected to a 5×5 neighborhood on all sixteen of S4's feature maps. Because S4's feature maps are themselves 5×5, C5's output size is 1×1, so the connection between S4 and C5 is complete. C5 is nonetheless labeled a convolutional layer rather than a fully connected one: if LeNet-5's input were made larger while the structure stayed the same, C5's output would be larger than 1×1.
Layer F6 is fully connected to C5 and outputs 84 values. Finally, the output layer produces one value per class; in the original LeNet-5 these were ten Euclidean radial basis function (RBF) units, one for each digit.
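Putting the seven layers together, here is a common modern sketch of LeNet-5 in PyTorch. It simplifies two details of the original: C3 uses full connectivity instead of the sparse connection table, and the RBF output layer is replaced by an ordinary linear layer.

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Modern sketch of LeNet-5 (full C3 connectivity, linear output)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # C1: 32x32 -> 6 x 28x28
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2),       # S2: -> 6 x 14x14
            nn.Conv2d(6, 16, kernel_size=5),   # C3: -> 16 x 10x10
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2),       # S4: -> 16 x 5x5
            nn.Conv2d(16, 120, kernel_size=5), # C5: -> 120 x 1x1
            nn.Tanh(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # 120-dimensional vector
            nn.Linear(120, 84),                # F6: 84 units
            nn.Tanh(),
            nn.Linear(84, num_classes),        # output layer: one unit per class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = LeNet5()
logits = model(torch.randn(1, 1, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```

Running a random 32×32 image through the network confirms the layer-by-layer shape arithmetic described above, ending in ten class scores.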