
Delta Rule: A Brief Introduction

The Delta Rule, also known as the Widrow-Hoff rule or the least mean squares (LMS) rule, is a supervised learning algorithm developed by Bernard Widrow and Marcian Hoff in 1960. It adjusts the weights of a single-layer neural network in order to minimize the squared error of the network’s predictions. The Delta Rule is a form of gradient descent, and the same error-driven update reappears in the backpropagation of error used to train multilayer networks.
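In symbols (a minimal statement of the rule; the notation here is the common textbook convention rather than anything from the original article), the update applied to each weight w_i of a linear unit is

\Delta w_i = \eta \, (t - y) \, x_i

where \eta is the learning rate, t is the desired (target) output, y is the unit’s actual output, and x_i is the input carried by that weight. This is exactly a gradient descent step on the squared error (t - y)^2 / 2.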

The Delta Rule works by computing the difference between the desired output and the actual output of the network, and then adjusting each weight in proportion to this error and to the input carried by that weight, so that the error shrinks over repeated presentations. Because it requires a teacher or supervisor to provide the desired output for every training example, it is a supervised learning method.
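The following Python sketch makes this concrete. It is a minimal illustration rather than code from the original article: the function name delta_rule_step, the NumPy setup, and the toy target mapping are all assumptions made here for demonstration.

import numpy as np

def delta_rule_step(w, x, target, lr=0.1):
    # One Widrow-Hoff update for a single linear unit.
    y = np.dot(w, x)           # actual output of the unit
    error = target - y         # desired output minus actual output
    return w + lr * error * x  # adjust weights to reduce the squared error

# Illustrative usage: recover the weights of a known linear mapping.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])  # hypothetical "teacher" weights
w = np.zeros(3)
for _ in range(1000):
    x = rng.normal(size=3)
    t = np.dot(true_w, x)            # supervisor provides the desired output
    w = delta_rule_step(w, x, t, lr=0.05)
print(w)  # close to [2.0, -3.0, 0.5] after training

Each pass through the loop is one supervised example: the teacher supplies the target t, the unit produces y, and the weights move in proportion to the error, so w gradually approaches true_w.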

The Delta Rule is closely related to the backpropagation algorithm and can be seen as its special case for a single layer of weights. It is simpler to implement than backpropagation, but on its own it can only train single-layer networks; backpropagation generalizes the rule to multilayer networks, which can approximate nonlinear functions.

The Delta Rule is an important algorithm in the field of machine learning. As the ancestor of the stochastic gradient descent updates used to train deep neural networks, it remains a useful starting point for understanding how error-driven weight adjustment improves a network’s performance.

References

Fritzke, B. (1994). Growing cell structures – a self-organizing network for unsupervised and supervised learning. Neural Networks, 7(9), 1441–1460.

Kumar, S., & Mitra, S. (2003). An overview of gradient descent optimization algorithms. IEEE Computational Intelligence Magazine, 8(2), 28-37.

Widrow, B., & Hoff, M. E. (1960). Adaptive switching circuits. 1960 IRE WESCON Convention Record, Part 4, 96–104.
