IJMNTA Vol.6 No.2, June 2017
The Computational Theory of Intelligence: Feedback
Abstract: In this paper we discuss the application of feedback to intelligent agents. We show that feedback adds a momentum component to the learning algorithm. Using Lyapunov stability theory, we derive the condition necessary for the entropy minimization principle of computational intelligence to be preserved in the presence of feedback.
Cite this paper: Kovach, D. (2017) The Computational Theory of Intelligence: Feedback. International Journal of Modern Nonlinear Theory and Application, 6, 70-73. doi: 10.4236/ijmnta.2017.62006.
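The momentum component described in the abstract can be illustrated with a standard gradient-descent-with-momentum update, in which the previous update direction is fed back into the current one. This is a minimal sketch: the learning rate, decay factor `beta`, and quadratic test function are illustrative choices, not parameters taken from the paper.

```python
def momentum_step(w, grad, velocity, lr=0.1, beta=0.9):
    """One parameter update with a momentum (feedback) term.

    The previous velocity is fed back into the current update,
    scaled by beta; with beta = 0 this reduces to plain gradient
    descent. (Illustrative values, not from the paper.)
    """
    velocity = beta * velocity - lr * grad(w)
    return w + velocity, velocity

# Minimize f(w) = w^2, whose gradient is 2w, starting from w = 5.0.
grad = lambda w: 2.0 * w
w, v = 5.0, 0.0
for _ in range(300):
    w, v = momentum_step(w, grad, v)
# w has converged close to the minimizer at 0
```

With these values the iteration is a stable linear system (its eigenvalues lie inside the unit circle), so `w` spirals in toward the minimum rather than diverging; this is the kind of stability question the paper addresses via Lyapunov theory.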
References

[1]   Kovach, D. (2014) The Computational Theory of Intelligence: Information Entropy. International Journal of Modern Nonlinear Theory and Application, 3, 182-190.
https://doi.org/10.4236/ijmnta.2014.34020

[2]   Kaastra, I. and Boyd, M. (1996) Designing a Neural Network for Forecasting Financial and Economic Time Series. Neurocomputing, 10, 215-236.
https://doi.org/10.1016/0925-2312(95)00039-9

[3]   Lee, K.Y., Sode-Yome, A. and Park, J.H. (1998) Adaptive Hopfield Neural Networks for Economic Load Dispatch. IEEE Transactions on Power Systems, 13, 519-526.

[4]   Istook, E. and Martinez, T. (2002) Improved Backpropagation Learning in Neural Networks with Windowed Momentum. International Journal of Neural Systems, 12, 303-318.
https://doi.org/10.1142/S0129065702001114

[5]   Bird, R.J. (2003) Chaos and Life: Complexity and Order in Evolution and Thought. Columbia University Press, New York.
https://doi.org/10.7312/bird12662

[6]   Latora, V., Baranger, M., Rapisarda, A. and Tsallis, C. (2000) The Rate of Entropy Increase at the Edge of Chaos. Physics Letters A, 273, 97-103.
https://doi.org/10.1016/S0375-9601(00)00484-9

[7]   Cencini, M., Cecconi, F. and Vulpiani, A. (2010) Chaos: From Simple Models to Complex Systems. World Scientific, Hackensack, NJ.
