IIM Vol. 2, No. 4, April 2010
Hybrid Neural Network Architecture for On-Line Learning
Abstract: Approaches to machine intelligence based on brain models use neural networks for generalization, but they do so as signal-processing black boxes. In reality, the brain consists of many modules that operate in parallel at different levels. In this paper we propose a more realistic, biologically inspired hybrid neural network architecture that uses two kinds of neural networks simultaneously to capture the short-term and long-term characteristics of the signal. The first of these networks quickly adapts to new modes of operation, whereas the second provides more accurate learning within a specific mode. We call these networks the surfacing and deep learning agents and show that this hybrid architecture performs complementary functions that improve the overall learning. The performance of the hybrid architecture is compared with that of back-propagation perceptrons and the CC and FC networks on chaotic time-series prediction, the CATS benchmark test, and smooth function approximation. It is shown that the proposed architecture provides superior performance under the RMS error criterion.
Cite this paper: Y. Chen, S. Kak and L. Wang, "Hybrid Neural Network Architecture for On-Line Learning," Intelligent Information Management, Vol. 2, No. 4, 2010, pp. 253-261. doi: 10.4236/iim.2010.23030.
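The two-agent idea in the abstract can be illustrated with a minimal sketch. Here both agents are simplified to online linear (LMS) one-step-ahead predictors: a fast-adapting agent with a high learning rate (standing in for the surfacing agent) and a slow, more accurate one (standing in for the deep agent), combined by weighting each agent by the inverse of its recent squared error. The class and function names, the linear-model simplification, and the inverse-error weighting are illustrative assumptions, not the paper's actual networks or combination rule.

```python
# Hedged sketch of a two-agent online predictor.
# "fast" agent: high learning rate, adapts quickly after a regime change.
# "slow" agent: low learning rate, converges more accurately within a regime.

class OnlineLinearAgent:
    """One-step-ahead autoregressive predictor trained by LMS gradient descent."""

    def __init__(self, order, lr):
        self.w = [0.0] * order  # AR coefficients, started at zero
        self.lr = lr            # LMS step size

    def predict(self, history):
        return sum(wi * xi for wi, xi in zip(self.w, history))

    def update(self, history, target):
        err = target - self.predict(history)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, history)]
        return err

def hybrid_predict(series, order=3, fast_lr=0.5, slow_lr=0.05, decay=0.9):
    """Run both agents over the series; blend them by inverse recent error."""
    fast = OnlineLinearAgent(order, fast_lr)   # surfacing-agent stand-in
    slow = OnlineLinearAgent(order, slow_lr)   # deep-agent stand-in
    ef = es = 1e-6   # exponentially decayed squared errors of each agent
    preds = []
    for t in range(order, len(series)):
        hist = series[t - order:t]
        pf, ps = fast.predict(hist), slow.predict(hist)
        wf, ws = 1.0 / ef, 1.0 / es        # trust the recently more accurate agent
        preds.append((wf * pf + ws * ps) / (wf + ws))
        target = series[t]
        ef = decay * ef + (1 - decay) * fast.update(hist, target) ** 2
        es = decay * es + (1 - decay) * slow.update(hist, target) ** 2
    return preds
```

On a smoothly varying signal the blend behaves like the slow agent once it has converged, while after an abrupt mode change the fast agent's lower recent error temporarily dominates the weighted sum, which is the complementary behavior the abstract describes.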
