JSIP Vol. 3, No. 3, August 2012
Rapid Algorithm for Independent Component Analysis
Abstract: A class of rapid algorithms for independent component analysis (ICA) is presented. The method incorporates multi-step past information into an existing fixed-point iteration for increasing non-Gaussianity, which can be viewed as adding a variable-size momentum term. The use of past information stems from the idea of surrogate optimization, and including it adds little cost to either software design or runtime execution. The speed of the algorithm is evaluated on both simulated and real-world data; the latter includes color images and electroencephalograms (EEGs), an important data source for human-computer interaction. These experiments show that the presented method, RapidICA, performs quickly, especially for the demixing of super-Gaussian signals.
Cite this paper: R. Yokote and Y. Matsuyama, "Rapid Algorithm for Independent Component Analysis," Journal of Signal and Information Processing, Vol. 3 No. 3, 2012, pp. 275-285. doi: 10.4236/jsip.2012.33037.

[1]   P. Common and C. Jutten, Eds., “Handbook of Blind Source Separation: Independent Component Analysis and Applications,” Academic Press, Oxford, 2010.

[2]   A Hyv?rinen, “Fast and Robust Fixed-Point Algorithms for Independent Component Analysis,” IEEE Transactions on Neural Networks, Vol. 10, No. 3, 1999, pp. 626- 634.

[3]   Y. Matsuyama, N. Katsumata, Y. Suzuki and S. Imahara, “The α-ICA Algorithm,” Proceedings of 2nd International Workshop on ICA and BSS, Helsinki, 2000, pp. 297-302.

[4]   Y. Matsuyama, N. Katsumata and S. Imahara, “Convex Divergence as a Surrogate Function for Independence: The f-Divergence ICA,” Proceedings of 4th International Workshop on ICA and BSS, Nara, 2003, pp. 173-178.

[5]   Y. Matsuyama, N. Katsumata and R. Kawamura, “Independent Component Analysis Minimizing Convex Divergence,” Lecture Notes in Computer Science, No. 2714, 2003, pp. 27-34.

[6]   V. Zarzoro, P. Common and M. Kallel, “How Fast Is FastICA?” 14th European Signal Processing Conference (EUSIPCO), 4-8 September 2006, pp. 4-8.

[7]   N. Katsumata and Y. Matsuyama, “Database Retrieval from Similar Images Using ICA and PCA Bases,” Engineering Applications of Artificial Intelligence, Vol. 18, No. 6, 2005, pp. 705-717. doi:10.1016/j.engappai.2005.01.002

[8]   T. Cover and J. Thomas, “Elements of Information Theory,” John Wiley and Sons, New York, 1991. doi:10.1002/0471200611

[9]   I. Csiszár, “Information-Type Measures of Difference of Probability Distributions and Indirect Observations,” Studia Scientiarum Mathematicarum Hungarica, Vol. 2, 1967, pp. 299-318.

[10]   Y. Matsuyama, “The α-EM Algorithm: Surrogate Likelihood Maximization Using α-Logarithmic Information Measures,” IEEE Transactions on Information Theory, Vol. 49, No. 3, 2003, pp. 692-706. doi:10.1109/TIT.2002.808105

[11]   C. M. Bishop, “Neural Networks for Pattern Recognition,” Oxford University Press, Oxford, 1995.

[12]   Y. Matsuyama, “Hidden Markov Model Estimation Based on Alpha-EM Algorithm: Discrete and Continuous Alpha-HMMs,” Proceedings of International Joint Conference on Neural Networks, San Jose, 7 July-5 August 2011, pp. 808-816.

[13]   H. H. Yang and S. Amari, “Adaptive Online Learning Algorithm for Blind Separation: Maximum Entropy and Minimum Mutual Information,” Neural Computation, Vol. 9, No. 7, 1997, pp. 1457-1482. doi:10.1162/neco.1997.9.7.1457

[14]   B. Blankertz, G. Dornhege, M. Lrauledat, K. R. Muller and G. Curio, “The Non-Invasive Brain-Computer Interface: Fast Acquisition of Effective Performance in Untrained Subjects,” Nueroimage, Vol. 37, No. 2, 2007, pp. 539-550. doi:10.1016/j.neuroimage.2007.01.051