Many methods of data mining exist, including machine learning, which is a major research direction of artificial intelligence. The theory of statistics plays a fundamental role in machine learning. However, the classic theory of statistics is often based on large sample properties, while in reality we often face small samples, sometimes with a very limited number of observations due to resource or other constraints. Consequently the performance of some large sample methods may be unsatisfactory in certain real applications. Vapnik and collaborators pioneered machine learning for finite samples in the 1960s and developed statistical learning theory. To deal with the inconsistency between minimizing the empirical risk and minimizing the expected risk in statistical learning, they proposed the principle of structural risk minimization to investigate the consistency of the machine learning process. Later on, Vapnik and his colleagues at AT&T Bell Laboratories proposed the method of support vector machines. This has since been developed into the method of the support vector classifier (SVC) within statistical learning theory, which has shown satisfactory performance and is becoming a general method of machine learning. We focus our paper on SVC.
Let the training data set be $T = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, where $x_i \in \mathbb{R}^n$ is the training input and $y_i \in \{-1, +1\}$ is the training output, $i = 1, \ldots, m$. The essence of the SVC problem for data mining is to seek a real-valued function $g(x)$ on the training input set such that the decision function $f(x) = \operatorname{sgn}(g(x))$ infers the corresponding category in the set $\{-1, +1\}$ of an arbitrary data input $x$, where $f(x) = +1$ if $g(x) \ge 0$ and $f(x) = -1$ if $g(x) < 0$. Because it is not guaranteed that a hyperplane over the original input space can be found to separate the training input set, a transformation $\Phi$ is often introduced from the input space $\mathbb{R}^n$ to a high-dimensional Hilbert space $\mathcal{H}$ such that the training input set corresponds to a training set $\{\Phi(x_1), \ldots, \Phi(x_m)\}$ in $\mathcal{H}$. Then a hyperplane is sought in $\mathcal{H}$ to separate the transformed inputs and thereby solve the data mining classification problem.
The primary problem of the standard support vector classifier (SVC) is to
$$\min_{w, b, \xi}\ \frac{1}{2} w^{\mathrm T} w + C \sum_{i=1}^{m} \xi_i \quad \text{s.t.}\ \ y_i \bigl(w^{\mathrm T} \Phi(x_i) + b\bigr) \ge 1 - \xi_i, \ \ \xi_i \ge 0, \ \ i = 1, \ldots, m, \tag{1}$$
where T stands for transpose, $C > 0$ is the penalty parameter, $w$ consists of the slopes of the hyperplane, $b$ is the intercept of the hyperplane, and $\xi = (\xi_1, \ldots, \xi_m)^{\mathrm T}$ is the vector of slack variables.
The quadratic Wolfe dual problem of the above primary problem (1) is to
$$\min_{\alpha}\ W(\alpha) = \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} y_i y_j \alpha_i \alpha_j K(x_i, x_j) - \sum_{i=1}^{m} \alpha_i \quad \text{s.t.}\ \ \sum_{i=1}^{m} y_i \alpha_i = 0, \ \ 0 \le \alpha_i \le C, \ \ i = 1, \ldots, m, \tag{2}$$
where $\alpha = (\alpha_1, \ldots, \alpha_m)^{\mathrm T}$, $y = (y_1, \ldots, y_m)^{\mathrm T}$, $K(x, x') = \Phi(x)^{\mathrm T} \Phi(x')$ is the kernel function, and $H = \bigl(y_i y_j K(x_i, x_j)\bigr)_{m \times m}$.
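To make the objects in (2) concrete, the following sketch assembles the dual Hessian $H = (y_i y_j K(x_i, x_j))$ with the linear kernel. The four-point data set is hypothetical and not taken from this paper; it serves only as an illustration.

```python
import numpy as np

# Hypothetical toy data (not from the paper): m = 4 inputs in R^2 with labels +-1.
X = np.array([[0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Linear kernel K(x, x') = x^T x': the Gram matrix is X X^T, and the
# Hessian of the dual objective is H_ij = y_i y_j K(x_i, x_j).
K = X @ X.T
H = np.outer(y, y) * K

# H is symmetric positive semi-definite, so the dual (2) is a convex
# quadratic program over the box [0, C]^m cut by the hyperplane y^T alpha = 0.
print(H)
```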
The relationship between the optimal solution $\alpha^*$ of the dual problem (2) and the optimal solution $(w^*, b^*)$ of the primary problem (1) is given by
$$w^* = \sum_{i=1}^{m} \alpha_i^* y_i \Phi(x_i), \qquad b^* = y_j - \sum_{i=1}^{m} y_i \alpha_i^* K(x_i, x_j) \ \text{ for any } j \text{ with } 0 < \alpha_j^* < C. \tag{3}$$
The corresponding optimal decision function is given by $f(x) = \operatorname{sgn}(g(x))$, where $g(x) = \sum_{i=1}^{m} \alpha_i^* y_i K(x_i, x) + b^*$. Furthermore, each component $\alpha_i^*$ of the optimal solution $\alpha^*$ corresponds to a training input $x_i$, and we call a training input $x_i$ a support vector if its corresponding $\alpha_i^* > 0$.
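The formulas above can be illustrated numerically. In the following sketch the four-point data set and the dual solution $\alpha^* = (1, 1, 1, 1)$ are hypothetical (for these data with $C = 2$ and the linear kernel, one can check that this $\alpha^*$ is feasible and satisfies the optimality conditions of (2)); the code recovers $w^*$, $b^*$ and the decision function:

```python
import numpy as np

# Hypothetical toy problem: four points separable by the line x2 = 1/2,
# linear kernel K(u, v) = u^T v, penalty C = 2. The dual optimum here is
# alpha* = (1, 1, 1, 1) (it is feasible and satisfies the KKT conditions).
X = np.array([[0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 2.0
alpha = np.array([1.0, 1.0, 1.0, 1.0])

K = X @ X.T                       # Gram matrix of the linear kernel
w = (alpha * y) @ X               # w* = sum_i alpha_i* y_i x_i
j = 0                             # any index with 0 < alpha_j* < C
b = y[j] - (alpha * y) @ K[:, j]  # b* = y_j - sum_i y_i alpha_i* K(x_i, x_j)

def decide(x):
    # f(x) = sgn(g(x)) with g(x) = sum_i alpha_i* y_i K(x_i, x) + b*
    return np.sign((alpha * y) @ (X @ x) + b)

print(w, b)  # the separating hyperplane is g(x) = 2 x_2 - 1
```

All four points lie on the margin in this toy example, so every $\alpha_i^*$ is strictly between $0$ and $C$ and every input is a support vector.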
In data mining, the training input data of the support vector classifier (SVC) model are approximations of the true values. When the approximate values are used to establish the SVC model, errors in the data inevitably affect the optimal solution and the corresponding optimal decision function of the SVC model. When the upper bound of the data errors is known, the method of data perturbation analysis can be used to derive upper bounds for the changes in the optimal solution and its corresponding optimal decision function.
We are interested in the stability and sensitivity analysis of the optimal solution and its corresponding optimal decision function. Our analysis is based on Fiacco's Theorem on the sensitivity analysis of a general class of nonlinear programming problems. The second order sufficiency condition required for the Fiacco Theorem was further studied in the literature. On the other hand, kernel functions have been widely used in machine learning. In this paper, we establish a Theorem on data perturbation analysis for a general class of SVC problems with kernel functions. The paper is organized as follows. The main Theorem and its Lemmas are introduced in the next section. Section 3 concludes the paper.
2. Data Perturbation Analysis of the SVC Dual Problem
Suppose that $(w^*, b^*, \xi^*)$ is the optimal solution of the primary problem (1), and write $g^*(x) = (w^*)^{\mathrm T} \Phi(x) + b^*$. Corresponding to the solution $(w^*, b^*, \xi^*)$, we divide, for convenience, the training data into three categories as follows:
1) A category: points satisfying $y_i = +1$ and $g^*(x_i) = 1$, or $y_i = -1$ and $g^*(x_i) = -1$; these are the points lying exactly on the margin, with $y_i g^*(x_i) = 1$. Denote their index set by $A$.
2) B category: points satisfying $y_i = +1$ and $g^*(x_i) > 1$, or $y_i = -1$ and $g^*(x_i) < -1$; these are the points correctly classified outside the margin, with $y_i g^*(x_i) > 1$. Denote their index set by $B$.
3) C category: points satisfying $y_i = +1$ and $g^*(x_i) < 1$, or $y_i = -1$ and $g^*(x_i) > -1$; these are the points inside the margin or misclassified, with $y_i g^*(x_i) < 1$. Denote their index set by $C$.
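The three categories can be computed directly from the margin values $y_i g^*(x_i)$. A minimal sketch follows; the sample margin values and the numerical tolerance are hypothetical:

```python
# Split training indices into the A, B, C categories from the margin values
# y_i * g*(x_i); the tolerance and the sample margin values are hypothetical.
def categorize(margins, tol=1e-8):
    A = [i for i, v in enumerate(margins) if abs(v - 1.0) <= tol]  # on the margin
    B = [i for i, v in enumerate(margins) if v > 1.0 + tol]        # outside the margin
    C = [i for i, v in enumerate(margins) if v < 1.0 - tol]        # inside or misclassified
    return A, B, C

A, B, C = categorize([1.0, 2.5, 0.3, 1.0])
print(A, B, C)  # indices 0 and 3 on the margin, 1 outside, 2 inside
```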
We call an index $i$ a working constraint index if either $y_i \bigl((w^*)^{\mathrm T} \Phi(x_i) + b^*\bigr) - 1 + \xi_i^* = 0$ or $\xi_i^* = 0$, that is, if at least one of the constraints of (1) involving $i$ is active at the optimal solution. Let's start with a lemma.
Lemma 1. Suppose that $(w^*, b^*, \xi^*)$ is the optimal solution of the primary problem (1). Then the set of all working constraint indices is given by $\{1, 2, \ldots, m\}$.
Proof. The claim is straightforward from the definition of working constraint index and the observation that points in the A and B categories satisfy $\xi_i^* = 0$, while points in the C category satisfy $y_i \bigl((w^*)^{\mathrm T} \Phi(x_i) + b^*\bigr) - 1 + \xi_i^* = 0$. □
We then offer a sufficient condition for the linear independence of the gradients of the constraints of the dual problem (2) that are active at its optimal solution.
Lemma 2. Let $\alpha^*$ be the optimal solution of the dual problem (2). If there exists a component $\alpha_j^*$ of $\alpha^*$ such that $0 < \alpha_j^* < C$, then the set of gradients of the active constraints of (2) at $\alpha^*$, namely
$$\Gamma = \{y\} \cup \{-e_i : \alpha_i^* = 0\} \cup \{e_i : \alpha_i^* = C\},$$
where $e_i$ is the $i$-th unit coordinate vector of $\mathbb{R}^m$, is linearly independent.
Proof. The equality constraint $\sum_{i=1}^{m} y_i \alpha_i = 0$ has gradient $y$. For $i$ with $\alpha_i^* = 0$, the active bound constraint $-\alpha_i \le 0$ has gradient $-e_i$; for $i$ with $\alpha_i^* = C$, the active bound constraint $\alpha_i - C \le 0$ has gradient $e_i$. Since it is assumed that there is a support vector multiplier $\alpha_j^*$ such that $0 < \alpha_j^* < C$, the number of unit vectors in $\Gamma \setminus \{y\}$ must be smaller than $m$, the dimension of the vector $\alpha$, and in particular $e_j \notin \Gamma$. The unit vectors in $\Gamma \setminus \{y\}$ are clearly linearly independent, and the vector $y$, whose $m$ components are either $-1$ or $+1$, cannot be expressed as a linear combination of them: any such combination has $j$-th component $0$, while $y_j \ne 0$. Hence the set of constraint gradients $\Gamma$ is linearly independent. □
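Lemma 2 can also be checked numerically by stacking $y$ together with the unit vectors of the active bounds and computing the rank. The dual point below is hypothetical:

```python
import numpy as np

# Numerical check of Lemma 2: stack the gradient y of the equality constraint
# with the unit vectors e_i of the active bounds. The dual point is hypothetical;
# index j = 1 is free (0 < alpha_1 < C), so e_1 is absent from the stack.
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 2.0
alpha = np.array([0.0, 1.0, 2.0, 0.0])

m = len(y)
active = [i for i in range(m) if alpha[i] == 0.0 or alpha[i] == C]
G = np.vstack([y] + [np.eye(m)[i] for i in active])

# Full row rank: y cannot be written as a combination of the unit vectors,
# because its free component y_1 = 1 would have to come out as 0.
print(np.linalg.matrix_rank(G))
```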
We use the lower case letter $p$ to denote the training input variable and $p^0$ to denote the observed training input data. When the form of the kernel function is known, the following main Theorem investigates how errors in the training input data affect the optimal solution and the corresponding optimal decision function of the SVC model.
Theorem 3. Suppose that $\alpha^*$ is the optimal solution of the dual problem (2) when $p = p^0$, and that the corresponding Lagrange multipliers are $(\lambda^*, s^*, t^*)$. Furthermore, suppose that $s_i^* > 0$ for all $i \in B$ and $t_i^* > 0$ for all $i \in C$. Suppose that the A category input vectors are all support vectors with $0 < \alpha_i^* < C$, and that the set of vectors $\{\Phi(x_i) : i \in A\}$ corresponding to points in category A is linearly independent. Then the following results hold.
1) The optimal solution $\alpha^*$ is unique, and the corresponding multiplier $\lambda^*$ as well as $s^*$ and $t^*$ are all unique.
2) There is a neighbourhood of $p^0$ on which there is a unique continuously differentiable function $\bigl(\alpha^*(p), \lambda^*(p), s^*(p), t^*(p)\bigr)$ such that
a) $\bigl(\alpha^*(p^0), \lambda^*(p^0), s^*(p^0), t^*(p^0)\bigr) = (\alpha^*, \lambda^*, s^*, t^*)$,
b) $\alpha^*(p)$ is a feasible solution to the dual problem (2) for any $p$ in this neighbourhood,
c) the partial derivatives of $\bigl(\alpha^*(p), \lambda^*(p), s^*(p), t^*(p)\bigr)$ with respect to the data parameters $p$ satisfy
$$
\begin{pmatrix}
\nabla^2_{\alpha\alpha} L & y & -I & I \\
y^{\mathrm T} & 0 & 0 & 0 \\
\operatorname{diag}(s^*) & 0 & \operatorname{diag}(\alpha^*) & 0 \\
\operatorname{diag}(t^*) & 0 & 0 & \operatorname{diag}(\alpha^* - C e)
\end{pmatrix}
\begin{pmatrix}
\nabla_p \alpha^*(p) \\ \nabla_p \lambda^*(p) \\ \nabla_p s^*(p) \\ \nabla_p t^*(p)
\end{pmatrix}
= -\begin{pmatrix}
\nabla_p \nabla_\alpha L \\ 0 \\ 0 \\ 0
\end{pmatrix},
$$
where $L(\alpha, \lambda, s, t; p) = W(\alpha; p) + \lambda\, y^{\mathrm T} \alpha - s^{\mathrm T} \alpha + t^{\mathrm T} (\alpha - C e)$ is the
Lagrange function of the dual problem (2), $\nabla_\alpha L$ is the gradient of $L$ with respect to $\alpha$, $\nabla^2_{\alpha\alpha} L = H$ is the Hessian of $L$ with respect to $\alpha$, $\nabla_p \nabla_\alpha L$ is the matrix of mixed second-order partial derivatives of $L$ with respect to $\alpha$ and $p$ (the data enter only through the kernel values $K(x_i, x_j)$), $e = (1, \ldots, 1)^{\mathrm T}$, and $I$ is the $m \times m$ identity matrix.
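As a sanity check on part c), the smooth dependence of the optimal solution on the data can be seen in a hypothetical two-point example with a closed-form dual optimum, comparing a finite difference against the analytic derivative:

```python
# Hypothetical two-point example with a closed-form dual optimum: inputs
# x1, x2 in R with labels +1, -1, linear kernel, and C large enough to be
# inactive. The equality constraint forces alpha_1 = alpha_2 = a, and
# minimizing (1/2) a^2 (x1 - x2)^2 - 2a gives a*(x1, x2) = 2 / (x1 - x2)^2.
def alpha_star(x1, x2):
    return 2.0 / (x1 - x2) ** 2

x1, x2, h = 1.0, 0.0, 1e-6
fd = (alpha_star(x1 + h, x2) - alpha_star(x1 - h, x2)) / (2.0 * h)
analytic = -4.0 / (x1 - x2) ** 3  # d a* / d x1 from the closed form

# The optimum moves smoothly with the data parameter, as part c) asserts.
print(abs(fd - analytic) < 1e-4)
```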
Our results follow directly from the Fiacco Theorem once the following three conditions it requires are verified:
The Second Order Sufficiency Condition for $\alpha^*$,
Linear Independence of Gradients over the Working Constraint Set,
The Strict Complementarity Property.
Proof of the Second Order Sufficiency Condition. Suppose that $\alpha^*$ is the optimal solution to the dual problem. Then there exist multipliers $\lambda^*$, $s^* \ge 0$ and $t^* \ge 0$ satisfying the Karush-Kuhn-Tucker condition:
$$\nabla_\alpha W(\alpha^*) + \lambda^* y - s^* + t^* = 0, \qquad s_i^* \alpha_i^* = 0, \qquad t_i^* (\alpha_i^* - C) = 0, \quad i = 1, \ldots, m,$$
where $\nabla_\alpha W(\alpha^*) = H \alpha^* - e$ and $e = (1, \ldots, 1)^{\mathrm T}$.
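For a hypothetical toy data set whose dual optimum lies in the interior of the box, the Karush-Kuhn-Tucker condition above can be verified directly (complementary slackness forces $s^* = t^* = 0$ there, and stationarity determines $\lambda^*$):

```python
import numpy as np

# KKT residual check at a hypothetical dual optimum alpha* = (1, 1, 1, 1)
# with 0 < alpha_i* < C = 2: complementary slackness forces s* = t* = 0,
# and the stationarity equation then determines lambda*.
X = np.array([[0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.ones(4)

H = np.outer(y, y) * (X @ X.T)  # Hessian of the dual objective
grad = H @ alpha - np.ones(4)   # gradient of W at alpha*: H alpha* - e
lam = -1.0                      # the multiplier solving grad + lam * y = 0

print(np.allclose(grad + lam * y, 0.0), y @ alpha == 0.0)
```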
Because the constraints of (2) are linear in $\alpha$, the Hessian matrix corresponding to the Lagrangian function $L$ becomes the matrix $H$.
For $i \in A$, we have assumed that $0 < \alpha_i^* < C$. For $i \in B$, we have $\alpha_i^* = 0$ and $s_i^* > 0$. For $i \in C$, we have $\alpha_i^* = C$ and $t_i^* > 0$.
Define the set $Z = \{z \in \mathbb{R}^m : z \ne 0,\ y^{\mathrm T} z = 0,\ z_i = 0 \text{ for } i \in B \cup C\}$. Then for any $z \in Z$, we have
$$z^{\mathrm T} H z = z_A^{\mathrm T} H_{AA} z_A = \Bigl\| \sum_{i \in A} z_i y_i \Phi(x_i) \Bigr\|^2 \ge 0,$$
where $H_{AA} = \bigl(y_i y_j K(x_i, x_j)\bigr)_{i, j \in A}$ is the Hessian matrix of the dual objective restricted to the A category and is the upper left sub-matrix of $H$ when the A category points are listed first.
Suppose that $z^{\mathrm T} H z = 0$ for some $z \in Z$. Then $\sum_{i \in A} z_i y_i \Phi(x_i) = 0$, and because the set of vectors $\{\Phi(x_i) : i \in A\}$ corresponding to points in category A is linearly independent and $y_i \ne 0$, we must have $z_i = 0$ for all $i \in A$ and hence $z = 0$. This contradicts the assumption that $z \ne 0$. Hence we must have $z^{\mathrm T} H z > 0$. This implies that $H$ is positive definite on $Z$ and therefore the Second Order Sufficiency Condition is satisfied at $\alpha^*$.
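The second-order argument can be probed numerically: with a Gaussian kernel, the images $\Phi(x_i)$ of distinct inputs are linearly independent. In the sketch below the data are hypothetical and all four points are treated as category A, so the tangent set reduces to $Z = \{z \ne 0 : y^{\mathrm T} z = 0\}$:

```python
import numpy as np

# Probe z^T H z on the tangent set. With a Gaussian kernel, distinct inputs
# have linearly independent images Phi(x_i); treating all four hypothetical
# points as category A, the tangent set is Z = {z != 0 : y^T z = 0}.
X = np.array([[0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def rbf(u, v, gamma=1.0):
    return np.exp(-gamma * np.sum((u - v) ** 2))

m = len(y)
H = np.array([[y[i] * y[j] * rbf(X[i], X[j]) for j in range(m)] for i in range(m)])

rng = np.random.default_rng(0)
vals = []
for _ in range(200):
    z = rng.standard_normal(m)
    z -= (y @ z) / (y @ y) * y        # project onto {z : y^T z = 0}
    vals.append(z @ H @ z / (z @ z))  # Rayleigh quotient, positive on Z

print(min(vals) > 0.0)
```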
Proof of the Linear Independence of Gradients over the Working Constraint Set. This is Lemma 2.
Proof of the Strict Complementarity Property. We need to show that $s_i^*$ and $\alpha_i^*$ cannot be 0 simultaneously, and that $t_i^*$ and $\alpha_i^* - C$ cannot be 0 simultaneously, at any working constraint index of the dual problem. For $i \in A$, we have $0 < \alpha_i^* < C$, and hence the intersection between $A$ and the working constraint set of the dual problem is empty because of the assumption $0 < \alpha_i^* < C$. For $i \in B$, we have $\alpha_i^* = 0$ and multiplier $s_i^* > 0$. For $i \in C$, we have $\alpha_i^* = C$ and multiplier $t_i^* > 0$. Therefore the Strict Complementarity Property holds. □
3. Conclusion

The support vector classifier plays an important role in machine learning and data mining. Due to the standard results that connect the primary problem and its dual problem, analysis of the primary problem can be carried out by working on its dual problem. Our main result establishes the equation for solving the partial derivatives of the optimal solution, and hence of the corresponding optimal decision function, with respect to the data parameters. Because a derivative measures a rate of change, and a large derivative implies a large rate of change and hence a sensitive and unstable solution, our main result provides the foundation for quantitative analysis based on these derivatives.
Acknowledgements

This work is funded with grants from the Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions (CIT&TCD201404080) and the New Start Academic Research Projects of Beijing Union University. Xikui Wang acknowledges research support from the Natural Sciences and Engineering Research Council (NSERC) of Canada and from the University of Manitoba.
References

Cristianini, N. and Shawe-Taylor, J. (2000) An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge.