Stainless steel is an excellent high-performance alloy steel that resists corrosion in air and in chemically corrosive media, and it has a wide range of application prospects. However, stainless steel has high toughness, low thermal conductivity, and severe work hardening, which lead to large cutting forces, high cutting temperatures, and tools that are prone to adhesion and wear. Stainless steel is therefore a difficult-to-machine metal material. In this paper, a support vector machine (SVM) and particle swarm optimization (PSO) are used to optimize the cutting parameters for turning 304 stainless steel. The cutting parameters are the three elements of cutting: cutting speed (vc), feed (f), and depth of cut (ap). The optimization target is the surface roughness of the workpiece, so that the expected surface quality can be achieved.
2. Support Vector Machine Theory (SVM)
Data-based machine learning is an important aspect of modern intelligent technology. Machine learning is essentially the approximation of the true model of a problem: starting from observed data (samples), it seeks rules that can be used to predict unknown data.
Support vector machine (SVM) is a machine learning method developed in the mid-1990s. It is based on statistical learning theory and improves the generalization ability of the learning machine by seeking structural risk minimization, jointly minimizing the empirical risk and the confidence interval. Therefore, good statistical laws can be obtained even when the number of samples is small. Because of its outstanding learning performance, SVM has attracted many scholars, become a research hotspot in the machine learning community, and been successfully applied in many fields, such as face recognition, handwritten digit recognition, automatic text classification, and machine translation.
The basic idea of SVM is to use a kernel function to map the input sample space to a high-dimensional feature space, find an optimal classification surface in that space, and thereby obtain the nonlinear relationship between the input and output variables.
Assume a training data set T = {(x1, y1), (x2, y2), ..., (xN, yN)} in a given feature space, where xi ∈ R^n is the i-th feature vector, also called an example, and yi ∈ {+1, −1} is the class label of xi: when yi = +1, xi is called a positive example; when yi = −1, xi is called a negative example. (xi, yi) is a sample point. The key of the algorithm is to establish a classification hyperplane as the decision surface that maximizes the separation margin between the positive and negative examples. Finding the classification hyperplane amounts to finding the function:

f(x) = w · x + b (1)
where w is the normal vector of the hyperplane, b is the constant (bias) term of the hyperplane, xi is a training sample, and yi is its class label.
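As a minimal sketch of the decision function f(x) = w · x + b described above, the snippet below classifies a sample by the sign of f(x). The weight vector w and bias b here are illustrative values, not parameters learned from the paper's data.

```python
def decision(w, b, x):
    """Score of sample x relative to the hyperplane w . x + b = 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    """Class label: +1 (positive example) if f(x) >= 0, else -1."""
    return 1 if decision(w, b, x) >= 0 else -1

w, b = [1.0, -2.0], 0.5
print(classify(w, b, [3.0, 1.0]))   # f = 3 - 2 + 0.5 = 1.5  -> +1
print(classify(w, b, [0.0, 2.0]))   # f = -4 + 0.5 = -3.5    -> -1
```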
In practice, the data may be linearly inseparable. In that case, the samples can be mapped to a high-dimensional space; however, such a mapping may have a particularly large dimension, which makes computation difficult. Here the kernel function plays an important role. Its value lies in the fact that, although the features are still mapped from a low-dimensional to a high-dimensional space, the kernel carries out the computation in the low-dimensional space in advance, while the classification effect is essentially manifested in the high-dimensional space, thus avoiding complicated calculations performed directly in high dimensions.
In practical applications, we often rely on prior domain theoretical knowledge to select an effective kernel function. The widely used kernel functions mainly include:
the linear kernel, the polynomial kernel, and the Gaussian (RBF) kernel:

K(x, z) = x · z (2)
K(x, z) = (x · z + 1)^d (3)
K(x, z) = exp(−‖x − z‖² / (2σ²)) (4)
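The three common kernels can be sketched directly; the implementations below follow the standard definitions (polynomial degree d and bandwidth σ are illustrative defaults, not values used in the paper).

```python
import math

def linear_kernel(x, z):
    # K(x, z) = x . z
    return sum(a * b for a, b in zip(x, z))

def polynomial_kernel(x, z, d=2, c=1.0):
    # K(x, z) = (x . z + c)^d
    return (linear_kernel(x, z) + c) ** d

def gaussian_kernel(x, z, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / (2 * sigma ** 2))

x, z = [1.0, 2.0], [2.0, 0.0]
print(linear_kernel(x, z))       # 2.0
print(polynomial_kernel(x, z))   # (2 + 1)^2 = 9.0
print(gaussian_kernel(x, z))     # exp(-5/2) ~ 0.082
```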
Depending on the problem and the data, different parameter choices effectively yield different kernel functions. At the same time, the choice of kernel function parameters directly affects the prediction accuracy and classification performance of the support vector machine.
Particle Swarm Optimization Algorithm (PSO)
Particle swarm optimization (PSO) was first proposed by Kennedy and Eberhart in 1995, and its basic concept originated from the study of the foraging behavior of bird flocks. The main principle of the particle swarm algorithm is as follows: based on observations of the movement of animal groups, the information sharing among individuals in the group is exploited so that the movement of the whole group evolves from disorder to order in the problem-solving space, and the optimal solution is finally obtained. The algorithm is a global parallel optimization algorithm; compared with other optimization algorithms, it has the advantages of short evolution time and high optimization accuracy. Since it was proposed, the particle swarm algorithm has been successfully applied to the traveling salesman problem, the capacitor allocation problem, machine learning, and other related fields.
The PSO algorithm simulates the intelligent search of a community generated by the cooperation and competition among birds in a flock. The algorithm treats candidate solutions of the optimization problem as particles in the search space; these particles have no volume or mass, but fly at a definite velocity. Each particle evaluates its own fitness with an appropriate objective function and dynamically adjusts its flight velocity and position according to the best fitness value found so far by itself and by the community, finally converging to the global best. The particle velocity V and position X are updated as shown in formula (5):

V(k+1) = w·V(k) + c1·r1·(Pbest − X(k)) + c2·r2·(Gbest − X(k))
X(k+1) = X(k) + V(k+1) (5)

In the formula, w is the inertia weight; k is the current iteration number; r1 and r2 are random numbers uniformly distributed in the interval [0, 1]; c1 and c2 are learning factors; Pbest is the particle's individual optimal solution up to the k-th iteration, and Gbest is the global optimal solution found so far.
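The velocity and position updates above can be sketched as a compact PSO minimizer. This is a generic illustration, not the paper's implementation: the inertia weight, learning factors, swarm size, and iteration count below are common textbook defaults, and the sphere function is used only as a test objective.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds [(lo, hi), ...] using the PSO updates above."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Random initial positions, zero initial velocities.
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # individual best positions
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # V(k+1) = w V(k) + c1 r1 (Pbest - X) + c2 r2 (Gbest - X)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # X(k+1) = X(k) + V(k+1), clamped to the search bounds
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best_x, best_val = pso(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
print(best_val)  # close to 0
```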
3. Optimize the Design Process
3.1. Data Preparation
This article has 49 sets of experimental data, some of which are shown in Table 1.
Among them, ap, vc, and f are used as input variables, and the surface roughness Ra is used as the output variable; the samples are randomly divided into a training set and a test set. After the 49 groups of samples were randomly shuffled, the first 40 groups were used as the training set, and the remaining 9 groups were used as the test set. The overall algorithm flow is shown in Figure 1.
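The shuffle-and-split step can be sketched as follows; the samples here are placeholder (ap, vc, f, Ra) tuples standing in for the 49 experimental records, and the seed is arbitrary.

```python
import random

rng = random.Random(42)  # arbitrary seed for reproducibility
samples = [(i, i, i, i) for i in range(49)]  # stand-in for (ap, vc, f, Ra) rows
rng.shuffle(samples)

# First 40 shuffled groups -> training set; remaining 9 -> test set.
train, test = samples[:40], samples[40:]
print(len(train), len(test))  # 40 9
```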
3.2. SVM Modeling
The SVM regression toolbox in MATLAB is used for modeling; the input training samples are automatically standardized by the solver to remove the influence of dimensions. The Gaussian kernel function is adopted, and the hyperparameters are tuned automatically by the optimization solver. The mean square error and the coefficient of determination are used as indicators to evaluate the accuracy of the model. The modeling process is shown in Figure 2.
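The two evaluation indicators used here follow their standard definitions, sketched below; the toy data are illustrative, not the paper's measurements (the paper reports MSE = 0.059 and R² = 0.908 on its own test set).

```python
def mse(y_true, y_pred):
    """Mean square error: average squared residual."""
    n = len(y_true)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Toy roughness values (um), purely illustrative.
y_true = [1.2, 0.8, 1.5, 1.1]
y_pred = [1.1, 0.9, 1.4, 1.2]
print(round(mse(y_true, y_pred), 3))  # 0.01
print(round(r2(y_true, y_pred), 3))   # 0.84
```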
The test set is input into the model for testing, and the comparison of model prediction results is shown in Figure 3. It can be seen from Figure 3 that the SVM regression model has high accuracy, with mean square error MSE = 0.059 and coefficient of determination R² = 0.908, so it can be used to characterize the mapping relationship between the three elements of cutting and the surface roughness.
Table 1. Experimental sample data.
Figure 1. SVM-PSO algorithm flow.
Figure 2. SVM modeling process.
Figure 3. SVM modeling results.
Figure 4. PSO optimization process.
4. Conclusions
In the current cutting process, the design of cutting parameters involves a great deal of subjectivity and experience. In order to achieve the expected surface roughness of the machined workpiece, this paper uses a support vector machine (SVM) to establish a model between the three elements of cutting and the surface roughness, and then uses the particle swarm algorithm, with the surface roughness as the optimization goal, to analyze and optimize the parameters. The following conclusions are obtained:
1) For difficult-to-machine materials such as 304 stainless steel, the support vector machine (SVM) is used for modeling. The resulting mapping model between the cutting parameters and the surface roughness has high accuracy and small error, and can provide a reference for parameter optimization or theoretical derivation.
2) The particle swarm algorithm (PSO) is used to optimize the cutting parameters: a set of feasible optimized cutting parameters can be obtained for an expected surface roughness value, which provides a rational basis for process engineers when designing machining parameters and saves experimental costs.
3) Compared with other intelligent modeling and optimization algorithms, SVM-PSO is more convenient, concise, and efficient, and it provides a new idea for the optimal design of process parameters.
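The overall SVM-PSO idea can be sketched end to end: PSO searches the cutting parameters (vc, f, ap) so that a roughness predictor hits a target Ra. Note that `predict_ra` below is a made-up analytic stand-in for the trained SVM model, and the parameter bounds and target are illustrative values, not taken from the paper.

```python
import random

def predict_ra(vc, f, ap):
    # Hypothetical surrogate: roughness rises with feed and depth of cut,
    # falls slightly with cutting speed. NOT the paper's fitted model.
    return 2.0 * f ** 0.8 + 0.3 * ap - 0.002 * vc + 0.5

def fitness(p, target=1.0):
    # Distance between predicted and target roughness; PSO minimizes this.
    vc, f, ap = p
    return abs(predict_ra(vc, f, ap) - target)

# Illustrative bounds: vc [m/min], f [mm/r], ap [mm].
bounds = [(60.0, 180.0), (0.05, 0.3), (0.2, 1.5)]
rng = random.Random(1)
X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(15)]
V = [[0.0] * 3 for _ in X]
pbest = [x[:] for x in X]
pval = [fitness(x) for x in X]
g = min(range(len(X)), key=lambda i: pval[i])
gbest, gval = pbest[g][:], pval[g]

for _ in range(150):
    for i, x in enumerate(X):
        for d in range(3):
            r1, r2 = rng.random(), rng.random()
            V[i][d] = (0.7 * V[i][d]
                       + 1.5 * r1 * (pbest[i][d] - x[d])
                       + 1.5 * r2 * (gbest[d] - x[d]))
            x[d] = min(max(x[d] + V[i][d], bounds[d][0]), bounds[d][1])
        v = fitness(x)
        if v < pval[i]:
            pbest[i], pval[i] = x[:], v
            if v < gval:
                gbest, gval = x[:], v

print(gbest)  # a feasible (vc, f, ap) whose predicted Ra is near the target
print(gval)   # residual |Ra_pred - Ra_target|, close to 0
```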
Acknowledgements
I would like to express my gratitude to all those who helped me during the writing of this thesis. I gratefully acknowledge the help of my coworker Liu, and I deeply appreciate his patient encouragement and professional suggestions during my thesis writing.
References
 Kennedy, J. and Eberhart, R. (1995) Particle Swarm Optimization. Proceedings of IEEE International Conference on Neural Networks, Perth, 27 November-1 December 1995, 1942-1948. https://doi.org/10.1109/ICNN.1995.488968
Eberhart, R. and Kennedy, J. (1995) A New Optimizer Using Particle Swarm Theory. Proceedings of the 6th International Symposium on Micro Machine and Human Science, Nagoya, 4-6 October 1995, 39-43. https://doi.org/10.1109/MHS.1995.494215
Wang, X.B., Luo, F.J., Sang, C.Y., et al. (2017) Personalized Movie Recommendation System Based on Support Vector Machine and Improved Particle Swarm Optimization. IEICE Transactions on Information and Systems, E100-D, 285-293. https://doi.org/10.1587/transinf.2016EDP7054