Response Surface Methodology and Artificial Neural Network Methods Comparative Assessment for Fuel Rich and Fuel Lean Catalytic Combustion


1. Introduction

Catalytic (heterogeneous) combustion has been extensively investigated in recent years. The catalytic oxidation of hydrocarbons has become the focus of much basic and applied catalysis research because of its increasing importance for the burner design of industrial furnaces and for power-generating gas turbines [1] [2]. For these applications, high-temperature catalytic combustion is regarded as a highly efficient and clean energy system. It has been recognized that noble metals possess the highest catalytic activities, initiating the catalytic oxidation of fuels at relatively low reaction temperatures [3] [4]. Catalytic combustion greatly extends flame stability limits at very fuel-lean equivalence ratios [5] and results in ultra-low NOx emissions [6]. Fuel-rich catalytic combustion not only performs catalytic partial oxidation but also acts as a preheater and stabilizer for the subsequent homogeneous combustion zone [7] [8]. The thermal structure of stabilized confined turbulent gaseous diffusion flames using Pt/Al_{2}O_{3} and Pd/Al_{2}O_{3} catalytic disc burners situated in the combustion domain was experimentally investigated under both fuel-rich and fuel-lean conditions at modified equivalence ratios Φ = 0.75 and 0.25, respectively [9]. Under the fuel-rich condition, the thermal structure of the catalytic flame developing over the Pt disc indicates higher activity in the early upstream region of the main reaction zone than the flame developing over the Pd disc burner. Under the fuel-lean condition, the flame operating over the Pd catalytic disc burner shows higher temperatures very near the flame core than the flame developing over the Pt catalytic disc burner.

Wierzbicki *et al.* [10] presented a review of the progress in catalytic conversion of JP-8 fuel and its surrogates made over the last decade. The effects of different types of catalyst, support materials and preparation methods on reforming were discussed. Sulfur-tolerant catalysts and the mechanisms of catalyst poisoning are now well understood, while the role of the hydrocarbons present in jet fuel during reforming remains a challenge. High-fidelity numerical simulations, so far limited to gas-phase non-catalytic reforming, were also examined. The combustion characteristics and stability of methane-air mixtures over platinum in catalytic micro-combustors were studied by Chen *et al.* [11] using a two-dimensional computational fluid dynamics model with detailed chemistry and transport. It was shown that the combustor dimensions are vital in determining the combustion stability of the system. The investigation revealed that the optimal combustor length depends on the wall thermal conductivity: shorter combustors increase the stability against blowout for high-conductivity walls, whereas longer combustors increase it for low-conductivity walls.

Progress in catalytic combustion depends on advances in catalyst technology and in multi-dimensional modeling for reactor design. The authors of [12] investigated the premixed combustion of a methane/air mixture in heat-recuperating micro-combustors made of different materials. The effects of the wall parameters on the combustion characteristics of a CH_{4}/air mixture over a rhodium catalyst were explored numerically. The results show that as the thermal conductivity of the wall material decreases, the temperature of the reaction region increases and hot spots become more pronounced. Arani *et al.* [13] carried out three-dimensional direct numerical simulations (DNS) with detailed heterogeneous and homogeneous chemistry and transport to investigate the turbulent combustion of fuel-lean hydrogen/air mixtures over a platinum-coated channel where catalytic reactions occurred. Homogeneous ignition, i.e. gas-phase combustion, was concentrated close to the walls. Hydrogen was incompletely converted within the gaseous combustion zones, and the leaking fuel reacted on the catalytic walls, leading to combined hetero-/homogeneous combustion over the entire post-ignition domain. Furthermore, Arani *et al.* [14] performed another three-dimensional direct numerical simulation of turbulent catalytic and gas-phase H_{2}/air combustion at a fuel-lean equivalence ratio Φ = 0.18 in platinum-coated planar channels at two Reynolds numbers, Re = 182 and 385, using a detailed hetero-/homogeneous chemical reaction mechanism. It was observed that the higher turbulence intensity at Re = 385 resulted in a larger near-wall hydrogen excess, yielding shorter homogeneous ignition distances than at the lower Re. The coupling of catalytic and gas-phase chemistry inhibited homogeneous ignition and was characterized by intense catalytic reaction rates, findings applicable to practical catalytic reactors.

Recently, Pan *et al.* [15] investigated experimentally and numerically the hetero-/homogeneous reaction of an H_{2}/air mixture in a micro catalytic combustor. The distribution of OH radicals in the combustor was observed by planar laser-induced fluorescence, and temperature variations in the combustor were measured to reveal the transition between reaction types. The critical equivalence ratio at which the coupled hetero-/homogeneous reaction transforms into a pure heterogeneous reaction is *Φ*_{A}, while that for the transition from pure heterogeneous back to coupled hetero-/homogeneous reaction is *Φ*_{B}. At all combustor heights and mixed-gas flow rates examined, *Φ*_{A} is always less than *Φ*_{B}. As the combustor height increases, *Φ*_{A} decreases while *Φ*_{B} increases; both decrease with increasing mixed-gas flow rate. Heat loss from the combustor outer wall has an important effect on the transformation of the reaction type. The research introduced by Zang *et al.* [16] clarified the hazards of volatile organic compounds (VOCs), which have become harmful environmental pollutants that cannot be overlooked given the rapid development of industry. Among the common catalytic combustion catalysts, research on both noble-metal and non-noble-metal catalysts has achieved progress in the elimination of VOCs, and perovskite catalysts, as one class of non-noble catalysts, have played an important role in catalytic combustion in recent years. This work analyzed and elaborated the reaction kinetics and the QSAR/QSPR (quantitative structure-activity relationship/quantitative structure-property relationship) models for the introduction of structural properties and reaction mechanisms.

More recently, He *et al.* [17] presented a literature review on catalytic methane combustion. This study revealed that the presence of catalysts enables complete oxidation of methane at much lower temperatures, typically 500˚C, so that the formation of pollutants can be largely avoided. Various aspects were discussed, including the catalyst types, the reaction mechanism, kinetic characteristics, the effects of various influencing operating factors, and the different reactor types proposed and tested. The study may serve as an essential reference on achievable performance for future applications in different industrial sectors. Moreover, a catalyst preparation method consisting of slurry wash-coating with *γ*Al_{2}O_{3} followed by impregnating platinum on the micro-reactor wall has been investigated by He *et al.* [18]. The effect of various factors in the preparation procedure on the adhesion of the *γ*Al_{2}O_{3} wash-coat was studied. Well-adhered Pt/Al_{2}O_{3} catalysts were applied in a micro-reactor and investigated in terms of their performance in catalytic methane combustion. It was shown that the reaction temperature had a greater influence on the methane conversion than the flow rate, and that favorable coverage of methane and oxygen on the catalyst surface is essential for good catalytic performance, besides achieving favorable methane conversion and sufficient heat release for the potential use of such micro-reactors in energy-related applications.

Artificial neural networks (ANNs) and response surface methodology (RSM) are important approaches in the field of process modeling and optimization. These modeling methods assess the relations between the output (response or target variable) and the input variables (experimental operating factors) of a process by means of experimentally derived data. The derived models are then used to approximate the optimum conditions that minimize or maximize the target variable (dependent variable) together with the corresponding independent variables [19]. Neither RSM nor ANN requires the exact expressions or the physical meaning of the system under exploration, so they have been widely employed in diverse fields [20] [21].

Several researchers have carried out comparative analyses of RSM and ANN to investigate various aspects of such processes [22].

Ahmadpour *et al.* [19] demonstrated the higher accuracy of ANN over the response surface model in their investigation of spent caustic wastewater treatment in a photocatalytic reactor. Comparisons of ANN with classical modeling techniques such as RSM [23] showed the superiority of ANN as a modeling technique for analyzing non-linear relationships in data sets, which consequently provides good fitting of the data as well as better predictive ability. They stated that ANN is suitable for engineering research since most problems are non-linear in nature.

The multi-layer perceptron (MLP_ANN) models were superior to the regression model, achieving a relatively lower prediction error for modeling Al6082-T6 alloy drilling [24] and better modeling accuracy than RSM for prednisone release from a multipartite system [25], in addition to superiority in predicting and optimizing the process of ultrasound-assisted extraction [26].

RSM and ANN have been studied and compared for modeling the highly nonlinear responses found in impact-related problems. Despite the computational cost of ANN, these studies concluded the superiority of ANN over RSM in such optimization problems [27]. Likewise, Qadir *et al.* [28] noted that ANN is a more valuable tool for interpreting the relationship between the input and output data of augmented experimentations, and an efficient algorithm for identifying any function with a limited number of discontinuities. Moreover, Habeeb *et al.* [29] found that applying ANN to predictive modeling of the adsorption process helps in understanding the non-linear relationship between the input and output variables, besides enhancing the monitoring of the process variables for optimum performance.

Recently, Cisternas *et al.* [30], in their study of trends in modeling, stated that ANN and RSM models substantially reduce the computational cost involved in simulation and sensitivity analyses. Ayodele *et al.* [31] demonstrated the robustness of the back-propagation artificial neural network for predictive modeling of the photodegradation of organic pollutants, alongside determining the level of importance of the process parameters. Srinidhi *et al.* [32] stated that ANN shows promise for tackling multivariate and complex modeling problems; ANN algorithms are employed for their high sensitivity to changes in variables, accommodation of a large number of variables, flexibility, ease of network construction, and the availability of diverse adjustable functions for precise modeling and prediction. Moreover, Agu *et al.* [33], in the modeling and optimization of Terminalia Catappa L. Kernel Oil (TCKO) extraction, found that ANN was a better and more effective tool than RSM, as indicated by its higher *R*^{2} and F_Ratio together with lower error-analysis parameters. Mohd Zin *et al.* [21], in the optimization of a microbial decolorization process, declared that on a comparative scale the ANN model has higher prediction accuracy and fitness than the RSM model, as proven by the approximated *R*^{2} and AAD values. Beigzadeh and Rastegar [34] indicated the high accuracy of ANN modeling in estimating the target variable in their assessment of a biosorption process; this reduces the need for more laboratory data, allowing the determination of the optimal parameters for designing equipment.

There have been many works in the literature on catalytic combustion; however, there have been fewer studies on the different mathematical modeling approaches and comparative analyses. Mathematical modeling studies are necessary to understand the process and to show the optimization alternatives for the process [35].

*The present study deals with* the evaluation of the predictive competencies of the two methodologies, RSM and ANN, for the formerly reported experimental data on the thermal structure of catalytically stabilized confined turbulent gaseous diffusion flames over Pt/*γ*Al_{2}O_{3} and Pd/*γ*Al_{2}O_{3} catalytic disc burners under fuel-rich and fuel-lean conditions [9]. This has been achieved by comparing the values of the coefficient of determination (R^{2}) and the F_Ratio, besides various error-analysis parameters. Furthermore, the ANN method has been employed to illustrate the effect of the input flame parameters on the response in three and two dimensions and to show the location of the optimum.

2. Response Surface Methodology

The RSM is a resourceful tool composed of mathematical and statistical techniques for designing experiments, building models, evaluating the effects of variables, and searching for the optimum conditions of variables to predict targeted responses, as well as for evaluating the most influential factors on the chosen responses [33] [34] [36].

RSM, since its introduction by Box and Wilson in the early 1950s, has been extensively utilized for the modeling and optimization of engineering processes and studies in which many process variables influence the response(s) [33] [37]. The structured nature of RSM is useful for exhibiting the contribution of each factor through the coefficients of the regression model. This ability is powerful in identifying insignificant main factors, interaction factors or quadratic terms in the model, and thereby can reduce the complexity of the problem [38].

One of the most important advantages of RSM is the reduction in the number of experimental runs: it is time-effective and inexpensive, yet still capable of attaining maximum efficiency, providing acceptable results, and evaluating the most influential factors. RSM also has the advantage of generating a second-order polynomial equation, which relates the dependent variable(s) or response(s) to the independent process parameters. RSM is beneficial for determining the effects of each variable alone or in combination, as it considers all the input variables at the same time, so interactions between variables are taken into account [39] [40] [41] [42].

The graphical perspective of the generated mathematical model has led to the term Response Surface Methodology [43]. These graphic drawings of the shape of the surfaces allow a visual interpretation of the functional relations between the response and the experimental variables [44] [45].

However, any form of non-linear relationship between the variables may reduce the prediction accuracy of RSM. An increased number of variables makes the analysis very time-consuming and significantly decreases the accuracy of this method. Moreover, RSM-based models are accurate only over a limited range of input process parameters, which limits their use for highly non-linear processes; in addition, RSM cannot include uncontrollable variables [46] [47].

RSM involves the following steps: 1) Selection of the independent variables, responses and experimental design; 2) Execution of experiments and collection of results; 3) Mathematical modeling of the experimental data by polynomial equations, with the best-fitting response selected through analysis of variance; 4) Drawing of response surfaces using 2D or 3D plots; and finally 5) Evaluation of the main and interaction effects of the variables and identification of the optimal conditions [48] [49].

Before performing the regression analysis, the variables should be coded to eliminate the effect of the differing units and ranges of the natural independent variables over the experimental domain in which the parameters have been tested. This allows parameters of different magnitudes to be investigated more evenly in a range between −1 and +1. The equation below is the one most frequently utilized for coding [50] [51] [52]:

$\text{coded value}=\frac{\text{actual value}-\text{mean}}{\text{half of range}}$ (1)
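As a concrete illustration, Equation (1) can be sketched in Python. The numeric ranges below for the axial distance *x* (30 to 225 mm) are those implied by Equation (7) later in this section; the function names are illustrative only.

```python
# Sketch of the coding transform of Equation (1) and its inverse.
def code(actual, low, high):
    """Map an actual value onto the coded [-1, +1] scale."""
    mean = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return (actual - mean) / half_range

def decode(coded, low, high):
    """Inverse transform: back from the coded scale to the original units."""
    mean = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return coded * half_range + mean

# For x in [30, 225] mm: mean = 127.5, half range = 97.5, matching Equation (7).
X = code(225.0, 30.0, 225.0)   # upper bound of the range codes to +1
```

The same `decode` function is what later recovers predicted outputs in original units after the model works on the coded scale.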

A second order equation of the following form has been established for the functional relationships between the coded independent variables and dependent variables using multiple regression technique [52] [53]:

$Y={\beta}_{0}+{\displaystyle {\sum}_{i=1}^{n}{\beta}_{i}{X}_{i}}+{\displaystyle {\sum}_{i=1}^{n-1}{\displaystyle {\sum}_{j=i+1}^{n}{\beta}_{ij}{X}_{i}{X}_{j}}}+{\displaystyle {\sum}_{i=1}^{n}{\beta}_{ii}{X}_{i}^{2}}$ (2)
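A minimal sketch of how a second-order model of the form of Equation (2) is fitted by multiple regression, here for two coded factors and synthetic (not experimental) data; NumPy's least-squares routine stands in for a dedicated statistics package.

```python
import numpy as np

# Synthetic response built from known coefficients, so the fit can be checked.
rng = np.random.default_rng(0)
X1 = rng.uniform(-1, 1, 50)
X2 = rng.uniform(-1, 1, 50)
Y = 3.0 + 1.5*X1 - 2.0*X2 + 0.8*X1*X2 + 0.5*X1**2 - 1.2*X2**2

# Design matrix: intercept, linear, interaction, and quadratic terms of Eq. (2).
D = np.column_stack([np.ones_like(X1), X1, X2, X1*X2, X1**2, X2**2])
beta, *_ = np.linalg.lstsq(D, Y, rcond=None)

# Coefficient of determination R^2 of the fitted surface.
Y_pred = D @ beta
ss_res = np.sum((Y - Y_pred)**2)
ss_tot = np.sum((Y - Y.mean())**2)
r2 = 1.0 - ss_res / ss_tot
```

Because the synthetic data are noise-free, the least-squares solution recovers the chosen coefficients and R² is essentially 1; with real experimental data, R² and the significance of each β would be judged through analysis of variance.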

Details of this method have been dealt with in our previous papers [54] [55] [56].

2.1. Artificial Neural Networks

Artificial neural networks (ANNs) are generic mathematical models that lie at the intersection of computer science, artificial intelligence, and neuroscience. They classify data, learn models, and make predictions. ANN is an efficient algorithm for identifying any function with a limited number of discontinuities, and a valuable tool for interpreting the relationship between the input and output data of augmented experimentations [57]. The capability of ANN to investigate and rationalize the performance of any complicated, non-linear process makes it an important modeling tool [22].

Ever since the introduction of the first mathematical neuron model by McCulloch and Pitts in 1943 [58], ANNs have been extensively used in many areas as a powerful and reliable tool serving data mining and numerical applications, because of their powerful control over regulatory parameters for pattern recognition and classification. ANNs have thus been around since the mid-20th century, but blossomed first in the 1980s with the introduction of back propagation and then again in the 2000s with the development of deep learning; the latter has dominated the machine learning and artificial intelligence scene in recent years [57]. Over the years, ANNs have been applied in modeling and prediction, control, optimization, classification and fault detection, besides solving numerous problems in engineering, science, medicine, mathematics, neurology, metrology, psychology and biology [33].

An ANN is a computational mechanism able to acquire, represent, and compute a mapping from one multivariate space of information to another, given a set of data representing that mapping. ANNs are designed to simulate the human brain's way of analyzing data by learning from experience. Like the human brain, ANNs are capable of processing multidimensional, non-linear, clustered and imprecise information, and can extract patterns from nonlinear, complex, noisy or fuzzy data sets to detect trends with high accuracy. ANN algorithms are employed for their high sensitivity to changes in variables, accommodation of a large number of variables, flexibility, ease of network construction, and the availability of diverse adjustable functions for precise modeling and prediction.

ANN substantially reduces the computational cost involved in simulation and sensitivity analyses. It can therefore be used to decode complicated real-world problems that are sometimes challenging to evaluate using statistical approaches, without the need for complicated equations, and is capable of exploring regions that are otherwise omitted when using statistical approaches. Using its many parameters (weights and biases), an ANN is able to predict the output of the model with high accuracy, which reduces the need for more laboratory data and allows the determination of the optimal parameters for designing equipment. ANNs are particularly well suited to descriptive problems for which large amounts of data are cheaply available. Recently, ANN has been developed as an alternative to RSM for complex non-linear multivariate modeling. ANN learns from training examples and does not need any prior knowledge of the correlations between the targeted responses. Compared to RSM, ANN can be a powerful tool offering higher accuracy and efficiency in fitting experimental responses, prediction, and process modeling [46] [47] [49] [57] [59] [60] [61].

ANN is a colossal structure of interconnected networks based on a simplified analogy to the behavior of the human brain. It consists of numerous individual elements called neurons, mathematically represented by relatively simple yet flexible functions, such as linear or sigmoid functions, capable of performing parallel computations for data processing. These processing units communicate with each other by means of weighted connections, corresponding to the synapses of the brain [37] [62].

Because the appropriate model is not known a priori, agreement with the experimental results is achieved by adjusting the structure of the network: the number of neuron layers, the type and number of neurons in each layer, the type of connection between neurons, the excitation function, and the weights of the interconnections among internal nodes. The accuracy of the neural network is mainly affected by the hidden layers, the number of nodes, the training algorithm, the learning rate, and the transfer function. Too many hidden layers and neurons can lead to network complexity and overfitting [34] [40].

For a specific configuration of the network and for a given set of input-output data, the so-called training of the network consists of adjusting its parameters so that the network reproduces the input-output data as accurately as possible. Each iteration of the training process is called an epoch and is composed of a forward activation to produce a solution and the backward propagation of the calculated error to adjust the weights [62] [63] [64].

2.2. Neuron Model

The most commonly used transfer functions for multilayer networks are presented below in Figure 1(a) & Figure 1(b).

Figure 1. (a) *Tansig* transfer function; (b) *Purelin* transfer function; (c) Elementary neuron.

Figure 1(a) portrays the differentiable tansig transfer function *f*, which generates an output between −1 and 1 as the neuron's net input goes from negative infinity to positive infinity. Figure 1(b) displays the linear transfer function purelin. Figure 1(c) depicts an elementary neuron with R inputs, where each input is weighted with an appropriate weight *w*, and the sum of the weighted inputs and the bias forms the input to the transfer function *f* [65].

2.3. Feed Forward Neural Network

The most widely used network type for approximation problems is the multi-layer perceptron (MLP), also called the feed-forward back-propagation network (FFBPN), Figure 2(a) & Figure 2(b) [31] [65]. The feed-forward configuration is chosen for its wide application in industrial processes and its ease of implementation in a variety of processes [29] [66]. Figure 3 presents the general procedure for back-propagation artificial neural network (BPANN) modeling [31].

Figure 2(a) & Figure 2(b) depict a two-layer tansig/purelin network. Feed-forward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons, along with an input layer. Employing nonlinear transfer functions allows the network to learn nonlinear relationships between input and output vectors. The linear output layer is most frequently used for function-fitting (nonlinear regression) problems [65].


Figure 2. (a) Model architecture of BPANN [31]; (b) feed-forward network [65].

Figure 3. Procedure for BPANN modeling [31].

The number of input neurons equals the number of independent variables of the system, and the number of output neurons equals the number of responses of the system. Every hidden unit is connected to each input unit and to the output layer; there is no communication between neurons in the same layer, and information moves only in the forward direction, from the input towards the output layer [32] [62] [67].

Generally, a combination of sigmoid activation functions for the hidden layer, a linear function for the output layer, and linear activation functions for the input layer (where input values are simply passed on to the neurons in the next layer) is used for approximating functions with minor discontinuities. Along the way, each value is multiplied by the relevant weight, which characterizes the connection between neurons of the layers, so that a weighted sum is passed to the hidden-layer neurons. A bias factor is added to the weighted sum, which allows, for instance, an activation of the neuron even when a null value is passed to it. Employing the tangent sigmoid function in the hidden layer, a value between −1 and +1 results from processing the weighted sum and bias. The resulting values are transmitted to the output layer. The output neurons add their own biases to the weighted sum they receive and return the network response for the input data provided to the network. The network parameters to be determined during the training process are the weights, which characterize the connections between the neurons, and the bias of each neuron [32] [42] [62].

Normalization of the input variables through the codification of Equation (1) avoids numerical overflows due to very large or very small weights, prevents a mismatch between the influence of some input values on the network weights and biases [68], and avoids problems such as reduced accuracy and network instabilities during the training process [69]. The output values obtained from the ANN are also in the range of −1 to 1, and by reversing the normalization process they are transformed back to values in their original units [46].

In back-propagation training, the error is determined by comparing the network output with the desired response, and this error is propagated back to the preceding hidden and input layers so that the necessary corrections can be made in the next training passes. The network training ends when the error falls below a value specified by the user [70].

To avoid overfitting the neural network model, the input-output experimental data are divided into three groups that play different roles in model formation: the training set, the validation set and the test set. The training set determines the parameters of the neural network model; the validation set supplies a single evaluation index computed between epochs to select the model parameters optimally and to halt the training process if the network error starts to increase due to overfitting; and the test set verifies the fitting ability of the neural network model at the end of training [40] [62] [66] [69]. To avoid misleading performance results, the training, validation, and test sets should be sampled without replacement (no data points are shared between sets) [57].
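The three-way split described above can be sketched as follows. The 70/15/15 proportions are those used later in this study; taking the subsets from a single random permutation guarantees sampling without replacement, so the sets are disjoint.

```python
import numpy as np

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Return disjoint train / validation / test index arrays for n samples."""
    idx = np.random.default_rng(seed).permutation(n)  # shuffle 0..n-1 once
    n_train = int(round(train * n))
    n_val = int(round(val * n))
    # Consecutive slices of one permutation cannot share any index.
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(100)  # 70 training, 15 validation, 15 test indices
```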

The MATLAB neural network toolbox has been employed for generating, training and prediction with the ANNs. The Levenberg-Marquardt (LM) method has been utilized for training because of its fast convergence, accurate prediction, and reliability in locating the global minimum of the mean-squared error (MSE) [29] [31] [40] [68]. LM is a hybrid of the Gauss-Newton nonlinear regression and gradient steepest-descent methods, based on the least-squares technique for nonlinear models and employing the Jacobian matrix [62] [65] [66] [69].

The following mathematical equation relates the input/output variables of the network [34] [49] [62]:

${O}_{n}={f}_{lin}\left\{{b}_{0}+{\displaystyle {\sum}_{k=1}^{h}{W}_{k}}\times {f}_{sig}\left[\left({b}_{hk}+{\displaystyle {\sum}_{i=1}^{m}{W}_{ik}{X}_{i}}\right)\right]\right\}$ (3)

where
${O}_{n}$ is the normalized output ranging from −1 to 1;
${b}_{0}$ is the output bias;
${W}_{k}$ is the connection weight between *k*^{th} neuron of hidden layer and the single output neuron;
${b}_{hk}$ is the bias at the *k*^{th} neuron of hidden layer; *h* is the number of neurons in the hidden layer;
${W}_{ik}$ is the connection weight between *i*^{th} input variable and *k*^{th} neuron of hidden layer;
${X}_{i}$ is the normalized input variable *i* in the range [−1, 1];
${f}_{sig}$ is the sigmoid transfer function &
${f}_{lin}$ is the linear transfer function.
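Equation (3) can be sketched directly in code. The weights and biases below are random placeholders, not the trained values of the present study; only the structure (a tansig hidden layer feeding a single linear output neuron) follows the text.

```python
import numpy as np

def tansig(z):
    """Tangent sigmoid of Equation (6); mathematically identical to tanh."""
    return 2.0 / (1.0 + np.exp(-2.0 * z)) - 1.0

def forward(X, W_ih, b_h, W_ho, b_0):
    """Evaluate Equation (3).

    X: (m,) normalized inputs; W_ih: (m, h) input-to-hidden weights;
    b_h: (h,) hidden biases; W_ho: (h,) hidden-to-output weights; b_0: scalar.
    """
    hidden = tansig(b_h + X @ W_ih)   # f_sig(b_hk + sum_i W_ik * X_i)
    return b_0 + hidden @ W_ho        # f_lin is the identity on the output layer

# Example with m = 2 inputs (coded R and X) and h = 10 hidden neurons,
# mirroring the architecture used later in this study.
rng = np.random.default_rng(1)
m, h = 2, 10
O = forward(rng.uniform(-1, 1, m),
            rng.normal(size=(m, h)), rng.normal(size=h),
            rng.normal(size=h), 0.1)
```

Note that `tansig` is just an algebraic rewriting of the hyperbolic tangent, which is why general-purpose libraries implement it as `tanh`.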

The Levenberg-Marquardt algorithm uses the following updating function [62] [63]:

${W}_{tk+1}={W}_{tk}-{\left[{J}^{\text{T}}J+\mu I\right]}^{-1}{J}^{\text{T}}E$ (4)

where

$E={\displaystyle {\sum}_{j}{E}_{j}}={\displaystyle {\sum}_{j}\frac{1}{2}{\left({T}_{j}-{Y}_{j}\right)}^{2}}$ (5)

*J* is the Jacobian matrix, which contains the first derivatives of the network errors with respect to the weights and biases of the ANN; *I* is the identity matrix; *E* is a vector of network errors; *W* contains both the weights and biases of the ANN; *μ* is a scalar parameter of the algorithm; and *tk* represents the current training epoch [62]. *Y*_{j} is the output value of the network for the *j*th data point and *T*_{j} is the corresponding target value. The sigmoid transfer function of Equation (3) is the tangent sigmoid:

${f}_{sig}\left(sum\right)=tansig\left(sum\right)=2/\left(1+\mathrm{exp}\left(-2\ast \left(sum\right)\right)\right)-1$ (6)
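A minimal sketch of the update of Equation (4) on a toy nonlinear least-squares problem, using a numerically estimated Jacobian and a fixed *μ*; the actual trainlm routine in MATLAB adapts *μ* between epochs. The model, parameter values, and function names here are illustrative assumptions, not part of the study.

```python
import numpy as np

def residuals(w, x, t):
    # Toy model y = w0 * tanh(w1 * x); the residual vector T_j - Y_j (Eq. (5)).
    return t - w[0] * np.tanh(w[1] * x)

def lm_step(w, x, t, mu):
    """One Levenberg-Marquardt update, Equation (4)."""
    eps = 1e-7
    r = residuals(w, x, t)
    # Forward-difference Jacobian of the errors w.r.t. the parameters.
    J = np.empty((x.size, w.size))
    for k in range(w.size):
        dw = np.zeros_like(w)
        dw[k] = eps
        J[:, k] = (residuals(w + dw, x, t) - r) / eps
    # W_{tk+1} = W_tk - (J^T J + mu I)^(-1) J^T E, with E the error vector.
    return w - np.linalg.solve(J.T @ J + mu * np.eye(w.size), J.T @ r)

# Noise-free targets generated from known parameters (1.5, 0.8).
x = np.linspace(-2.0, 2.0, 40)
t = 1.5 * np.tanh(0.8 * x)
w = np.array([1.0, 1.0])
for _ in range(20):
    w = lm_step(w, x, t, mu=1e-3)
```

With *μ* small the step behaves like Gauss-Newton and converges rapidly on this smooth problem; a larger *μ* shifts the step toward plain gradient descent, which is exactly the hybrid behavior described above.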

To employ an ANN the following steps must be completed: 1) data selection for learning, 2) network architecture selection, 3) determination of weight and threshold values, 4) verification and validation of the prediction model on the basis of error function, and, optionally, 5) optimization of the function learned by ANN [48].

2.4. Application of RSM and ANN to the Present Work

Experimental

Details of the experimental setup and the data employed in this study have been reported previously in the work of [9].

The following formulas have been employed to calculate the coded factors of (*r*) and (*x*):

$\begin{array}{l}R=r/75\\ X=\left(x-127.5\right)/97.5\end{array}$ (7)

where:

*r*: radial distance from the center line of the flame (mm);

*x*: axial distance along the flame over the disc (mm).

ANN Modeling Process

The Neural Network Toolbox of MATLAB (MATLAB 2016a (9.0.0), MathWorks, Natick, MA, USA) has been utilized for developing the MLP network employed in this study. A three-layer feed-forward ANN has been established with two input neurons for the coded influencing factors *R* and *X*, representing the coded radial and axial distances respectively, and one output neuron for the dependent response variable, temperature. The back-propagation algorithm has been employed for training because of its ability to acquire the non-linear functional relationships between inputs and targets. The activation functions involved in this network are the hyperbolic tangent sigmoid (tansig) for the hidden layer and linear (purelin) for the output layer. The Levenberg-Marquardt back-propagation training algorithm was applied to minimize the error function of the ANN, employing the mean square error (MSE) as the performance function.

The data matrix imported from the laboratory experiment results, including the coded *X* and *R* as input variables and the radial mean temperature as the output variable, was randomly divided by the network into three categories: training data (70%), test data (15%) and validation data (15%). To identify the optimum network architecture, it is essential to determine the number of neurons in the hidden layer. Therefore, the number of neurons in the hidden layer was varied from 3 to 14, and the performance parameter (MSE) of each run was calculated with respect to the target value. The network with 10 neurons in the hidden layer gave the minimum MSE. In each training process, the weights and biases were corrected to reduce the MSE. For a given number of neurons in the hidden layer, different results may be obtained in each training run. Therefore, the training process for each number of hidden neurons was executed in five repetitions, the value of the performance function was calculated for each repetition, and the average over the five repetitions was taken; averaging eliminates the effect of the run-to-run output differences [70]. This procedure has also been applied in all the evaluations of performance and errors of the ANN method.
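The neuron-sweep procedure above can be sketched in simplified form. To keep the example self-contained and fast, full Levenberg-Marquardt training is replaced by a fixed random tansig hidden layer with a least-squares output layer, and the measured temperature response is replaced by a synthetic surface; only the sweep-and-average logic (3 to 14 hidden neurons, five repetitions each, MSE averaged per size) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(-1, 1, 200)
X = rng.uniform(-1, 1, 200)
T = np.sin(2 * R) + 0.5 * np.cos(3 * X)   # synthetic stand-in for temperature
inputs = np.column_stack([R, X])

def avg_mse(h, repeats=5):
    """Average MSE over several random re-initializations for h hidden neurons."""
    mses = []
    for _ in range(repeats):
        W = rng.normal(size=(2, h))           # random input-to-hidden weights
        b = rng.normal(size=h)                # random hidden biases
        H = np.tanh(inputs @ W + b)           # tansig hidden-layer outputs
        Hb = np.column_stack([H, np.ones(len(T))])  # linear output layer + bias
        coef, *_ = np.linalg.lstsq(Hb, T, rcond=None)
        mses.append(np.mean((Hb @ coef - T) ** 2))
    return float(np.mean(mses))

scores = {h: avg_mse(h) for h in range(3, 15)}  # sweep 3 to 14 hidden neurons
best_h = min(scores, key=scores.get)            # size with lowest average MSE
```

In the study itself each repetition is a full LM training run, so the averaged MSE reflects training variability rather than (as here) the variability of a random hidden layer; the selection logic is the same.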

2.5. Models Validation and Evaluation

Several approaches have been reported in the literature for evaluating the goodness of fit and prediction accuracy of RSM and ANN models, together with error analyses, as presented in Tables 1(a)-(d).

Table 1. (a) Performance and Error functions and their equations; (b) Performance and Error functions and their equations; (c) Performance and Error functions and their equations; (d) Performance and Error functions and their equations.

Where: *n*: is the number of experimental data points; *p*: is the number of non-constant terms in the RSM model;
${x}_{iexp}$ : is the experimental value of the *i*th data point;
${x}_{ipred}$ : is the corresponding value of the *i*th data point predicted by the model;
$\stackrel{\xaf}{{X}_{e}}$ : is the average of the experimentally measured temperatures;
$\stackrel{\xaf}{{X}_{p}}$ : is the average of the predicted temperatures;
${P}_{T}$ : denotes the predicted temperatures;
$In{p}_{k,i}$ and $\stackrel{\xaf}{In{p}_{k}}$ : denote the *i*th value and the average value of the *k*th input variable respectively (*k* = *R* and *X*);
${P}_{Ti}$ and $\stackrel{\xaf}{{P}_{T}}$ : refer to the *i*th predicted temperature and the average predicted temperature respectively.
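As an illustrative sketch of how such measures are built from the symbols above (one common set of definitions, not necessarily the exact formulas tabulated in Tables 1(a)-(d)):

```python
import numpy as np

def fit_metrics(x_exp, x_pred, p=0):
    """Illustrative goodness-of-fit measures; p is the number of
    non-constant model terms (used for the adjusted R2 of RSM)."""
    x_exp = np.asarray(x_exp, dtype=float)
    x_pred = np.asarray(x_pred, dtype=float)
    n = x_exp.size
    resid = x_exp - x_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((x_exp - x_exp.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    out = {"R2": r2,
           "RMSE": np.sqrt(ss_res / n),
           "MAE": np.mean(np.abs(resid))}
    if p:
        out["R2_adj"] = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return out

# Toy example with hypothetical temperatures (not the study's data)
metrics = fit_metrics([900, 950, 1000, 1050, 1100],
                      [905, 945, 1005, 1045, 1102], p=2)
```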

For the RSM, several mathematical models have been suggested to establish the relationship between the dependent and independent variables. A suitable power transformation of the response data was identified using the Box-Cox method, which normalizes the data or equalizes its variance. This method indicated that the square root of the mean experimental temperature, sqrt(T), is the best transformation of the dependent variable, so it was employed to represent the response Y in Equation (2) [56].

In the present study, the following cases have been considered:

Case a—The response temperature T was used directly as Y in Equation (2) and as the training target for the ANN, and the predicted temperatures were compared with the corresponding experimental ones.

Case b—sqrt(T) was used as Y in Equation (2) and as the ANN training target, and the predictions of both models were transformed back to temperatures before comparison with the corresponding experimental ones.

Case c—sqrt(T) was used as Y in Equation (2) and as the ANN training target, and the predictions of both models were compared with the corresponding sqrt(experimental temperature) values.

3. Results and Discussion

The formulas listed in Tables 1(a)-(d) were employed in this study for performance evaluation and error analysis, and the results are recorded in Tables 2(a)-(c). These results reveal the consistently accurate performance

Table 2. (a) Performance and error evaluation of RSM and ANN methods; (b) Performance and error evaluation of RSM and ANN methods; (c) Performance and error evaluation of RSM and ANN methods.

of the properly trained ANN compared to the RSM model in all aspects, as indicated by the predicted temperatures relative to the experimentally measured ones, confirming the success of the ANN model for both simulation and prediction. Similar observations were reported by many research groups studying various engineering problems [46]. This is conveyed in the very high values of *R*^{2} and F_ratio and the exceedingly low values of the error indicators for the ANN results compared to the RSM ones. For case a, the values of ${R}_{adj}^{2}$ are (0.9881, 0.9883, 0.9937 and 0.9857) for the ANN compared to (0.9181, 0.9441, 0.9208 and 0.9336) for the RSM, and the values of *F_ratio* for the ANN are (18,379.5, 12,563.8, 18,886.4 and 8538.4) against (796.9, 1159.6, 634.5 and 952.4) for the RSM, for FR_Pt, FL_Pt, FR_Pd and FL_Pd respectively, designating the pre-eminence of the ANN in prediction. Furthermore, the ranges of ${R}_{adj}^{2}$ and *F_ratio* over all the studied cases are 0.9857 - 0.9951 and 7636.4 - 24,028.4 for the ANN method, compared to 0.9181 - 0.9809 and 634.5 - 3528.8 for the RSM method. The superior modeling capability of the ANN can be attributed to its universal approximation of nonlinearity, whereas RSM is limited to a second-order polynomial regression [71].

Also, in all the studied cases, the predicted temperatures were compared with the corresponding experimental ones, with the error referred to the maximum experimental temperature; this comparison is demonstrated in Table 2(a) as *max _{error}*%.

Table 2(a) discloses that the ANN method is more computationally expensive than RSM. This is shown in the larger elapsed time for the ANN (3.09 - 5.14) compared to that of RSM (1.19E−02 - 1.04E−01), because the ANN method evaluates a series of computationally expensive functions for a single model.

The three-dimensional concave curved response surfaces in Figures 4(a)-(d) indicate the probability of obtaining a maximum value of the measured temperature within the chosen factor levels and illustrate the interactive relationships among the factors and the response [36] [64].

The contour plots of Figures 5(a)-(d) consider the individual and cumulative influence of the variables and the mutual interaction between the variables and the dependent variable [72] [73]. The oval shape of the contour plots points to a significant interaction between the independent variables. The smallest ellipses in the contour plots denote the maximum predicted values [71].


Figure 4. (a) ANN Surface plot for FR_Pt; (b) ANN Surface plot for FL_Pt; (c) ANN Surface plot for FR_Pd; (d) ANN Surface plot for FL_Pd.


Figure 5. (a) ANN Contour plot for FR_Pt; (b) ANN Contour plot for FL_Pt; (c) ANN Contour plot for FR_Pd; (d) ANN Contour plot for FL_Pd.

3.1. Simulation and Optimization

Having established the efficiency of the neural network in predicting the response temperature under the various experimental conditions, the final optimum architecture was utilized to predict the maximum temperature for the above-mentioned Flame Conditions and Disc Types. The region defined by the design limits of the two coded experimental input variables was divided into 20 intervals, resulting in a total of 20^{2} situations. The neural network was applied to predict the response temperature for each of these situations [74]. The maximum temperature response and its corresponding input variables were obtained by examining the simulated results. The optimization results are portrayed in Figures 4(a)-(d) and Figures 5(a)-(d), which reveal the maximum predicted temperature and the corresponding coded input variables obtained by the ANN method. Table 2(a) lists the maximum predicted temperature together with the corresponding predicted input variables.
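The grid simulation can be sketched as below, with the fitted quadratic of Equation (8) standing in for the trained network (in the study the ANN itself supplies `predict_T`):

```python
import numpy as np

def predict_T(R, X):
    # Stand-in predictor: the fitted FR_Pt quadratic of Equation (8);
    # in the study the trained ANN plays this role.
    return 1098.2 + 174.9 * X - 574.7 * R ** 2 - 513.4 * X ** 2

# Divide each coded design range [-1, 1] into 20 intervals (21 grid lines)
r = np.linspace(-1.0, 1.0, 21)
x = np.linspace(-1.0, 1.0, 21)
R, X = np.meshgrid(r, x)
T = predict_T(R, X)                      # simulate every grid situation

i, j = np.unravel_index(np.argmax(T), T.shape)
R_opt, X_opt, T_max = R[i, j], X[i, j], T[i, j]  # best simulated point
```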

For the RSM, the results of cases b and c have been reported previously in our study [56] and are depicted in Table 2(a). For case a, the values of the mean experimental temperature *T* cited along with the corresponding *x* and *r* [9] were fitted by the Ordinary Least Squares (OLS) method to represent the response Y in Equation (2) in terms of the coded *X* and *R*, yielding the following equations:

For FR_Pt

$T=1098.2+174.9\ast X-574.7\ast {R}^{2}-513.4\ast {X}^{2}$ (8)

For FL_Pt

$T=824.7+214.7\ast X-542.6\ast {R}^{2}-252.8\ast {X}^{2}$ (9)

For FR_Pd

$T=737.9+44.04\ast R+238.4\ast X-389.2\ast {R}^{2}-266.4\ast {X}^{2}+37.18\ast R\ast X$ (10)

For FL_Pd

$T=822.1+165.8\ast X-402.3\ast {R}^{2}-352.6\ast {X}^{2}$ (11)

An optimization, using MATLAB 2016a (9.0.0), was performed on Equations (8)-(11) to estimate the maximum predicted temperature and the corresponding *R* and *X* values. MATLAB implements a multidimensional unconstrained nonlinear optimization employing the Nelder-Mead simplex (direct search) method. Table 2(a) shows the maximum predicted temperature together with the *R* and *X* values.
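This direct search can be reproduced with SciPy's Nelder-Mead implementation by minimizing the negative of, for example, Equation (8); analytically, the maximum of Eq. (8) is T ≈ 1113.1 at R = 0, X ≈ 0.170:

```python
import numpy as np
from scipy.optimize import minimize

def neg_T_fr_pt(v):
    # Negative of Equation (8) for FR_Pt, so minimization maximizes T
    R, X = v
    return -(1098.2 + 174.9 * X - 574.7 * R ** 2 - 513.4 * X ** 2)

# Multidimensional unconstrained Nelder-Mead simplex (direct search)
res = minimize(neg_T_fr_pt, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6})
R_opt, X_opt = res.x
T_max = -res.fun          # maximum predicted temperature
```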

From this table, it is clear that the maximum temperatures predicted by the ANN method are closer to the corresponding experimental ones than those predicted by RSM. Comparing the maximum predicted temperatures with the analogous experimental ones, and referring the absolute maximum deviation to the maximum experimental temperature, gives the *AD _{Tmax}* values recorded in Table 2(a).

3.2. Comparative Evaluation of RSM and ANN

RSM is recommended for modeling of a new process as it is easier compared to ANN and its sensitivity analysis is more precise. ANN has excellent prediction and optimization abilities; it is best suited for nonlinear systems that include interactions higher than quadratic. Moreover, ANN does not require any prior specification for suitable fitting function [35] [71].

The structured nature of RSM delivers a predicted quadratic equation that reveals the contribution of each factor through the regression coefficients of the model. This capability is valuable in identifying the significant and insignificant terms in the model and can therefore reduce model complexity. The Artificial Neural Network (ANN) model, by contrast, offers little information about the contribution of the factors and their influence on the response unless further analysis is performed [75] [76].

The greater predictive accuracy of the ANN is attributed to its ability to process multi-dimensional, non-linear and clustered information, whereas RSM is restricted to a second-order polynomial. The generation of an optimum ANN is a multi-step calculation process that is reiterated until an acceptable error is attained, whereas a response surface model is based on a single-step calculation [25] [76].

In terms of performance, ANN is a better alternative to RSM-based methods. Furthermore, ANN can increase the level of certainty associated with the results and can simultaneously be used to validate new technological strategies [35]. Therefore, using combined RSM-ANN modeling, the shortcomings of RSM can be resolved and the actual relationship between the independent and response parameters can be studied through experimental data [40].

4. Conclusions

An artificial neural network model was successfully established and compared with RSM for predicting the temperature profile of the various Flame Conditions and Disc Types for three cases. A generalized, properly fit, robust feed-forward artificial neural network model was developed using the back-propagation-based Levenberg-Marquardt algorithm and trained on the data from the laboratory experiments. The study demonstrates that both statistical and computational-intelligence modeling with ANN can provide a potential alternative to time-consuming experimental studies, in addition to minimizing costly test trials. The main conclusions of this study are as follows:

1) The neural network model, with 10 neurons in the hidden layer, produced prediction results in very good agreement with the experimental data.

2) The systematic comparative study revealed that the properly trained ANN model consistently performed more accurate predictions in all aspects compared to RSM. This accuracy is expressed in the very high values of *R*^{2} and F_ratio and the very low values of the error indicators for the ANN results compared to the RSM ones.

3) The ANN model displays greater generalization capacity than the RSM models. This can be attributed to the universal ability of ANN to approximate the nonlinearity of the system. It can be concluded that ANN provides a more accurate replacement for RSM owing to its better predictive ability.

4) Considering the accurate results and acceptable errors of the ANN, it can be used to economize material and time in design studies.

5) The reliability of the ANN as a predictive modeling tool (justified by the very high values obtained for both statistical parameters *R*^{2} and adjusted *R*^{2}) confirms an excellent correlation between the ANN predictions and the corresponding experimental values.

References

[1] Hoffmann, S., Bartlett, M., Finkenrath, M., Evulet, A. and Ursin, T.P. (2009) Performance and Cost Analysis of Advanced Gas Turbine Cycles with Precombustion CO2 Capture. Journal of Engineering for Gas Turbines and Power, 131, Article ID: 021701.

https://doi.org/10.1115/1.2982147

[2] Winkler, D., Mueller, P., Reimer, S., Griffin, T., Burdet, A., Mantzaras, J. and Ghermay, Y. (2009) Improvement of Gas Turbine Combustion Reactivity under Flue Gas Recirculation Condition with In-Situ Hydrogen Addition. Proceedings of the ASME Turbo Expo 2009: Power for Land, Sea, and Air. Vol. 2: Combustion, Fuels and Emissions, Orlando, 8-12 June 2009, 137-145.

https://doi.org/10.1115/GT2009-59182

[3] Zakhary, A.S. and Aboul-Gheit, A.K. (2005) Catalytic Combustion of Gaseous Fuel over a Platinum/Al2O3 Disc. Intergas 3rd International Conference for Oil, Gas & Petrochemicals, Cairo, 18-20 December 2005, 18-20.

[4] Zakhary, A.S. and Aboul-Gheit, A.K. (2006) Effect of Catalytic Disc on the Thermal Structure of Turbulent Confined Lifted Diffusion Flames. Egyptian Journal of Petroleum, 15, 31-38.

[5] Zakhary, A.S. and Aboul-Gheit, A.K. (2006) Catalytic Combustion Enhancement of the Stability Limits of Confined Turbulent Jet Diffusion Flames. Egyptian Journal of Petroleum, 14, 107-116.

[6] Deriase, S.F., Ghoneim, S.A., Zakhary, A.S. and Aboul-Gheit, A.K. (2012) The Experimental and Numerical Approach of Catalytic Combustion on Noble Metals Disc Burners of the Turbulent Gaseous Fuel Jet Diffusion Flames. Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 34, 492-507.

https://doi.org/10.1080/15567030903551208

[7] Zheng, X. and Mantzaras, J. (2013) Homogeneous Combustion of Fuel-Lean Syngas Mixtures over Platinum at Elevated Pressures and Preheats. Combustion and Flame, 160, 155-169.

https://doi.org/10.1016/j.combustflame.2012.09.001

[8] Schultze, M. and Mantzaras, J. (2013) Hetero-/Homogeneous Combustion of Hydrogen/Air Mixtures over Platinum: Fuel-Lean versus Fuel-Rich Combustion Modes. International Journal of Hydrogen Energy, 38, 10654-10670.

https://doi.org/10.1016/j.ijhydene.2013.06.069

[9] Zakhary, A.S., Aboul-Gheit, A.K. and Ghoneim, S.A. (2014) Fuel-Rich and Fuel-Lean Catalytic Combustion of the Stabilized Confined Turbulent Gaseous Diffusion Flames over Noble Metal Disc Burners. Egyptian Journal of Petroleum, 23, 79-86.

https://doi.org/10.1016/j.ejpe.2014.02.011

[10] Wierzbicki, T.A., Lee, I.C. and Gupta, A.K. (2016) Recent Advances in Catalytic Oxidation and Reformation of Jet Fuels. Applied Energy, 165, 904-918.

https://doi.org/10.1016/j.apenergy.2015.12.057

[11] Chen, J., Song, W. and Xu, D. (2017) Optimal Combustor Dimensions for the Catalytic Combustion of Methane-Air Mixture in Micro-Channels. Energy Conversion and Management, 134, 197-207.

https://doi.org/10.1016/j.enconman.2016.12.028

[12] Yan, Y., Wang, H., Pan, W., Zhang, L., Li, L., Yang, Z. and Lin, C. (2016) Numerical Study of Effect of Wall Parameters on Catalytic Combustion Characteristics of CH4/Air in a Heat Recirculation Micro Combustor. Energy Conversion and Management, 118, 474-484.

https://doi.org/10.1016/j.enconman.2016.04.026

[13] Arani, B.O., Frouzakis, C.E., Mantzaras, J. and Boulouchos, K. (2017) Three-Dimensional Direct Numerical Simulation of Turbulent Fuel-Lean H2/Air Hetero-/Homogeneous Combustion over Pt with Detailed Chemistry. Proceedings of the Combustion Institute, 36, 4355-4363.

https://doi.org/10.1016/j.proci.2016.05.009

[14] Arani, B.O., Frouzakis, C.E., Mantzaras, J. and Boulouchos, K. (2018) Direct Numerical Simulations of Turbulent Catalytic and Gas-Phase Combustion of H2/Air over Pt at Practically Relevant Reynolds Numbers. Proceedings of the Combustion Institute, 37, 5489-5497.

https://doi.org/10.1016/j.proci.2018.05.103

[15] Pan, J., Miao, N., Lu, Z., Lu, Q., Yang, W., Pan, Z. and Zhang, Y. (2019) Experimental and Numerical Study on the Transition Conditions and Influencing Factors of Hetero-/Homogeneous Reaction for H2/Air Mixture in Micro Catalytic Combustor. Applied Thermal Engineering, 154, 120-130.

https://doi.org/10.1016/j.applthermaleng.2019.03.076

[16] Zang, M., Zho, C., Wang, Y. and Chen, S. (2019) A Review of Recent Advances in Catalytic Combustion of VOCs on Perovskite-Type Catalysts. Journal of Saudi Chemical Society, 23, 645-654.

https://doi.org/10.1016/j.jscs.2019.01.004

[17] He, L., Fan, Y., Bellettre, J., Yue, J. and Luo, L. (2020) A Review on Catalytic Methane Combustion at Low Temperatures: Catalysts, Mechanisms, Reaction conditions and Reaction Designs. Renewable and Sustainable Energy Reviews, 119, Article ID: 109589.

https://doi.org/10.1016/j.rser.2019.109589

[18] He, L., Fan, Y., Luo, L., Bellettre, J. and Yue, J. (2020) Preparation of Pt/γ-Al2O3 Catalyst Coating in Microreactors for Catalytic Methane Combustion. Chemical Engineering Journal, 380, Article ID: 122424.

https://doi.org/10.1016/j.cej.2019.122424

[19] Ahmadpour, A., HaghighiAsl, A. and Fallah, N. (2018) Investigation of Spent Caustic Wastewater Treatment through Response Surface Methodology and Artificial Neural Network in a Photocatalytic Reactor. Iranian Journal of Chemical Engineering, 15, 46-72.

[20] Selvan, S.S., Pandian, P.S., Subathira, A. and Saravanan, S. (2018) Comparison of Response Surface Methodology (RSM) and Artificial Neural Network (ANN) in Optimization of Aegle marmelos Oil Extraction for Biodiesel Production. Arabian Journal for Science and Engineering, 43, 6119-6131.

https://doi.org/10.1007/s13369-018-3272-5

[21] Moh Zin, K., Effendi Halmi, M.I., Abd Gani, S.S., Zaidan, U.H., Samsuri, A.W. and Abd Shukor, M.Y. (2020) Microbial Decolorization of Triazo Dye, Direct Blue 71: An Optimization Approach Using Response Surface Methodology (RSM) and Artificial Neural Network (ANN). BioMed Research International, 2020, Article ID: 2734135.

https://doi.org/10.1155/2020/2734135

[22] Antil, S.K., Antil, P., Singh, S., Kumar, A. and Iulian, C. (2020) Artificial Neural Network and Response Surface Methodology Based Analysis on Solid Particle Erosion Behavior of Polymer Matrix Composites. Materials, 13, Article ID: 1381.

https://doi.org/10.3390/ma13061381

[23] Awolusi, T.F., Oke, O.L., Akinkurolere, O.O., Sojobi, A.O. and Aluko, O.G. (2019) Performance Comparison of Neural Network Training Algorithms in the Modeling Properties of Steel Fiber Reinforced Concrete. Heliyon, 5, e01115.

https://doi.org/10.1016/j.heliyon.2018.e01115

[24] Karkalos, N.E., Efkolidis, N., Kyratsis, P. and Markopoulos, A.P. (2019) A Comparative Study between Regression and Neural Networks for Modeling Al6082-T6 Alloy Drilling. Machines, 7, Article ID: 13.

https://doi.org/10.3390/machines7010013

[25] Manda, A., Roderick, B.W. and Khamanga, S.M.M. (2019) An Artificial Neural Network Approach to Predict the Effects of Formulation and Process Variables on Prednisone Release from a Multipartite System. Pharmaceutics, 11, Article ID: 109.

https://doi.org/10.3390/pharmaceutics11030109

[26] Yu, H.C., Huang, S.M., Lin, W.M., Kuo, C.H. and Shieh, C.J. (2019) Comparison of Artificial Neural Networks and Response Surface Methodology towards an Efficient Ultrasound-Assisted Extraction of Chlorogenic Acid from Lonicera japonica. Molecules, 24, Article ID: 2304.

https://doi.org/10.3390/molecules24122304

[27] Osman, H., Shigidi, I. and Arabi, A. (2019) Multiple Modeling Techniques for Assessing Sesame Oil Extraction under Various Operating Conditions and Solvents. Foods, 8, Article ID: 142.

https://doi.org/10.3390/foods8040142

[28] Qadir, R., Anwar, F., Gilani, M.A., Zahoor, S., Rehman, M.M. and Mustaqeem, M. (2019) RSM/ANN Based Optimized Recovery of Phenolics from Mulberry Leaves by Enzyme-Assisted Extraction. Czech Journal of Food Sciences, 37, 99-105.

https://doi.org/10.17221/147/2018-CJFS

[29] Habeeb, O.A., Ayodele, B.V., Alsaffar, M.A., Tuan Adbullah, T.A.R., Kanthasamy, R. and Yunus, R. (2021) Experimental Studies and Artificial Neural Network Modeling of Hydrogen Sulfide Removal from Wastewater by Calcium-Modified Coconut Shell Based Activated Carbon. Songklanakarin Journal of Science and Technology, 43, 96-104.

[30] Cisternas, L.A., Lucay, F.A. and Botero, Y.L. (2020) Trends in Modeling, Design, and Optimization of Multiphase Systems in Minerals Processing. Minerals, 10, Article ID: 22.

https://doi.org/10.3390/min10010022

[31] Ayodele, B.V., Alsaffar, M.A., Mustapa, S. and Vo, D.V.N. (2020) Back-Propagation Neural Networks Modeling of Photocatalytic Degradation of Organic Pollutants Using TiO2-Based Photocatalysts. Journal of Chemical Technology and Biotechnology, 95, 2739-2749.

[32] Srinidhi, F., Patel, D. and Kumara, V.S.A. (2021) Artificial Neural Network (FFBP-ANN) Based Grey Relational Analysis for Modeling Dyestuff Solubility in Supercritical CO2 with Ethanol as the Co-Solvent. Cambridge Open Engage, Cambridge.

https://doi.org/10.26434/chemrxiv.11973273

[33] Agu, C.M., Menkiti, M.C., Ekwe, E.B. and Agulanna, A.C. (2020) Modeling and Optimization of Terminalia Catappa L. Kernel Oil Extraction Using Response Surface Methodology and Artificial Neural Network. Artificial Intelligence in Agriculture, 4, 1-11.

[34] Beigzadeh, R. and Rastegar, S.O. (2020) Assessment of Cr(VI) Biosorption from Aqueous Solution by Artificial Intelligence. Chemical Methodologies, 4, 181-190.

https://doi.org/10.33945/SAMI/CHEMM.2020.2.8

[35] Pereira, A.K.V., de Melo Barbosa, R., Fernandes, M.A.C., Finkler, L. and Finkler, C.L.L. (2020) Comparative Analyses of Response Surface Methodology and Artificial Neural Networks on Incorporating Tetracaine into Liposomes. Brazilian Journal of Pharmaceutical Sciences, 56, e17808.

https://doi.org/10.1590/s2175-97902019000317808

[36] Chollom, M.N., Rathilal, S., Swalaha, F.M., Bakare, B.F. and Tetteh, E.K. (2020) Comparison of Response Surface Methods for the Optimization of an up Flow Anaerobic Sludge Blanket for the Treatment of Slaughterhouse Wastewater. Environmental Engineering Research, 25, 114-122.

https://doi.org/10.4491/eer.2018.366

[37] Atashi, H., Hajisafari, M., Rezaeian, F. and Parnian, M.J. (2019) Modeling of Liquid Hydrocarbon Products from Syngas. International Journal of Coal Science & Technology, 6, 27-36.

https://doi.org/10.1007/s40789-018-0232-3

[38] Maran, J.P, Sivakumar, V., Thirugnanasambandham, K. and Sridhar, R. (2013) Artificial Neural Network and Response Surface Methodology Modeling in Mass Transfer Parameters Predictions during Osmotic Dehydration of Carica papaya L. Alexandria Engineering Journal, 52, 507-516.

https://doi.org/10.1016/j.aej.2013.06.007

[39] Talib, N.S.R., Halmi, M.E., Abd Ghani, S.S., Zaidan, U.H. and Abd Shukor, M.Y. (2019) Artificial Neural Networks (ANNs) and Response Surface Methodology (RSM) Approach for Modeling the Optimization of Chromium (VI) Reduction by Newly Isolated Acinetobacter radioresistens Strain NS-MIE from Agricultural Soil. BioMed Research International, 2019, Article ID: 5785387.

https://doi.org/10.1155/2019/5785387

[40] Zhang, J., Lin, G., Yin, X., Zeng, J., Wen, S. and Lan, Y. (2020) Application of Artificial Neural Network (ANN) and Response Surface Methodology (RSM) for Modeling and Optimization of the Contact Angle of Rice Leaf Surfaces. Acta Physiologiae Plantarum, 42, Article No. 51.

https://doi.org/10.1007/s11738-020-03040-0

[41] Oguntade, T.I, Christiana, T., Ita, C.S., Sanmi, O. and Oyekunle, D.T. (2020) A Binary Mixture of Sesame and Castor Oil as an Ecofriendly Corrosion Inhibitor of Mild Steel in Crude Oil. The Open Chemical Engineering Journal, 14, 25-35.

[42] Lucay, F.A., Sales-Cruz, M., Gálvez E.D. and Cisternas, L.A (2021) Modeling of the Complex Behavior through an Improved Response Surface Methodology. Mineral Processing and Extractive Metallurgy Review, 42, 285-311.

https://doi.org/10.1080/08827508.2020.1728265

[43] Bas, D. and Boyaci, I.H. (2007) Modeling and Optimization I: Usability of Response Surface Methodology. Journal of Food Engineering, 78, 836-845.

https://doi.org/10.1016/j.jfoodeng.2005.11.024

[44] Khuri, A.I. (2017) A General Overview of Response Surface Methodology. Biometrics & Biostatistics International Journal, 5, 87-93.

https://doi.org/10.15406/bbij.2017.05.00133

[45] Khuri, A.I. (2017) Response Surface Methodology and Its Applications in Agricultural and Food Sciences. Biometrics & Biostatistics International Journal, 5, 155-163.

https://doi.org/10.15406/bbij.2017.05.00141

[46] Sivamani, S., Selvakumar, S., Rajendran, K. and Muthusamy, S. (2018) Artificial Neural Network-Genetic Algorithm Based Optimization of Biodiesel Production from Simarouba glauca. Biofuels, 10, 393-401.

https://doi.org/10.1080/17597269.2018.1432267

[47] Nezhad, H.B., Miri, M. and Ghasemi, M.R. (2019) New Neural Network-Based Response Surface Method for Reliability Analysis of Structures. Neural Computing and Applications, 31, 777-791.

https://doi.org/10.1007/s00521-017-3109-2

[48] Sampaio, F.C., de Faria, J.T., Silva, G.D.L., Goncalves, R.M., Pitangui, C.G., Alberto, A., Al Arni, S. and Attilio, C. (2017) Comparison of Response Surface Methodology and Artificial Neural Network for Modeling Xylose-to-Xylitol Bioconversion. Chemical Engineering & Technology, 40, 122-129.

https://doi.org/10.1002/ceat.201600066

[49] Yadav, A.M, Chaurasia, R.C., Suresh, N. and Gajbhiye, P. (2018) Application of Artificial Neural Networks and Response Surface Methodology Approaches for the Prediction of Oil Agglomeration Process. Fuel, 220, 826-836.

https://doi.org/10.1016/j.fuel.2018.02.040

[50] Weibull (2015) Response Surface Methods for Optimization.

http://www.weibull.com/DOEWeb/response_surface_methods.htm

[51] Pambi, R.L.L. and Musonge, P. (2016) Application of Response Surface Methodology (RSM) in the Treatment of Final Effluent from the Sugar Industry Using Chitosan. WIT Transactions on Ecology and the Environment, 209, 209-219.

https://doi.org/10.2495/WP160191

[52] Khajeh, M., Sarafraz-Yazdi, A. and Moghadam, A.F. (2017) Modeling of Solid-Phase Tea Waste Extraction for the Removal of Manganese and Cobalt from Water Samples by Using PSO-Artificial Neural Network and Response Surface Methodology. Arabian Journal of Chemistry, 10, S1663-S1673.

https://doi.org/10.1016/j.arabjc.2013.06.011

[53] Doust, A.M., Rahimi, M. and Feyzi, M. (2016) An Optimization Study by Response Surface Methodology (RSM) on Viscosity Reduction of Residue Fuel Oil Exposed Ultrasonic Waves and Solvent Injection. Iranian Journal of Chemical Engineering, 13, 3-19.

[54] Gendy T.S., El-Temtamy S.A., Ghoneim, S.A., El-Salamony, R.A., El-Naggar, A.Y. and El-Morsi, A.K. (2016) Response Surface Methodology for Carbon Dioxide Reforming of Natural Gas. Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 38, 1236-1245.

https://doi.org/10.1080/15567036.2013.876466

[55] Gendy, T.S., Zakhary, A.S. and El-Shiekh, T.M. (2017) Response Surface Methodology for Stabilized Turbulent Confined Jet Diffusion Flames Using Bluff-Body Burners. Journal of Scientific and Engineering Research, 4, 230-242

[56] Gendy, T.S, Ghoneim, S.A. and Zakhary, A.S. (2019) Response Surface Modeling of Fuel Rich and Fuel Lean Catalytic Combustion of the Stabilized Confined Turbulent Gaseous Diffusion Flames. World Journal of Engineering and Technology, 7, 1-17.

https://doi.org/10.4236/wjet.2019.71001

[57] Panerati, J., Matthias, A., Patience, S.C., Beltrame, G. and Patience, G.S. (2019) Experimental Methods in Chemical Engineering: Artificial Neural Networks—ANNs. The Canadian Journal of Chemical Engineering, 97, 2372-2382.

https://doi.org/10.1002/cjce.23507

[58] McCulloch, W. and Pitts, W. (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. The Bulletin of Mathematical Biophysics, 5, 115-133.

https://doi.org/10.1007/BF02478259

[59] Eftekhari, M., Yadollahi, A., Ahmadi, H., Shojaeiyan, A. and Ayyari, M. (2018) Development of an Artificial Neural Network as a Tool for Predicting the Targeted Phenolic Profile of Grapevine (Vitis vinifera) Foliar Wastes. Frontiers in Plant Science, 9, 837-846.

https://doi.org/10.3389/fpls.2018.00837

[60] Bezsonov, O., Ilyunin, O., Kaldybaeva, B., Selyakov, O., Perevertaylenko, O., Khusanov, A., Rudenko, O., Udovenko, S., Shamraev, A. and Zorenko, V. (2019) Resource and Energy Saving Neural Network-Based Control Approach for Continuous Carbon Steel Pickling Process. Journal of Sustainable Development of Energy, Water and Environment Systems, 7, 275-292.

https://doi.org/10.13044/j.sdewes.d6.0249

[61] Dudzik, A. and Potrzeszcz-Sut, B. (2019) The Structural Reliability Analysis Using Explicit Neural State Functions. MATEC Web of Conferences, 262, 10002-10008.

[62] Gomes, W.J.S. (2019) Structural Reliability Analysis Using Artificial Neural Networks. The ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B, 5, Article ID: 041004.

https://doi.org/10.1115/1.4044040

[63] Mohd Noor, C.W., Mamat, R., Najafi, G., Wan Nik, W.B. and Fadhil, M. (2015) Application of Artificial Neural Network for Prediction of Marine Diesel Engine Performance. IOP Conference Series: Materials Science and Engineering, 100, Article ID: 012023.

https://doi.org/10.1088/1757-899X/100/1/012023

[64] Speck, F., Raja, S., Ramesh, V. and Thivaharan, V. (2016) Modeling and Optimization of Homogenous Photo-Fenton Degradation of Rhodamine B by Response Surface Methodology and Artificial Neural Network. International Journal of Environmental Research, 10, 543-554.

[65] Beale, M.H., Hagan, M.T. and Demuth, H.B. (2016) Neural Network Toolbox: User’s Guide. The MathWorks, Inc., Natick, MA.

[66] Ayodele, B.V., Mustapa, S., Alsaar, M.A and Cheng, C.K. (2019) Artificial Intelligence Modeling Approach for the Prediction of CO-Rich Hydrogen Production Rate from Methane Dry Reforming. Catalysts, 9, Article ID: 738.

https://doi.org/10.3390/catal9090738

[67] De Castro, L.N. and Von Zuben, F.J. (2002) Automatic Determination of Radial Basis Functions: An Immunity-Based Approach. International Journal of Neural Systems, 11, 523-535.

https://doi.org/10.1142/S0129065701000941

[68] Amin, N.A.S. and Istadi, I. (2012) Different Tools on Multi-Objective Optimization of a Hybrid Artificial Neural Network—Genetic Algorithm for Plasma Chemical Reactor Modeling. In: Roeva, O., Ed., Real-World Applications of Genetic Algorithms, IntechOpen, London, 1-26.

https://doi.org/10.5772/38290

[69] Fath, A.H., Pouranfard, A. and Foroughizadeh, P. (2018) Development of an Artificial Neural Network Model for Prediction of Bubble Point Pressure of Crude Oils. Petroleum, 4, 281-291.

https://doi.org/10.1016/j.petlm.2018.03.009

[70] Najafi, B., Ardabili, S.F., Mosavi, A., Shamshirband, S. and Rabczuk, T. (2018) An Intelligent Artificial Neural Network-Response Surface Methodology Method for Accessing the Optimum Biodiesel and Diesel Fuel Blending Conditions in a Diesel Engine from the View Point of Exergy and Energy Analysis. Energies, 11, Article ID: 860.

https://doi.org/10.3390/en11040860

[71] Shafi, J., Zhonghua, S., Ji, M., Gu, Z. and Ahmad, W. (2018) ANN and RSM Based Modeling for Optimization of Cell Dry Mass of Bacillus sp. Strain B67 and Its Antifungal Activity against Botrytis cinerea. Biotechnology & Biotechnological Equipment, 32, 58-68.

https://doi.org/10.1080/13102818.2017.1379359

[72] Ravikumar, K., Krishnan, S., Ramalingam, S. and Balu, K. (2007) Optimization of Process Variables by the Application of Response Surface Methodology for Dye Removal Using a Novel Adsorbent. Dyes and Pigments, 72, 66-74.

https://doi.org/10.1016/j.dyepig.2005.07.018

[73] Taheri-Garavand, A., Karimi, F., Karimi, M., Lotfi, V. and Khoobbakht, G. (2017) Hybrid Response Surface Methodology-Artificial Neural Network Optimization of Drying Process of Banana Slices in a Forced Convective Dryer. Food Science and Technology International, 24, 277-291.

https://doi.org/10.1177/1082013217747712

[74] Amin, N.A.S., Yusof, K.M. and Isha, R. (2005) Carbon Dioxide Reforming of Methane to Syngas: Modeling Using Response Surface Methodology and Artificial Neural Network. Jurnal Teknologi, 43, 15-30.

[75] Tang, S.Y., Lee, J.S., Loh, S.P. and Tham, H.J. (2017) Application of Artificial Neural Network to Predict Colour Change, Shrinkage and Texture of Osmotically Dehydrated Pumpkin. IOP Conference Series: Materials Science and Engineering, 206, Article ID: 012036.

https://doi.org/10.1088/1757-899X/206/1/012036

[76] Mishra, A., Jaiswal, K., Rose, A.R. and Nidigonda, G. (2019) Application of Neural Network Techniques in Friction Stir Welding Process. International Journal for Research in Applied Science & Engineering Technology, 7, 838-845.

https://doi.org/10.22214/ijraset.2019.4151

[77] Elmaz, F., Yücel, O. and Mutlu, A.Y. (2020) Predictive Modeling of the Syngas Production from Methane Dry Reforming over Cobalt Catalyst with Statistical and Machine Learning Based Approaches. International Journal of Advances in Engineering and Pure Sciences, 1, 8-14.

[78] Amar, M.N., Zeraibi, N. and Redouane, K. (2018) Bottom Hole Pressure Estimation Using Hybridization Neural Networks and Grey Wolves Optimization. Petroleum, 4, 419-429.

https://doi.org/10.1016/j.petlm.2018.03.013

[79] Kayri, M. (2015) An Intelligent Approach to Educational Data: Performance Comparison of the Multilayer Perceptron and the Radial Basis Function Artificial Neural Networks. Educational Sciences: Theory & Practice, 15, 1247-1255.

[80] Shihani, N., Kumbhar, B.K. and Kulshreshtha, M. (2006) Modeling of Extrusion Process Using Response Surface Methodology and Artificial Neural Networks. Journal of Engineering Science and Technology, 1, 31-40.

[81] Singh, P. (2006) Suitability of Different Neural Networks in Daily Flow Forecasting. Applied Soft Computing, 7, 968-978.

https://doi.org/10.1016/j.asoc.2006.05.003