Repetitive Nearest Neighbor Based Simulated Annealing Search Optimization Algorithm for Traveling Salesman Problem

1. Introduction

The traveling salesman problem (TSP) is a well-known combinatorial optimization problem that has been extensively studied in fields such as mathematics, artificial intelligence, physics, operations research, and biology. It is the problem of finding the shortest possible route through a list of cities, where each city is visited once and only once and the route finally returns to the starting city. The history of the TSP is believed to date back to 1920s Vienna [1], but a formal description of the problem was first formulated by Dantzig et al. in 1954 [2]. Since then it has been used to model diverse real-world problems, such as designing hardware devices, microchips, and radio electronic devices, data association, data transmission in computer networks, DNA sequencing, vehicle routing, job scheduling, clustering of data arrays, image processing and pattern recognition, analysis of crystal structures, transportation, logistics, and supply chain management [3] [4]. The TSP is easy to state but often very difficult to solve, as it exhibits all the features of a combinatorial optimization problem. In fact, it has been proven to be a non-deterministic polynomial (NP) hard problem [5]. By NP-hard, we mean a problem for which no polynomial-time algorithm is known that solves it exactly. Indeed, the execution time of any existing exact algorithm for the TSP grows super-polynomially (perhaps exponentially) with the number of cities [4]. Thus, improving solution algorithms for the TSP has important theoretical, engineering, and practical significance.

In graph theory, the TSP can be illustrated by a complete directed graph $G=\left(V,E\right)$, where V represents the set of cities, also called nodes or vertices, and E denotes the set of edges, i.e., the paths between each pair of distinct vertices. A distance (cost) matrix $D=\left({d}_{ij}\right)$ assigns a weight to each edge ${e}_{ij}\in E$. Depending on the distance (cost) matrix D, the TSP can be categorized as symmetric or asymmetric. The graph G defines a symmetric TSP if all edges of G are symmetric, i.e., ${d}_{ij}={d}_{ji}$ for all ${e}_{ij}\in E$. On the other hand, G defines an asymmetric TSP if there exists at least one edge ${e}_{ij}\in E$ for which ${d}_{ij}\ne {d}_{ji}$. In this study, we consider the symmetric TSP. The objective function Z of the TSP can be formulated as follows [3] :

$Z=\mathrm{min}{\displaystyle \underset{i}{\sum}{\displaystyle \underset{j}{\sum}{d}_{ij}{x}_{ij}}};\text{ }{x}_{ij}=\left\{\begin{array}{ll}1& \text{if the edge }{e}_{ij}\text{ is in the route}\\ 0& \text{otherwise}\end{array}\right.$ (1)

with respect to the following constraints:

$\underset{i=1}{\overset{n}{\sum}}{x}_{ij}=1,\text{ }j=1,2,3,\cdots ,n$ (2)

$\underset{j=1}{\overset{n}{\sum}}{x}_{ij}=1,\text{ }i=1,2,3,\cdots ,n$ (3)

${x}_{ij}\in \left\{0,1\right\},\text{ }i,j=1,2,3,\cdots ,n$ (4)

$\underset{i,j\in S}{\sum}{x}_{ij}\le \left|S\right|-1,\text{ }2\le \left|S\right|\le n-2$ (5)

In the above model, Equation (1) is the total distance (cost) to be minimized, and Equations (2)-(5) are the constraints of the model. Equation (2) ensures that each city j is entered exactly once, whereas Equation (3) guarantees that each city i is departed from exactly once. Equation (4) states the zero-one integrality constraint on the variables ${x}_{ij}$. Finally, Equation (5) is the sub-tour elimination constraint: it assures that the final route visits every city once and that no sub-routes are formed.
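As a quick illustration of Equation (1), the following sketch evaluates the objective Z for a candidate route on a small hypothetical 4-city distance matrix (the matrix values are invented purely for illustration):

```python
# Hypothetical symmetric 4-city distance matrix (illustrative values only).
D = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def route_cost(route, dist):
    """Objective of Equation (1): sum of d_ij over the edges of the closed
    tour, including the edge back from the last city to the first."""
    n = len(route)
    return sum(dist[route[k]][route[(k + 1) % n]] for k in range(n))

print(route_cost([0, 1, 3, 2], D))  # tour 0 -> 1 -> 3 -> 2 -> 0: 2+4+3+9 = 18
```

Because D is symmetric here, traversing the same tour in reverse yields the same cost.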

Due to its applicability and complexity, researchers have designed and developed various optimization algorithms over the last few decades to deal with the TSP. Among these, heuristic algorithms are the most successful and widely used search mechanisms for solving the TSP. Heuristic algorithms offer satisfactory solutions but often suffer from premature convergence: the search process easily becomes trapped in a local optimum and is unable to move the solution into another promising region of the search space. For this reason, many researchers have turned to developing hybrid optimization algorithms that integrate the strengths of two or more heuristic optimization algorithms. The aim of this paper is to design and implement a hybrid optimization algorithm called RNN-SA for solving the symmetric TSP, which combines the search processes of the RNN algorithm and the SA algorithm. The proposed RNN-SA algorithm performs the search in two stages. It first generates a set of feasible routes step by step based on the RNN procedure. These routes are then used as the initial solutions of the SA algorithm and are improved in an effective iterative improvement process. The proposed algorithm is implemented on a collection of benchmark TSP datasets. The proposed RNN-SA obtains better results than both the RNN and SA algorithms and performs better than some other hybrid optimization algorithms.

The rest of this paper is organized as follows. In Section 2, we review some related hybrid algorithms for solving the TSP problem. In Section 3, we briefly introduce the optimization approaches under consideration in this study that make up the proposed hybrid algorithm. In Section 4, we discuss the proposed hybrid RNN-SA algorithm in detail. In Section 5, we present experimental results, result analysis, and performance comparisons. The conclusion of the paper is summarized in Section 6.

2. Related Work

In this section, we review some recently published hybrid optimization procedures that use different strategies to develop sophisticated solution methods for the TSP. Geng et al. proposed an adaptive hybrid algorithm called ASA-GS, combining the problem-solving efforts of the SA algorithm and a greedy search (GS) mechanism, to solve the symmetric TSP [6]. They use a greedy search strategy within the optimization procedure of the general SA algorithm to speed up the convergence rate. Their experiments demonstrated that ASA-GS performs well on both small and large-scale TSP datasets and even outperforms some recently developed optimization algorithms. In fact, this composite algorithm found a better trade-off between solution quality and computation time for the symmetric TSP. Utilizing the benefits of the genetic algorithm (GA), SA, ant colony optimization (ACO), and particle swarm optimization (PSO), a hybrid algorithm named GSA-ACO-PSO was reported by Chen and Chien to tackle the symmetric TSP [7]. In this optimization framework, a set of feasible solutions is generated by the ant colony system, and these feasible solutions are adopted as the initial solutions of the GA procedure. The GA is then executed with SA mutation techniques to achieve better solutions. On the other hand, the role of the particle swarm optimization process is to facilitate the exchange of pheromone information among the populations in the ant colony system after a predefined number of cycles. The experimental evaluations indicated that this hybrid algorithm exhibits better performance than some other related optimization algorithms.

Deng et al. developed a hybrid algorithm based on the evolutionary concepts of the GA, ACO, and PSO algorithms to solve the symmetric TSP [3]. In the implementation of this algorithm, a series of sub-optimal solutions is first generated through the combination of the wholeness, randomness, and rapidity of the PSO and GA techniques. The resulting solutions are then exploited by the ACO algorithm, which benefits from parallelism, positive feedback, and higher accuracy. Their evaluation illustrated that the algorithm performs better than some other evolutionary TSP-solving algorithms. In [8], Zhan et al. proposed a new version of the SA algorithm, named the list-based SA algorithm (LBSA), to solve the symmetric TSP. In this approach, a list-based cooling schedule is adopted to control temperature reduction in the basic SA algorithm. Their experimental results indicated that LBSA finds good approximate solutions and outperforms some other state-of-the-art algorithms. A hybrid optimization algorithm named SOS-SA, combining the strengths of the symbiotic organism search (SOS) and SA algorithms, was reported by Ezugwu et al. [9]. In this optimization framework, the initial feasible solutions for the SOS algorithm are generated by applying the conventional SA algorithm. These solutions are then modified and improved through the three intelligent optimization phases of the SOS algorithm. Comparative results demonstrated that the SOS-SA framework can yield optimal TSP solutions and shows competitive behavior with other state-of-the-art optimization algorithms.

Hore et al. reported a hybrid variable neighborhood search (HVNS) algorithm to solve both symmetric and asymmetric TSPs [4]. They accomplish the search procedure in two stages: the algorithm first generates an initial feasible solution through a route-construction-based greedy approach, and then improves this solution iteratively using various neighborhood structures with stochastic stopping criteria. The evaluation found that it performs better than conventional optimization algorithms, as well as the VNS-1 and VNS-2 algorithms. In [10], Tsai et al. proposed a hybrid algorithm called ACOMAC by introducing the multiple ant clans (MAC) idea into the ACO algorithm. They also discussed the ant colony system (ACS) for solving the TSP. In this work, multiple nearest neighbor (NN) and dual nearest neighbor (DNN) approaches are combined separately with both ACOMAC and ACS to enhance their performance. Their experimental analysis indicated that the performance of both the basic ACOMAC and ACS is enhanced significantly when combined with NN and DNN, and that ACOMAC + DNN performs better than the other discussed algorithms.

3. Methods of Study

In this section, we give a brief introduction to the optimization techniques under consideration in this study that make up the proposed hybrid algorithm for solving the TSP. The RNN optimization algorithm and the SA optimization algorithm are briefly discussed in the following subsections in turn.

3.1. Repetitive Nearest Neighbor Optimization Algorithm

The RNN is a route construction algorithm that extends the well-known nearest neighbor (NN) algorithm [11]. The NN algorithm attempts to construct the route based on connections to nearest neighbors. It starts with a city chosen at random as the starting city of the route and then repeatedly adds the city closest to the last city added. The performance of this algorithm is highly sensitive to the choice of starting city. The RNN algorithm was developed to remedy this: it carries out the construction from every possible starting city and therefore performs better than the NN algorithm for the TSP. However, its time complexity is of order $O\left({n}^{3}\right)$, while the running time of NN is reported as $O\left({n}^{2}\right)$ [12]. The RNN algorithm constructs a set of routes step by step, and its route construction procedure can be described as the following sequence of steps.

Step-1: Let $C=\left\{{c}_{1},{c}_{2},{c}_{3},\cdots ,{c}_{n}\right\}$ be the list of n cities and $d\left({c}_{i},{c}_{j}\right)$ be the cost/distance between the cities ${c}_{i}$ and ${c}_{j}$, where ${c}_{i}$ represents the i^{th} city. In the first step, the search engine generates n sub-routes, each consisting of a single city. The set of constructed sub-routes can be expressed as follows:

${R}_{1}=\left\{{c}_{i};i=1,2,\cdots ,n\right\}$ (6)

Step-2: In this step, each sub-route obtained from the first step is extended by adding its nearest city among the remaining cities. Thus, the set of n possible sub-routes is constructed on the basis of Equation (7).

${R}_{2}=\left\{{c}_{i}{c}_{j};\forall {c}_{i}\in {R}_{1};\mathrm{min}d\left({c}_{i},{c}_{j}\right),\forall {c}_{j}\in C;i\ne j;j=1,2,\cdots ,n\right\}$ (7)

In Equation (7), $d\left({c}_{i},{c}_{j}\right)$ is the Euclidean distance between the cities ${c}_{i}$ and ${c}_{j}$, which is calculated by the formula in Equation (8). Let $\left({x}_{i},{y}_{i}\right)$ and $\left({x}_{j},{y}_{j}\right)$ be the Cartesian coordinates of the locations of the cities ${c}_{i}$ and ${c}_{j}$. Then, the Euclidean distance between ${c}_{i}$ and ${c}_{j}$ is:

$d\left({c}_{i},{c}_{j}\right)=\sqrt{{\left({x}_{i}-{x}_{j}\right)}^{2}+{\left({y}_{i}-{y}_{j}\right)}^{2}}$ (8)

Step-3: In this step, each 2-city sub-route of ${R}_{2}$ is extended with the nearest city that is not yet in the route. Thus, the set of possible 3-city sub-routes is constructed as follows:

${R}_{3}=\left\{{c}_{i}{c}_{j}{c}_{k};\forall {c}_{i}{c}_{j}\in {R}_{2};\mathrm{min}d\left({c}_{j},{c}_{k}\right),\forall {c}_{k}\in C;i,j\ne k;k=1,2,\cdots ,n\right\}$ (9)

where $\left|{R}_{3}\right|=n$ is the total number of routes in the set ${R}_{3}$. In this way, the route construction mechanism continues until all the cities are included in each route. After completing the n^{th} step, we obtain a set of routes ${R}_{n}$, where $\left|{R}_{n}\right|=n$ and each route of ${R}_{n}$ contains n cities. To generate feasible TSP routes, we append the starting city at the ${\left(n+1\right)}^{\text{th}}$ position of each route in ${R}_{n}$. Finally, we obtain a set of n feasible routes, from which the best route is selected. In this paper, the RNN method is used to generate a set of n feasible TSP routes.
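The construction above amounts to running the NN rule once from each of the n starting cities and closing each route at its start. A minimal Python sketch of this procedure (assuming Euclidean coordinates, as in Equation (8)) might look like:

```python
import math

def nn_route(start, coords):
    """Build one closed route with the NN rule from a fixed starting city."""
    unvisited = set(range(len(coords))) - {start}
    route = [start]
    while unvisited:
        last = route[-1]
        nearest = min(unvisited,
                      key=lambda c: math.dist(coords[last], coords[c]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route + [start]  # append the start at the (n+1)-th position

def rnn_routes(coords):
    """RNN: one feasible closed route per starting city (n routes in total)."""
    return [nn_route(s, coords) for s in range(len(coords))]

coords = [(0, 0), (1, 0), (3, 0), (7, 0)]  # 4 cities on a line
print(nn_route(0, coords))  # [0, 1, 2, 3, 0]
```

The best route is then the one of minimum total length among the n returned routes.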

3.2. Simulated Annealing Optimization Algorithm

The SA algorithm for solving the TSP was first introduced by Kirkpatrick et al. in 1983 [13]. It is designed on the idea of the annealing process, in which metal atoms reach low-energy states in a heat bath. Metal atoms become unstable at high temperature and explore other states; during cooling, they find states of lower energy than their current state. The procedure thus consists of two steps. First, the temperature of the heat bath is increased to a maximum value so that the metal melts. Then, the temperature is reduced so that the particles cool until they reach a steady state. With regard to the search process, this algorithm relies on two important strategies: diversification and intensification. Setting the initial temperature high allows exploration of the search space (diversification). As the particles cool toward a steady state, the search process converges to a local minimum (intensification). To maintain a proper balance between the diversification and intensification strategies, a suitable temperature cooling schedule is required. We use an optimized cooling schedule in this paper, defined in Equation (12).

The SA algorithm starts by randomly generating a set of initial solutions. New candidate solutions (states) are generated from the current solutions and the specified neighborhood structure. The neighborhood structure for generating new solutions is briefly discussed in Subsection 3.2.1. A new solution is accepted by the algorithm when its energy (fitness) value is lower than that of the current solution. On the other hand, a non-improving new solution is accepted with the transition probability p, defined as follows:

$p={\text{e}}^{\left(-\frac{\Delta f}{{T}_{k}}\right)},$ (10)

$\Delta f=\frac{f\left({{x}^{\prime}}_{i}\right)-f\left({x}_{i}\right)}{f\left({x}_{i}\right)},$ (11)

where ${T}_{k}>0$ is the temperature in the k^{th} iteration, and $f\left({x}_{i}\right)$ and $f\left({{x}^{\prime}}_{i}\right)$ represent the fitness values of the current route and the new route, respectively. If $\Delta f\le 0$, the new solution ${{x}^{\prime}}_{i}$ is accepted. When $\Delta f>0$, the system replaces the current solution ${x}_{i}$ with probability p. It is noted from Equation (10) that the probability of accepting a bad solution decreases with the temperature, since $\underset{T\to 0}{\mathrm{lim}}{\text{e}}^{\left(-\frac{\Delta f}{{T}_{k}}\right)}=0$. Therefore, a rigorous temperature cooling schedule plays a vital role in the performance of the SA algorithm. The following cooling schedule is utilized in this work to solve the TSP:

${T}_{k+1}=\gamma {T}_{k},$ (12)

In the above equation, $\gamma$ represents the cooling coefficient, also called the temperature reduction rate; its value lies between 0 and 1. The process returns the best solution when the temperature reaches a predefined value or the maximum number of iterations is met. The procedure of the SA algorithm is presented in Algorithm 1.
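The acceptance rule of Equations (10)-(11) and the cooling schedule of Equation (12) can be sketched as follows (a minimal illustration in Python, not the authors' MATLAB implementation):

```python
import math

def transition_probability(delta_f, temperature):
    """Equation (10): improving moves (delta_f <= 0) are always accepted;
    worse moves are accepted with probability e^(-delta_f / T)."""
    if delta_f <= 0:
        return 1.0
    return math.exp(-delta_f / temperature)

def cool(temperature, gamma=0.99):
    """Equation (12): geometric cooling, T_{k+1} = gamma * T_k."""
    return gamma * temperature

# The same worse move becomes less likely as the temperature drops.
print(transition_probability(0.5, 1.0) > transition_probability(0.5, 0.1))  # True
```

In practice, the move is accepted when a uniform random draw in [0, 1) falls below this probability.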

Algorithm 1. Pseudocode of the SA algorithm.

3.2.1. Neighborhood Structure of the SA Algorithm

In this paper, we use three neighborhood operators, namely swap, reversion, and insertion, to generate a new solution from the current solution. The selection probabilities of swap, reversion, and insertion are 0.5, 0.2, and 0.3, respectively, and the operator is chosen by roulette wheel selection. These neighborhood operators are briefly discussed below.

Swap: The swap operator randomly selects the positions of two cities in a route ${x}_{i}$ and exchanges them to create a new route ${{x}^{\prime}}_{i}$. Let two positions i and j with $i\ne j$ be randomly selected from the route ${x}_{i}$. The swap operator then generates a new route ${{x}^{\prime}}_{i}$ by interchanging the cities at positions i and j of ${x}_{i}$. For example, let ${x}_{i}=\left(5,7,1,2,4,3,6\right)$ be a feasible route consisting of 7 cities. If the second and sixth positions are selected for the swap operation, the corresponding cities 7 and 3 of ${x}_{i}$ are exchanged, generating the new route ${{x}^{\prime}}_{i}=\left(5,3,1,2,4,7,6\right)$. The swap operation is depicted in Figure 1.

Reversion: The reversion operator first randomly locates the positions of two different cities in a route and then reverses the local path between these two cities. Consider a route ${x}_{i}$ and two cut points i and j with $i\ne j$ on ${x}_{i}$. The reversion operator inverts the cities between positions i and j of ${x}_{i}$ and generates a new route ${{x}^{\prime}}_{i}$. To explain the reversion process, consider a route ${x}_{i}$ with 7 cities, ${x}_{i}=\left(5,7,1,2,4,3,6\right)$. If the second and fifth positions are randomly selected in ${x}_{i}$, the reversion operator modifies ${x}_{i}$ into a new route ${{x}^{\prime}}_{i}$ by reversing the local path (7, 1, 2, 4) between the cities 7 and 4. After the reversion operation, the modified route is ${{x}^{\prime}}_{i}=\left(5,4,2,1,7,3,6\right)$. This process is displayed graphically in Figure 2.

Insertion: The insertion operator first randomly picks two positions i and j with $i\ne j$ on ${x}_{i}$, and then the city at the i^{th} position is inserted behind the city at the j^{th} position. To illustrate the insertion procedure, let ${x}_{i}$ be a 7-city route represented by ${x}_{i}=\left(5,7,1,2,4,3,6\right)$. Suppose the third and sixth positions are chosen in ${x}_{i}$ for the insertion operation. Then, the city 1 is placed behind the city 3, producing a new route ${{x}^{\prime}}_{i}$. After performing the insertion operation on ${x}_{i}$, the modified route is ${{x}^{\prime}}_{i}=\left(5,7,2,4,3,1,6\right)$. The insertion mechanism is illustrated in Figure 3.
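The three operators, applied to the 7-city example route above, can be sketched as follows (0-based positions, so the text's second and sixth positions are indices 1 and 5; the roulette wheel uses the stated probabilities 0.5/0.2/0.3):

```python
import random

def swap(route, i, j):
    """Exchange the cities at positions i and j."""
    r = list(route)
    r[i], r[j] = r[j], r[i]
    return r

def reversion(route, i, j):
    """Reverse the local path between positions i and j (inclusive)."""
    r = list(route)
    r[i:j + 1] = reversed(r[i:j + 1])
    return r

def insertion(route, i, j):
    """Remove the city at position i and re-insert it behind position j."""
    r = list(route)
    r.insert(j, r.pop(i))
    return r

def pick_operator(rng=random):
    """Roulette wheel selection with probabilities 0.5, 0.2, 0.3."""
    return rng.choices([swap, reversion, insertion],
                       weights=[0.5, 0.2, 0.3])[0]

x = [5, 7, 1, 2, 4, 3, 6]
print(swap(x, 1, 5))       # [5, 3, 1, 2, 4, 7, 6]
print(reversion(x, 1, 4))  # [5, 4, 2, 1, 7, 3, 6]
print(insertion(x, 2, 5))  # [5, 7, 2, 4, 3, 1, 6]
```

All three outputs match the worked examples of Figures 1-3.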

Figure 1. Illustration of the swap procedure [14].

Figure 2. Illustration of the reversion procedure [14].

Figure 3. Illustration of the insertion procedure [14].

4. Proposed Hybrid Approach (RNN-SA)

In this section, we present a hybrid method called RNN-SA for the solution of the symmetric TSP. The RNN algorithm is a simple and easily implementable constructive heuristic used to solve various optimization problems [11]. Its main benefit is that it produces a set of feasible solutions instead of a single solution; logically, there is a greater chance of obtaining a better solution from a set of solutions than from a single one. In addition, the RNN algorithm performs better than the NN algorithm and yields solutions very promptly. However, its solution quality is not good enough compared with other algorithms. On the other hand, the SA algorithm is a local-search-based heuristic extensively utilized for finding the optimum of both discrete and continuous optimization problems [13]. One of its main advantages is that it accepts non-improving solutions with a certain probability during the optimization process, in the sense that a solution slightly worse than the current one may lead to a better solution later. Indeed, SA has the capability to prevent the search process from becoming trapped in a local optimum by permitting uphill moves in the search for a global optimum. A local optimum is a point in the search space where all solution points in the neighborhood are worse than the current solution. However, the solution quality of the SA algorithm is still affected by the randomly initialized population. To overcome this problem, we adopt the RNN algorithm in the optimization process of the SA algorithm to enhance its performance. Therefore, a new hybrid algorithm (RNN-SA) is proposed by integrating the strengths of the RNN and SA algorithms to effectively exploit and explore the problem search space.

Suppose that there are n cities in the TSP instance. The proposed algorithm accomplishes the search procedure in two stages. First, the RNN algorithm is used to generate a set of n feasible routes step by step, and these routes are fed in as the initial solutions of the SA algorithm. Then, the system performs the SA algorithm to obtain better solutions through a robust search process that properly balances the diversification and intensification strategies. In fact, the feasible solutions obtained from the RNN algorithm are improved through the SA algorithm in an iterative improvement process. In each iteration, a set of neighborhood operators with their corresponding selection probabilities is utilized to generate a new solution from the current solution. The selected neighborhood operator performs n moves per iteration to generate the new solutions corresponding to each current solution. Thus, the proposed algorithm starts by generating a set of good routes with RNN and then adopts the SA algorithm to improve these routes iteratively. It stops when the maximum number of iterations is reached. The stopping condition is an important factor that can determine the final result of the experiment. If the algorithm is stopped too early, the quality of the solution might not be close to the best-known optimum; in contrast, prolonging the simulation incurs a considerable amount of unnecessary effort and time. So, it is crucial to consider the trade-off between solution quality and computational time. In this paper, we set a fixed iteration number of 1000, which is enough to obtain a satisfactory solution. The pseudocode and the architecture of the proposed hybrid optimization algorithm (RNN-SA) are presented in Algorithm 2 and Figure 4, respectively.
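The two-stage pipeline can be sketched end to end in a few dozen lines. This is a deliberately simplified illustration, not the authors' implementation: it assumes Euclidean coordinates, uses only the swap neighborhood, and takes illustrative values for t0, gamma, and iters rather than the tuned parameters of Table 1:

```python
import math
import random

def tour_length(route, coords):
    """Length of a closed route given as a city sequence ending at its start."""
    return sum(math.dist(coords[route[k]], coords[route[k + 1]])
               for k in range(len(route) - 1))

def nn_route(start, coords):
    """Stage 1 building block: NN construction from one starting city."""
    unvisited = set(range(len(coords))) - {start}
    route = [start]
    while unvisited:
        last = route[-1]
        nearest = min(unvisited,
                      key=lambda c: math.dist(coords[last], coords[c]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route + [start]

def rnn_sa(coords, t0=100.0, gamma=0.99, iters=1000):
    """RNN-SA sketch: build one route per start city (stage 1), then refine
    each with SA using the relative fitness change of Equation (11)."""
    best, best_len = None, float('inf')
    for start in range(len(coords)):
        route = nn_route(start, coords)
        cur_len = tour_length(route, coords)
        if cur_len < best_len:
            best, best_len = list(route), cur_len
        T = t0
        for _ in range(iters):
            # Swap two interior positions (endpoints stay fixed at the start city).
            i, j = random.sample(range(1, len(route) - 1), 2)
            cand = list(route)
            cand[i], cand[j] = cand[j], cand[i]
            cand_len = tour_length(cand, coords)
            delta = (cand_len - cur_len) / cur_len
            if delta <= 0 or random.random() < math.exp(-delta / T):
                route, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = list(route), cur_len
            T = gamma * T  # geometric cooling, Equation (12)
    return best, best_len
```

On a toy instance such as the four corners of a unit square, this sketch recovers the optimal perimeter tour of length 4.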

5. Experiments and Analysis

In this section, we evaluate the performance of the proposed hybrid RNN-SA algorithm through various experiments. To accomplish the experimental tasks, a set of real-world symmetric TSP datasets is taken from TSPLIB [15]. TSPLIB is a publicly available library that provides diverse testing datasets for combinatorial optimization problems; it also reports the best-known optimum solution for each dataset. We consider 24 benchmark symmetric TSP datasets with dimensions ranging from 51 to 318. The numerical value in a dataset name represents the dimension of that dataset. The experimental results of RNN-SA are first compared with the basic RNN and SA algorithms as well as the best-known optimum results reported by the data library. Then, it is compared

Algorithm 2. Pseudocode of the proposed hybrid (RNN-SA) algorithm.

Figure 4. Architecture of the proposed hybrid optimization algorithm (RNN-SA).

with the results of some other hybrid algorithms existing in the literature. The results of different runs may vary for the same TSP dataset owing to randomness in the optimization process of our RNN-SA algorithm. For this reason, the algorithm is run independently 5 consecutive times for each TSP dataset, and from these 5 values the best solution, average solution, worst solution, and standard deviation (SD) are reported. Both the basic algorithms (RNN and SA) and the proposed RNN-SA algorithm are coded in MATLAB R2017a and executed on a standard laptop computer with an Intel(R) Core(TM) i5-5337U CPU at 1.80 GHz, 4 GB RAM, and the Windows 10 operating system. The maximum number of iterations for both SA and RNN-SA is set to 1000, and in each run the best solution over the 1000 iterations is taken for a particular TSP dataset. The parameter values are tuned experimentally by trial and error, and their optimized values for both the SA and RNN-SA algorithms are presented in Table 1.

5.1. Experimental Results of the Proposed RNN-SA Algorithm

The experimental results of the proposed RNN-SA algorithm on the 24 benchmark symmetric TSP datasets are presented in Table 2. In Table 2, Scale represents the number of cities (dimension) of the dataset, and BKS denotes the best-known optimum solution reported by the data library. Best, Average, and Worst indicate the best, average, and worst of our algorithm's 5 optimum values, and SD is the standard deviation computed from those 5 values. PDbest (%) and PDav (%) denote the percentage deviations of the computed best solution and average solution with respect to the BKS, computed according to Equation

Table 1. List of parameters and their optimized values for both SA and RNN-SA algorithms.

Table 2. Experimental results of the proposed RNN-SA optimization algorithm for the 24 symmetric benchmark TSP datasets.

(13) and Equation (14). From Table 2, it can be observed that the proposed RNN-SA algorithm yields solutions very close to the BKS, and the error terms are very small. Besides, the obtained SD values are small, which means that the proposed RNN-SA algorithm stably computes a satisfactory solution in each run. The errors lie within the intervals [0, 2.53] and [0.02, 3.68] with respect to the best and average solutions, respectively; in fact, the average errors over all 24 considered datasets are 0.72 and 1.39, respectively. Therefore, it can be concluded that the proposed RNN-SA algorithm is a reliable optimization algorithm that produces solutions very near the best-known optimum solutions.

$\text{PDbest}\left(\%\right)=\frac{\text{Best}-\text{BKS}}{\text{BKS}}\times 100\%$ (13)

$\text{PDav}\left(\%\right)=\frac{\text{Average}-\text{BKS}}{\text{BKS}}\times 100\%$ (14)
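For instance, with a hypothetical BKS of 7542 and a computed best of 7598 (illustrative numbers, not taken from Table 2), Equation (13) gives a deviation of about 0.74%:

```python
def percentage_deviation(value, bks):
    """Equations (13)-(14): percentage deviation of a computed solution
    (best or average) from the best-known solution (BKS)."""
    return (value - bks) / bks * 100.0

print(round(percentage_deviation(7598, 7542), 2))  # 0.74
```

A deviation of 0 means the best-known solution was matched exactly.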

5.2. Performance Comparisons

In this subsection, we compare the performance of the proposed hybrid algorithm (RNN-SA) with the basic RNN and SA algorithms as well as some other hybrid algorithms. We code and execute the RNN-SA, RNN, and SA algorithms under the same experimental environment to ensure a fair comparison. For the other hybrid algorithms, the results of RNN-SA are compared with the results presented in the reference articles. The side-by-side comparisons are displayed in Tables 3-5, with the best results among the considered algorithms highlighted in boldface. In addition, the average over all considered datasets for each comparison is presented at the bottom of each table. Table 3 compares RNN-SA on the 24 symmetric TSP datasets with the basic RNN and SA algorithms. According to Table 3, the proposed RNN-SA algorithm shows better performance than both RNN and SA with respect to the best, average, and worst solutions on all tested datasets except bier127. On the bier127 dataset, our best result is inferior to SA, but the average and worst results are superior. Besides, the SD value of our algorithm is smaller than that of SA on nearly all datasets, which suggests that RNN-SA is more stable than the SA algorithm. The averages of the best, average, and worst results and the SD values over all 24 datasets for RNN-SA are 29,017.96, 29,295.97, 29,518.39, and 190.5763, respectively, which are better than the 29,303.13, 29,688.43, 29,994.45, and 306.0842 of SA, as well as the 33,137.39 of RNN.

The comparison of the proposed hybrid algorithm (RNN-SA) with the other hybrid algorithms (ACS + NN, ACOMAC + NN, ACS + DNN, and ACOMAC + DNN) is shown in Table 4. From Table 4, it is visible that the proposed RNN-SA algorithm significantly dominates each of the other four hybrid algorithms on all four datasets. In fact, the average over all datasets for RNN-SA is 9527.753, which is better than the 9759.98 of ACS + NN, 9612.828 of ACOMAC + NN, 9691.583 of ACS + DNN, and 9586.613 of ACOMAC + DNN. Table 5 shows the performance comparison of RNN-SA with the hybrid variable neighborhood

Table 3. Comparison of the proposed RNN-SA algorithm with the basic RNN and SA algorithms.

Table 4. Comparison of the proposed RNN-SA algorithm with the ACS + NN (Tsai et al., 2004), ACOMAC + NN (Tsai et al., 2004), ACS + DNN (Tsai et al., 2004), and ACOMAC + DNN (Tsai et al., 2004) hybrid algorithms.

Table 5. Comparison of the proposed RNN-SA algorithm with the HVNS (Hore et al., 2018) hybrid algorithms.

search (HVNS) algorithm on 24 symmetric TSP datasets. As shown in Table 5, the proposed RNN-SA algorithm exhibits competitive behavior with HVNS in terms of solution quality. HVNS finds better results on some small-scale datasets, but as the scale increases, our algorithm shows better global search capability than HVNS. The averages of the best results, average results, and worst results of RNN-SA over all 24 datasets are 29,017.96, 29,295.97, and 29,429.17, respectively, which are better than the 29,350.41, 29,376.41, and 29,518.39 of HVNS.

6. Conclusion

We have developed a hybrid heuristic algorithm, called RNN-SA, by integrating the RNN algorithm with the SA algorithm for finding the optimum solution of the symmetric TSP. The proposed algorithm consists of two main stages. In the first stage, the RNN algorithm constructs a set of feasible routes step by step; the concept of NN is adopted in this stage during the route construction process. In the second stage, these routes are improved through an iterative improvement process based on the SA algorithm, in which three neighborhood operators are utilized to generate new solutions. Feeding the feasible routes obtained from the RNN algorithm into the SA algorithm avoids random initialization and increases performance. The experiments are conducted on a set of benchmark symmetric TSP datasets taken from the TSPLIB. The experimental results demonstrate that the proposed hybrid RNN-SA algorithm yields solutions very close to the best-known optimum solutions and performs better than both of the basic algorithms (RNN and SA). In addition, the proposed hybrid optimization algorithm outperforms several other hybrid optimization algorithms in the literature in terms of solution quality.
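The two-stage pipeline described above can be sketched compactly. The sketch below builds one NN route per starting city (stage 1), seeds SA with the best of them, and applies geometric cooling with a Metropolis acceptance rule (stage 2). The three operators shown (swap, segment reversal, insertion) are commonly used stand-ins, since the paper's operators are not named in this section; the cooling parameters are likewise illustrative, not the authors' settings.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of the closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbor(start, dist):
    """Stage 1 building block: greedy NN route beginning at `start`."""
    unvisited = set(range(len(dist))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def neighbor(tour, rng):
    """Random move via one of three illustrative operators."""
    t = tour[:]
    i, j = sorted(rng.sample(range(len(t)), 2))
    op = rng.randrange(3)
    if op == 0:                       # swap two cities
        t[i], t[j] = t[j], t[i]
    elif op == 1:                     # reverse a segment (2-opt style)
        t[i:j + 1] = reversed(t[i:j + 1])
    else:                             # move one city to a new position
        t.insert(j, t.pop(i))
    return t

def rnn_sa(dist, t0=1000.0, t_end=1e-2, alpha=0.95, moves=100, seed=0):
    rng = random.Random(seed)
    # Stage 1: repetitive NN -- the best route over all start cities
    # seeds SA instead of a random permutation.
    current = min((nearest_neighbor(s, dist) for s in range(len(dist))),
                  key=lambda t: tour_length(t, dist))
    cur_len = tour_length(current, dist)
    best, best_len = current[:], cur_len
    temp = t0
    # Stage 2: simulated annealing with geometric cooling.
    while temp > t_end:
        for _ in range(moves):
            cand = neighbor(current, rng)
            cand_len = tour_length(cand, dist)
            delta = cand_len - cur_len
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                current, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = current[:], cur_len
        temp *= alpha
    return best, best_len
```

Because SA starts from the best RNN route and tracks the incumbent, the returned tour can never be worse than the stage-1 seed, which is the sense in which the RNN seeding "avoids random initialization and increases performance".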


References

[1] Applegate, D., Bixby, R., Cook, W. and Chvátal, V. (1998) On the Solution of Traveling Salesman Problems. Documenta Mathematica, Extra Volume of ICM, Chapter 3, 645-656.

[2] Dantzig, G., Fulkerson, R. and Johnson, S. (1954) Solution of a Large-Scale Traveling-Salesman Problem. Journal of the Operations Research Society of America, 2, 393- 410. https://doi.org/10.1287/opre.2.4.393

[3] Deng, W., Chen, R., He, B., Liu, Y., Yin, L. and Guo, J. (2012) A Novel Two-Stage Hybrid Swarm Intelligence Optimization Algorithm and Application. Soft Computing, 16, 1707-1722. https://doi.org/10.1007/s00500-012-0855-z

[4] Hore, S., Chatterjee, A. and Dewanji, A. (2018) Improving Variable Neighborhood Search to Solve the Traveling Salesman Problem. Applied Soft Computing, 68, 83-91.
https://doi.org/10.1016/j.asoc.2018.03.048

[5] Garey, M.R. and Johnson, D.S. (2002) Computers and Intractability: A Guide to the Theory of NP-Completeness. Vol. 29, W.H. Freeman, New York.

[6] Geng, X., Chen, Z., Yang, W., Shi, D. and Zhao, K. (2011) Solving the Traveling Salesman Problem Based on an Adaptive Simulated Annealing Algorithm with Greedy Search. Applied Soft Computing, 11, 3680-3689.
https://doi.org/10.1016/j.asoc.2011.01.039

[7] Chen, S.-M. and Chien, C.-Y. (2011) Solving the Traveling Salesman Problem Based on the Genetic Simulated Annealing Ant Colony System with Particle Swarm Optimization Techniques. Expert Systems with Applications, 38, 14439-14450.
https://doi.org/10.1016/j.eswa.2011.04.163

[8] Zhan, S.H., Lin, J., Zhang, Z.J. and Zhong, Y.W. (2016) List-Based Simulated Annealing Algorithm for Traveling Salesman Problem. Computational Intelligence and Neuroscience, 2016, Article ID: 1712630. https://doi.org/10.1155/2016/1712630

[9] Ezugwu, A.E.-S., Adewumi, A.O. and Frîncu, M.E. (2017) Simulated Annealing Based Symbiotic Organisms Search Optimization Algorithm for Traveling Salesman Problem. Expert Systems with Applications, 77, 189-210.
https://doi.org/10.1016/j.eswa.2017.01.053

[10] Tsai, C.-F., Tsai, C.-W. and Tseng, C.-C. (2004) A New Hybrid Heuristic Approach for Solving Large Traveling Salesman Problem. Information Sciences, 166, 67-81.
https://doi.org/10.1016/j.ins.2003.11.008

[11] Rosenkrantz, D.J., Stearns, R.E. and Lewis II, P.M. (1977) An Analysis of Several Heuristics for the Traveling Salesman Problem. SIAM Journal on Computing, 6, 563-581. https://doi.org/10.1137/0206041

[12] Johnson, D.S. and McGeoch, L.A. (1997) The Traveling Salesman Problem: A Case Study in Local Optimization. Local Search in Combinatorial Optimization, 1, 215-310.

[13] Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P. (1983) Optimization by Simulated Annealing. Science, 220, 671-680. https://doi.org/10.1126/science.220.4598.671

[14] Wang, Y., Wu, Y.W. and Xu, N. (2019) Discrete Symbiotic Organism Search with Excellence Coefficients and Self-Escape for Traveling Salesman Problem. Computers & Industrial Engineering, 131, 269-281. https://doi.org/10.1016/j.cie.2019.04.008

[15] TSPLIB. http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/