by VIKTOR

In this sample application, we showcase three approaches – 2-opt, a genetic algorithm, and self-organizing maps – to the popular traveling salesman problem.


The traveling salesman problem (TSP) is a popular NP-hard problem that tries to answer the following question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once before returning to the city of origin?”
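To make the definition concrete, here is a minimal brute-force sketch (not part of the app itself) that tries every tour of a tiny instance; the function names are our own for illustration:

```python
import itertools
import math

def route_length(cities, order):
    """Total length of the closed tour visiting cities in the given order."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = cities[order[i]]
        x2, y2 = cities[order[(i + 1) % len(order)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def brute_force_tsp(cities):
    """Try every permutation; only feasible for a handful of cities."""
    n = len(cities)
    best_order, best_length = None, float("inf")
    # Fix city 0 as the start to avoid counting rotations of the same tour
    for perm in itertools.permutations(range(1, n)):
        order = (0,) + perm
        length = route_length(cities, order)
        if length < best_length:
            best_order, best_length = order, length
    return best_order, best_length

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # the corners of a unit square
order, length = brute_force_tsp(cities)
# the optimal tour walks the square's perimeter: (0, 1, 2, 3) with length 4.0
```

Since the number of tours grows factorially with the number of cities, this approach breaks down almost immediately, which is exactly why the heuristics below exist.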

First introduced in 1930, the TSP has been an intensively studied optimization problem ever since. Even though the TSP is computationally difficult, exact algorithms are now known that can solve instances with tens of thousands of cities to optimality. Even instances with millions of cities can be approximated to within a small fraction of 1% of the optimal solution.

Today, the TSP is applied in instances such as planning, logistics, and the manufacture of microchips.

The goal of this application is to showcase some of the approaches to the TSP in the VIKTOR environment. While no new approach is developed in this application, the app gives you the opportunity to play around with different algorithms to try to get the best solution for a generated topology.

In the application, you can choose between two different topologies:

**Circle:** This topology places X points on a circle with a radius of Y, based on your definition of X and Y. We have added this option so you can check whether an algorithm is able to determine the shortest possible route, which for this topology is the circle itself.
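A circle topology like this can be generated in a few lines; this sketch (with our own function name, not the app's) spaces X points evenly over a circle of radius Y:

```python
import numpy as np

def circle_topology(num_points: int, radius: float) -> np.ndarray:
    """Place num_points evenly on a circle of the given radius."""
    angles = np.linspace(0, 2 * np.pi, num_points, endpoint=False)
    return np.column_stack((radius * np.cos(angles), radius * np.sin(angles)))

cities = circle_topology(8, 1.0)  # 8 cities on the unit circle
```

Because every city sits exactly on the circle, the optimal tour simply visits them in angular order, which makes this topology a convenient sanity check.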

**Random:** This topology chooses pseudorandom X and Y locations on a 0-1 grid. It is also possible to enter a seed Z to initialize the pseudorandom number generator. This allows you to recreate the same random topology, which you can use, for example, to compare different algorithms on identical input.
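A seeded random topology can be sketched as follows (again, the function name is ours; the app's implementation may differ):

```python
import numpy as np

def random_topology(num_points: int, seed: int) -> np.ndarray:
    """Draw num_points pseudorandom (x, y) locations on the 0-1 grid."""
    rng = np.random.default_rng(seed)
    return rng.random((num_points, 2))

a = random_topology(20, seed=42)
b = random_topology(20, seed=42)
# the same seed reproduces the exact same topology
```

Reusing the seed is what makes head-to-head comparisons between algorithms fair: each one gets the identical set of cities.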

For each topology that you generate, a storage entry is created. This is done by creating a hash of the coordinates of all the cities. In this storage, the shortest distance found across all your runs is saved as the topology's high score. For each run on a topology, you can then compare the resulting distance with the high score.
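The idea of keying a high score on a hash of the coordinates can be sketched like this. This is an assumption-laden illustration: the dictionary stands in for the app's actual storage, and `topology_key` and `update_high_score` are hypothetical names:

```python
import hashlib
import json

def topology_key(cities) -> str:
    """Stable hash of the city coordinates, used as a storage key."""
    payload = json.dumps([[round(x, 9), round(y, 9)] for x, y in cities])
    return hashlib.sha256(payload.encode()).hexdigest()

# In-memory stand-in for the app's persistent storage
high_scores = {}

def update_high_score(cities, distance: float) -> float:
    """Record the distance if it beats the stored high score; return the best."""
    key = topology_key(cities)
    best = high_scores.get(key)
    if best is None or distance < best:
        high_scores[key] = distance
    return high_scores[key]

cities = [(0.1, 0.2), (0.5, 0.9), (0.8, 0.3)]
update_high_score(cities, 10.0)
best = update_high_score(cities, 8.0)  # improves the high score
```

Rounding the coordinates before hashing keeps the key stable against floating-point noise, so identical topologies always map to the same entry.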

The application contains three different algorithms, some well known and some lesser known, that have previously been used to solve the TSP. If you switch between the algorithms, a different explanatory text appears with information on the specifics of the corresponding algorithm. Additionally, all the input variables contain a text box with a description matching the selected algorithm.

**2-opt:** The first and simplest algorithm implemented in this app is the 2-opt local search. This algorithm detects whether the route crosses over itself and, if it does, reorders the cities so that it does not. The improvement threshold of this algorithm checks how much your route has improved; once the route can no longer be improved by more than this threshold, it quits trying.
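A compact 2-opt sketch, under the assumption (not taken from the app's source) that the tour is represented as an array of coordinates and a segment reversal is the basic move:

```python
import numpy as np

def path_distance(route: np.ndarray) -> float:
    """Length of the closed tour through the points in order."""
    return float(np.sum(np.linalg.norm(route - np.roll(route, 1, axis=0), axis=1)))

def two_opt(cities: np.ndarray, improvement_threshold: float = 0.001) -> np.ndarray:
    """Repeatedly reverse route segments while the tour keeps improving."""
    route = cities.copy()
    best = path_distance(route)
    improvement = 1.0
    while improvement > improvement_threshold:
        previous_best = best
        for i in range(1, len(route) - 1):
            for k in range(i + 1, len(route)):
                # Reverse the segment between i and k; this uncrosses crossing edges
                candidate = np.concatenate((route[:i], route[i:k + 1][::-1], route[k + 1:]))
                length = path_distance(candidate)
                if length < best:
                    route, best = candidate, length
        improvement = 1 - best / previous_best
    return route

# A unit square visited in a deliberately crossing order
cities = np.array([(0, 0), (1, 1), (0, 1), (1, 0)], dtype=float)
best_route = two_opt(cities)  # uncrosses the tour to the square's perimeter
```

Reversing a segment is exactly the operation that removes a crossing: if two edges of the tour intersect, visiting the cities between them in reverse order always yields a shorter tour.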

**Genetic algorithm:** The genetic algorithm (GA) is an evolutionary algorithm based on Charles Darwin's theory of natural evolution. After initializing the population (defined as a set of different possible routes), we select the fittest individuals and combine them with each other to get new, and hopefully better, solutions. We can also apply random mutations to the population. This is helpful because otherwise there is a chance that the solution will never improve, as breeding keeps producing the same individuals.
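The selection, crossover, and mutation steps can be sketched as follows. This is a generic GA for the TSP, not the app's implementation; the operator choices (ordered crossover, swap mutation, elitist selection) are our own illustrative assumptions:

```python
import random
import math

def tour_length(cities, order):
    """Length of the closed tour visiting cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def crossover(parent_a, parent_b):
    """Ordered crossover: copy a slice from one parent, fill the rest from the other."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]
    remaining = [c for c in parent_b if c not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child

def mutate(order, rate):
    """Swap two cities with the given probability, keeping diversity in the population."""
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def genetic_algorithm(cities, pop_size=50, generations=200, mutation_rate=0.2):
    n = len(cities)
    population = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        # Select the fittest individuals (shortest tours) as the breeding elite
        population.sort(key=lambda order: tour_length(cities, order))
        elite = population[: pop_size // 5]
        children = [
            mutate(crossover(*random.sample(elite, 2)), mutation_rate)
            for _ in range(pop_size - len(elite))
        ]
        population = elite + children
    return min(population, key=lambda order: tour_length(cities, order))

random.seed(0)
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # a unit square
best = genetic_algorithm(cities, pop_size=30, generations=50)
```

Because tours are permutations, plain one-point crossover would duplicate cities; ordered crossover is one standard way to combine parents while keeping every child a valid permutation.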

**Self-organizing maps:** The self-organizing maps algorithm implements a neural network that is closely related to the topology we generate. The purpose of the technique is to represent the model in a lower number of dimensions while maintaining the similarity relations between the nodes it contains. The algorithm used in this app generates a circular array of neurons that behaves as an elastic ring. During execution, the neurons search for the cities closest to their neighborhood, and the ring gradually wraps around the cities. To ensure convergence, we add a learning rate to the algorithm: the higher the rate, the more the neurons move. However, we might want to make the algorithm less aggressive the longer it runs, so we can also add decay to the learning rate.

Here, you can see a small part of the Python code for the self-organizing maps:

```
def self_organizing_maps(
    cities: np.ndarray, iterations: int, learning_rate: float = 0.8, popSize: int = 100, decay: float = 0.0003
) -> Tuple[list, list, list]:
    """Solve the TSP using a Self-Organizing Map."""
    problem = _read_tsp(cities)

    # Obtain the normalized set of cities (w/ coord in [0,1])
    cities = problem.copy()
    cities[["x", "y"]] = _normalize(cities[["x", "y"]])

    # The population size
    n = popSize

    # Generate an adequate network of neurons
    network = _generate_network(n)

    routes = [_get_route(cities, network).values.tolist()]
    distances = [path_distance(problem[["x", "y"]].to_numpy())]

    for i in range(iterations):
        # Choose a random city
        city = cities.sample(1)[["x", "y"]].values
        winner_idx = _select_closest(network, city)
        # Generate a filter that applies changes to the winner's gaussian
        gaussian = _get_neighborhood(winner_idx, n // 10, network.shape[0])
        # Update the network's weights (closer to the city)
        network += gaussian[:, np.newaxis] * learning_rate * (city - network)
        # Decay the variables
        learning_rate = learning_rate * (1 - decay)
        n = n * (1 - decay)

        # Adding data for the frames
        route = _get_route(cities, network)
        problem = problem.reindex(route)
        distances.append(path_distance(problem[["x", "y"]].to_numpy()))
        route = route.values.tolist()
        routes.append(route)

    return routes, np.arange(0, iterations + 1, 1).tolist(), distances
```

You can now download the full Python code from our GitHub repository or sign up for a free VIKTOR account to use it in your own applications!
