These problems are fairly trivial at a small scale. However, they get exponentially harder as you scale them up.
At a small scale the solution is clear by inspection.
However, at a larger scale the solution required the use of a powerful supercomputer.
The problem was first mentioned in a handbook for travelling salesmen in 1832.
It is one of the most prominent combinatorial optimisation problems in computer science
Optimisation: to find the best (optimal) solution to a problem.
Given a function f : A -> R from some set A to the real numbers, we seek an element x_0 in A such that f(x_0) <= f(x) for all x in A (minimisation) or f(x_0) >= f(x) for all x in A (maximisation).
f is called the objective function: the cost function for minimisation problems, or the fitness function for maximisation problems such as those in evolutionary computation.
A is called the feasible set, which is some subset of Euclidean space specified by a set of constraints.
The domain of f is called the search space, while the elements of A are called the candidate solutions or feasible solutions.
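As a minimal illustration in Python (the objective f(x) = x^2 and the small feasible set here are made-up examples, not from the lecture):

    def f(x):
        return x * x          # objective (cost) function to minimise

    A = [-2, -1, 0, 1, 2]     # feasible set: the candidate solutions
    x_best = min(A, key=f)    # element of A with the smallest objective value
    print(x_best, f(x_best))  # -> 0 0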
The category of an optimisation problem depends on the nature of the objective function, and also on the nature of the candidate solutions.
Mathematical programming algorithms such as linear programming
Search and enumeration algorithms
To solve the TSP we generally use heuristic algorithms.
Randomised algorithms fall into two categories: Las Vegas and Monte Carlo.
We must use Monte Carlo algorithms for TSP: a Las Vegas algorithm exits only once it has found the optimal solution, and for TSP we do not know the optimal solution in advance.
However, we find that naive approaches to solving TSP with randomised algorithms do not yield good results, because the probability of finding the optimal solution is very low: there are (n-1)!/2 distinct tours for n cities.
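A minimal Python sketch of this naive approach, assuming a made-up instance of random cities in the unit square (the tour_length helper and the sampling budget are this note's assumptions): it samples random tours and keeps the best, and even after many samples the result is usually far from optimal.

    import math, random

    random.seed(0)
    # Hypothetical instance: 20 random cities in the unit square.
    cities = [(random.random(), random.random()) for _ in range(20)]

    def tour_length(tour):
        """Total length of the closed tour visiting cities in the given order."""
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    # Naive Monte Carlo: sample random permutations, keep the best seen.
    best = list(range(len(cities)))
    random.shuffle(best)
    for _ in range(10000):
        candidate = list(range(len(cities)))
        random.shuffle(candidate)
        if tour_length(candidate) < tour_length(best):
            best = candidate
    print(tour_length(best))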
We can improve our results by using local search algorithms
Local Search is a heuristic algorithm for solving hard optimisation problems
x_0 := generate initial solution
terminationFlag := false
x := x_0
while (terminationFlag != true) {
    modify the current solution to a neighbouring one (v in A)
    if f(v) < f(x) then x := v
    if a termination criterion is met {
        terminationFlag := true
    }
}
return x
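A runnable Python sketch of this loop (the neighbour function is left abstract, and the termination criterion is assumed to be a fixed evaluation budget):

    def local_search(x0, f, neighbour, max_evals=10000):
        """Generic local search: repeatedly sample a neighbour of the
        current solution and move to it only if it improves the objective."""
        x = x0
        for _ in range(max_evals):   # termination criterion: evaluation budget
            v = neighbour(x)         # modify current solution to a neighbouring one
            if f(v) < f(x):          # accept only improving moves
                x = v
        return x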
This is one of the simplest local search algorithms
Described as climbing Everest in thick fog with amnesia
It is an iterative algorithm: start from an arbitrary initial solution, then repeatedly move to a better neighbouring solution until no further improvement can be found.
There are two types of Hill Climbing
x_0 := generate initial solution
terminationFlag := false
x := x_0
while (terminationFlag != true) {
    modify the current solution to an immediate neighbour (v in A)
    if f(v) < f(x) then x := v
    if a termination criterion is met {
        terminationFlag := true
    }
}
return x
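As a toy usage of the local_search sketch above (the unit-step neighbour is an assumption chosen purely for illustration), hill climbing towards the minimum of f(x) = x^2:

    import random

    # Hill climbing on a toy numeric problem: immediate neighbours of x
    # are x - 1 and x + 1; only improving steps are accepted.
    result = local_search(
        x0=50,
        f=lambda x: x * x,
        neighbour=lambda x: x + random.choice([-1, 1]),
        max_evals=1000,
    )
    print(result)  # converges to (or near) 0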
The major issue with Hill Climbing for TSP is how to generate the immediate neighbour solutions of the current solution.
For 2-3 cities there is only one solution
For 4 cities there are 3 solutions
For 4 cities we notice that the difference between the routes of a solution and an immediate neighbour is exactly 2 edges.
Given a route, how do we choose 2 edges to swap? This problem was posed by Croes in 1958.
route := initial TSP solution
i, j := two cities selected for swapping
Step 1: take route[1] to route[i-1] and add them in order to newRoute
Step 2: take route[i] to route[j] and add them in reverse order to newRoute
Step 3: take route[j+1] to the end and add them in order to newRoute
return newRoute
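In Python, this neighbour-generation step (Croes's 2-opt move) is a single slice reversal; the sketch below uses 0-based indices rather than the 1-based ones above:

    def two_opt_swap(route, i, j):
        """2-opt move: keep route[:i] in order, reverse route[i..j],
        keep route[j+1:] in order. This replaces exactly two edges."""
        return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

    # Example: i=2, j=4 reverses the middle segment.
    print(two_opt_swap([0, 1, 2, 3, 4, 5], 2, 4))  # -> [0, 1, 4, 3, 2, 5]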
With this approach we get closer to the optimal solution. However, we often get stuck at a local optimum, since by its nature the algorithm refuses to accept a worse solution once a local optimum has been found.
Randomised search: explores the whole search space, so it does not get trapped, but it is very unlikely to find good solutions on its own.
Local Search: converges quickly to good solutions, but gets stuck at local optima.
Can we combine these?
The main idea of Stochastic Local Search (SLS) is to escape or avoid local optima.
We achieve this by injecting randomness into local search
Strategies:
Random restart
Perform a random non-improving step: randomly move to a less fit neighbour - simulated annealing
for k := 1 to k_max {
    x_0 := randomly generated initial solution
    terminationFlag := false
    x_k := x_0
    while (terminationFlag != true) {
        modify the current solution to an immediate neighbour v
        if f(v) < f(x_k) then x_k := v
        if a termination criterion is met {
            terminationFlag := true
        }
    }
    store x_k
}
return x_best := the stored x_k with minimum f(x_k), k = 1 to k_max
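A Python sketch of random restart for TSP, reusing the local_search and two_opt_swap helpers sketched earlier (the function names and parameters are this note's assumptions, not a library API):

    import random

    def random_restart(f, n, k_max=20):
        """Run local search from k_max random initial tours of n cities;
        return the best solution found across all restarts."""
        def neighbour(route):
            # Random 2-opt neighbour: reverse a random segment of the tour.
            i, j = sorted(random.sample(range(n), 2))
            return two_opt_swap(route, i, j)

        best = None
        for _ in range(k_max):
            x0 = list(range(n))
            random.shuffle(x0)                        # random initial solution
            x = local_search(x0, f, neighbour, max_evals=2000)
            if best is None or f(x) < f(best):
                best = x                              # store best of all restarts
        return best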
This is a generic heuristic algorithm for optimisation problems, proposed by Kirkpatrick et al. in 1983.
This algorithm is inspired by annealing in metallurgy
Annealing is a heat treatment that alters a material to increase its ductility and make it more workable.
We want to find a state of lower thermodynamic free energy for the metal. By analogy, we want to find a solution with the minimum value of the objective function.
x := x_0; e := f(x)                   // initial solution and its objective value (energy)
x_best := x; e_best := e              // initial best solution
k := 0                                // evaluation counter
while (k < k_max) {
    T := temperature(k, k_max)        // temperature from the cooling schedule
    x_new := neighbour(x)             // pick a neighbour
    e_new := f(x_new)                 // compute its objective function value
    if P(e, e_new, T) > R(0,1) {      // should we move to it?
        x := x_new; e := e_new        // yes, change state
    }
    if e_new < e_best {               // is it a new best?
        x_best := x_new; e_best := e_new   // save as best
    }
    k := k + 1                        // increase evaluation count
}
return x_best
Where:
- temperature() is the cooling schedule, which gradually lowers T as evaluations proceed
- neighbour() generates a random neighbouring solution of x
- P(e, e_new, T) is the acceptance probability, e.g. the Metropolis criterion: 1 if e_new < e, else exp((e - e_new)/T)
- R(0,1) is a uniform random number in [0, 1)
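A compact Python sketch of this algorithm, assuming an exponential cooling schedule and the Metropolis acceptance criterion (both standard choices, though the pseudocode above leaves temperature() and P() abstract):

    import math, random

    def simulated_annealing(x0, f, neighbour, k_max=100000, T0=1.0, alpha=0.9995):
        x, e = x0, f(x0)                 # initial solution and its energy
        x_best, e_best = x, e            # best solution seen so far
        T = T0
        for _ in range(k_max):
            x_new = neighbour(x)         # pick a neighbour
            e_new = f(x_new)
            # Metropolis criterion: always accept improvements; accept a
            # worse solution with probability exp((e - e_new) / T).
            if e_new < e or random.random() < math.exp((e - e_new) / T):
                x, e = x_new, e_new      # change state
            if e_new < e_best:           # track the best solution found
                x_best, e_best = x_new, e_new
            T *= alpha                   # exponential cooling schedule
        return x_best

As the temperature T falls, the probability of accepting a worse solution shrinks, so the search behaves more and more like plain hill climbing.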
Without the analogy from metallurgy, the simulated annealing algorithm is essentially a stochastic local search.
The main idea is to escape from local optima by taking random non-improving steps: worse neighbours are accepted with a probability that decreases as the temperature falls.