to minimise the weight of a tension/compression spring. All design variables are continuous.

Let the wire diameter be $d$, the mean coil diameter be $D$ and the number of active coils be $N$. The following are the constraints on these variables (the bounds commonly used for this benchmark):

$$0.05 \le d \le 2.00, \qquad 0.25 \le D \le 1.30, \qquad 2 \le N \le 15$$

We can write this as a function to minimise:

$$f(d, D, N) = (N + 2)\, D\, d^2$$

which is subject to the following:

$$g_1 = 1 - \frac{D^3 N}{71785\, d^4} \le 0$$

$$g_2 = \frac{4D^2 - dD}{12566\,(D d^3 - d^4)} + \frac{1}{5108\, d^2} - 1 \le 0$$

$$g_3 = 1 - \frac{140.45\, d}{D^2 N} \le 0$$

$$g_4 = \frac{D + d}{1.5} - 1 \le 0$$
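As a concrete illustration, here is a minimal Python sketch of this benchmark using the standard formulation above; the function names are our own.

```python
import numpy as np

def spring_weight(d, D, N):
    """Objective: spring weight, proportional to (N + 2) * D * d**2."""
    return (N + 2) * D * d**2

def spring_constraints(d, D, N):
    """The four inequality constraints g_i <= 0 of the standard benchmark."""
    return np.array([
        1 - (D**3 * N) / (71785 * d**4),
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4)) + 1 / (5108 * d**2) - 1,
        1 - (140.45 * d) / (D**2 * N),
        (D + d) / 1.5 - 1,
    ])

# A candidate design is feasible iff every g_i <= 0
d, D, N = 0.05, 0.4, 10.0
print(spring_weight(d, D, N))            # weight to be minimised
print(spring_constraints(d, D, N) <= 0)  # per-constraint feasibility
```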
Also known as Design Optimisation, this is the process of finding the combination of design variables that optimises the design objective while satisfying the constraints.

A design variable is under the control of the designer and could have an impact on the solution of the optimisation problem. Design variables can be, for example, continuous or discrete.

Design objectives represent the desires of the designers, such as maximising profit or minimising cost.

Constraints exist because the designers cannot optimise their desires without limit, due to factors such as physical laws and limited resources.

A design constraint is usually rigid or "hard", as it typically needs to be strictly adhered to.

Engineering optimisation problems fall under constrained optimisation problems.
In general, a constrained optimisation problem can be represented as:

$$\text{minimise } f(\mathbf{x}), \quad \mathbf{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n$$

subject to:

$$g_i(\mathbf{x}) \le 0, \quad i = 1, \ldots, m$$

$$h_j(\mathbf{x}) = 0, \quad j = 1, \ldots, p$$

where $\mathbf{x}$ is the $n$-dimensional vector $(x_1, \ldots, x_n)$, $f(\mathbf{x})$ is the objective function, $g_i(\mathbf{x})$ is the $i$-th inequality constraint and $h_j(\mathbf{x})$ is the $j$-th equality constraint.

We denote the search space as $\mathcal{S}$ and the feasible space as $\mathcal{F} \subseteq \mathcal{S}$. The global optimum of $\mathcal{F}$ might not be the same as that of $\mathcal{S}$.
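As a small illustration (the helper names are our own), feasibility is simply a predicate over the search space:

```python
def is_feasible(x, gs, hs, delta=1e-4):
    """x is feasible iff every inequality g_i(x) <= 0 holds and every
    equality h_j(x) = 0 holds (here checked to a small tolerance delta)."""
    return all(g(x) <= 0 for g in gs) and all(abs(h(x)) <= delta for h in hs)

# Hypothetical example: a point in R^2 with one constraint of each kind
gs = [lambda x: x[0] + x[1] - 1]        # encodes x0 + x1 <= 1
hs = [lambda x: x[0] - x[1]]            # encodes x0 = x1
print(is_feasible((0.4, 0.4), gs, hs))  # True
print(is_feasible((0.7, 0.7), gs, hs))  # False: violates the inequality
```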
There are two main types of constraints we will look at: inequality constraints and equality constraints.
There are three sub-approaches under the penalty function method: static, dynamic, and adaptive penalty functions.
The penalty function is pre-defined and fixed during evolution
The general form of the static penalty function is:

$$f_p(\mathbf{x}) = f(\mathbf{x}) + \sum_{i=1}^{m} r_i\, G_i(\mathbf{x})$$

where the penalty coefficients $r_i$ are fixed, predefined values and $G_i(\mathbf{x})$ measures the violation of the $i$-th constraint, e.g. $G_i(\mathbf{x}) = (\max\{0, g_i(\mathbf{x})\})^2$.
Equality constraints can be converted into inequality constraints:

$$h_j(\mathbf{x}) = 0 \quad \longrightarrow \quad |h_j(\mathbf{x})| - \delta \le 0$$

where $\delta > 0$ but is small.
This approach is simple to implement but requires rich domain-specific knowledge to accurately set the penalty coefficients $r_i$.

The coefficients $r_i$ can be divided into a number of different levels. When to use each level is determined by a set of heuristics, such as: the more important the constraint, the larger the value of $r_i$.
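For example, a minimal sketch of a static penalty function in Python; the quadratic violation measure and the coefficient values are assumptions for illustration:

```python
import numpy as np

def static_penalty(f, gs, rs):
    """Build f_p(x) = f(x) + sum_i r_i * G_i(x), with
    G_i(x) = max(0, g_i(x))**2 as the violation of constraint i."""
    def f_p(x):
        violations = np.array([max(0.0, g(x)) ** 2 for g in gs])
        return f(x) + np.dot(rs, violations)
    return f_p

# Hypothetical usage: one objective and two inequality constraints
f  = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1 - x[0]      # encodes x[0] >= 1
g2 = lambda x: x[1] - 2      # encodes x[1] <= 2
f_p = static_penalty(f, [g1, g2], rs=np.array([10.0, 10.0]))
print(f_p([0.5, 0.0]))       # infeasible point: 0.25 + 10 * 0.25 = 2.75
```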
Dynamic Penalty Functions take the general form:

$$f_p(\mathbf{x}, t) = f(\mathbf{x}) + r(t) \sum_{i=1}^{m} G_i(\mathbf{x}) + c(t) \sum_{j=1}^{p} H_j(\mathbf{x})$$

where $r(t)$ and $c(t)$ are two penalty coefficients, and $G_i$ and $H_j$ measure the violation of the inequality and equality constraints respectively.

The general principle of DPFs is that the larger the generation number $t$, the larger the penalty coefficients $r(t)$ and $c(t)$: violations are penalised more and more heavily as evolution proceeds.

Common Dynamic Penalty Functions include the form of Joines and Houck (1994), $r(t) = (C \cdot t)^{\alpha}$, with typical settings $C = 0.5$ and $\alpha = 2$.
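For illustration, a minimal sketch of a dynamic penalty in the Joines and Houck form, assuming the parameter settings above (the toy objective and constraint are made up):

```python
def dynamic_penalty(f, gs, t, C=0.5, alpha=2.0, beta=2.0):
    """Penalised objective at generation t: the coefficient (C*t)**alpha
    grows with t, so violations cost more as evolution proceeds."""
    def f_p(x):
        violation = sum(max(0.0, g(x)) ** beta for g in gs)
        return f(x) + (C * t) ** alpha * violation
    return f_p

# Early generations tolerate infeasibility far more than late ones
for t in (1, 10, 100):
    f_p = dynamic_penalty(lambda x: x, [lambda x: 1 - x], t)
    print(t, f_p(0.5))   # x = 0.5 violates 1 - x <= 0 by 0.5
```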
Given a static penalty function $f_p(\mathbf{x}) = f(\mathbf{x}) + r\, G(\mathbf{x})$, where $f$ is the objective, $G$ is the constraint violation and $r$ is the penalty coefficient:

How does $r$ affect our training?

For a minimisation problem using $f_p$, the fitness values of two individuals $\mathbf{x}_1$ and $\mathbf{x}_2$ are not fixed: they change as $r$ changes.

Fitness proportional selection: as we now have changing fitness values we, by extension, have changing selection probabilities.

Ranking selection: the relative order of $\mathbf{x}_1$ and $\mathbf{x}_2$ under $f_p$ can also change as $r$ changes.

From this we can see that different $r$ values lead to different rankings of individuals in the population.
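A small worked example (the numbers are made up) showing how the ranking of two individuals flips as $r$ grows:

```python
# Two individuals: x1 has the better objective but the larger violation
f1, G1 = 1.0, 2.0
f2, G2 = 2.0, 0.5

for r in (0.1, 1.0, 10.0):
    fp1 = f1 + r * G1
    fp2 = f2 + r * G2
    best = "x1" if fp1 < fp2 else "x2"
    print(f"r={r}: f_p(x1)={fp1}, f_p(x2)={fp2} -> {best} ranked first")
# r=0.1 ranks x1 first; r=1.0 and r=10.0 rank x2 first
```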
The use of different penalty functions leads to different objective functions and therefore to different explorations of the search space, finding different optima.

Essentially, penalty functions transform the fitness and change the ranking, leading to different candidates being selected.

An inappropriate penalty function leads to infeasible results, and setting the coefficients $r_i$ appropriately is difficult.
Proposed by Runarsson and Xin Yao (a professor at the University of Birmingham) in 2000.

This is a rank-based selection scheme that handles constraints.

It does not use penalty functions and is self-adaptive.
Sort a population of size $M$ from best to worst according to their fitness values: $\mathbf{x}'_1, \mathbf{x}'_2, \ldots, \mathbf{x}'_M$.

From this we select the individual ranked $\gamma$ with probability $p(\gamma)$, where $p$ is a ranking function such as linear ranking (see the sketch below).
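For concreteness, a sketch of one common ranking function, Baker's linear ranking; this particular parameterisation is an assumption, not necessarily the one used in the lecture:

```python
def linear_ranking_probs(M, eta_plus=1.5):
    """Selection probability for each rank i = 1..M (1 = best),
    Baker's linear ranking with eta_plus + eta_minus = 2."""
    eta_minus = 2.0 - eta_plus
    return [
        (eta_plus - (eta_plus - eta_minus) * (i - 1) / (M - 1)) / M
        for i in range(1, M + 1)
    ]

probs = linear_ranking_probs(5)
print(probs, sum(probs))   # best rank gets 0.3, worst 0.1; sums to 1
```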
Penalty functions essentially perform a transformation of the objective (fitness) function; a change in rank then becomes a change in selection.

Here we instead change the rank directly in the Evolutionary Algorithm.

We can view ranking as sorting, so we can modify the sorting algorithm in the EA to take constraint violation into account.

Stochastic Ranking is essentially a modified form of bubble sort with some additional rules to handle constraints:
```python
import random

def stochastic_ranking(x, f, G, P_f=0.45):
    """Rank the population x using bubble-sort sweeps that compare by the
    objective f with probability P_f (or whenever both individuals are
    feasible), and by constraint violation G otherwise.
    Returns the indices I of the individuals, best first."""
    M = len(x)
    I = list(range(M))
    for _ in range(M):
        swapped = False
        for i in range(1, M):
            u = random.random()  # uniformly distributed random number U(0,1)
            if (G(x[I[i - 1]]) == 0 and G(x[I[i]]) == 0) or u <= P_f:
                # compare by fitness: better (smaller) values come first
                if f(x[I[i - 1]]) > f(x[I[i]]):
                    I[i - 1], I[i] = I[i], I[i - 1]
                    swapped = True
            else:
                # compare by constraint violation: smaller violations first
                if G(x[I[i - 1]]) > G(x[I[i]]):
                    I[i - 1], I[i] = I[i], I[i - 1]
                    swapped = True
        if not swapped:
            break  # no swaps in a whole sweep: the ranking is stable
    return I
```
Where:

- $\mathbf{x}'_1, \ldots, \mathbf{x}'_M$ (the list `x`, ordered by `I`) is the population being ranked
- $f$ is the objective (fitness) function
- $G(\mathbf{x})$ is the total constraint violation of $\mathbf{x}$, which is zero iff $\mathbf{x}$ is feasible
- $P_f$ is the probability of comparing two individuals by their objective values alone
- $I$ holds the resulting ranking, best first
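A quick usage sketch of the function above, with a made-up scalar population:

```python
pop = [0.2, 0.8, 1.5, 1.1]                      # individuals (here: scalars)
f = lambda v: v                                  # minimise v itself
G = lambda v: max(0.0, 1.0 - v)                  # feasible iff v >= 1
ranks = stochastic_ranking(pop, f, G, P_f=0.45)
print([pop[i] for i in ranks])                   # best-ranked first
```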
Why do we introduce $P_f$, given that it allows infeasible solutions to be ranked above feasible ones, in the cases where their fitness values are better, with some probability?

When $P_f > 1/2$: most comparisons are based solely on the objective $f$, so infeasible solutions are likely to occur.

When $P_f < 1/2$: most comparisons are based on the constraint violation $G$, so infeasible solutions are less likely to occur; however, the resulting solutions may be poor.

Recommended values of $P_f$ fall between 0.45 and 0.5.
If penalty functions perform a fitness transformation, converting a rank change into a selection change, and Stochastic Ranking changes ranks by changing the sorting algorithm, why do we not change selection directly in the EA?

Feasibility rules do exactly this: when two individuals are compared during selection, a feasible one beats an infeasible one; two feasible individuals are compared by objective value; two infeasible individuals are compared by constraint violation (a sketch follows below). This method is simple and parameter-free; however, it can lead to premature convergence.
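A minimal sketch of such a comparison operator (the function name is our own):

```python
def better(x1, x2, f, G):
    """Pairwise comparison implementing the feasibility rules:
    feasible beats infeasible; otherwise fall back to f or G."""
    G1, G2 = G(x1), G(x2)
    if G1 == 0 and G2 == 0:
        return x1 if f(x1) <= f(x2) else x2   # both feasible: better objective wins
    if G1 == 0 or G2 == 0:
        return x1 if G1 == 0 else x2          # feasible beats infeasible
    return x1 if G1 <= G2 else x2             # both infeasible: smaller violation wins

f = lambda v: v                       # minimise v
G = lambda v: max(0.0, 1.0 - v)       # feasible iff v >= 1
print(better(0.2, 1.5, f, G))         # 1.5: feasibility trumps the better objective
```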
Instead of modifying the EA or the fitness function, infeasible individuals can be repaired into feasible ones.

Let $I_s$ be an infeasible individual and $I_r$ be a feasible one.

For this process we maintain two populations: a search population, whose members may be infeasible, and a reference population containing only feasible individuals.
```python
import random

def repair(I_s, reference_pop, f, is_feasible, Pr=0.2):
    """Repair the infeasible individual I_s by blending it with a feasible
    reference individual I_r until the blend z is feasible."""
    I_r = random.choice(reference_pop)   # select a reference individual
    while True:
        a = random.random()              # a ~ U(0, 1), so 0 < a < 1
        z = a * I_s + (1 - a) * I_r      # convex combination of the two
        if is_feasible(z):
            break
    # z is now feasible: calculate its fitness and decide whether it replaces I_s
    if f(z) <= f(I_r):
        return z                         # z is no worse than the reference
    u = random.random()                  # uniformly distributed random number
    if u <= Pr:
        return z                         # accept a worse z with probability Pr
    return I_s
```
Notice that we replace $I_s$ with $z_i$ with some probability $P_r$ even though $z_i$ is worse than $I_r$.
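A quick usage sketch, with made-up scalar individuals and a toy feasibility test:

```python
is_feasible = lambda v: v >= 1.0              # toy feasibility test
reference_pop = [1.2, 1.5, 2.0]               # feasible reference individuals
z = repair(0.3, reference_pop, f=lambda v: v, is_feasible=is_feasible)
print(z)   # a feasible blend of 0.3 and one reference individual
```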
Adding a penalty term to the objective function is equivalent to changing the fitness function, which, in turn, is equivalent to changing the selection probabilities. It is easier and more effective to change ranking and selection directly and explicitly, as Stochastic Ranking and feasibility rules do.

We have also seen that there are alternative constraint-handling techniques, such as repair methods.