Often we encounter problems with multiple objectives that we would like to maximise or minimise simultaneously. For example, when buying a car we may want to maximise attributes such as the number of seats or comfort whilst also minimising the price.
How do we determine which solutions are optimal?
Domination
x1 dominates x2 if it satisfies both:
x1 is no worse than x2 in all objectives
x1 is strictly better than x2 in at least one objective.
You can often find two solutions where neither satisfies the above conditions; in this case the two solutions are mutually non-dominated (neither dominates the other).
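Assuming minimisation of every objective, the two conditions can be sketched as a short check (a hypothetical `dominates` helper, not from the lecture):

```python
def dominates(x1, x2):
    """True iff x1 dominates x2 (minimisation assumed for all objectives):
    x1 is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(x1, x2))
    strictly_better = any(a < b for a, b in zip(x1, x2))
    return no_worse and strictly_better

print(dominates((1, 2), (2, 3)))  # True: better in both objectives
print(dominates((1, 3), (2, 2)))  # False: worse in the second objective
print(dominates((2, 2), (1, 3)))  # False: so this pair is mutually non-dominated
```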
Pareto-Optimal Solutions
Non-dominated solutions: The set P′ ⊆ P can be defined as the set of non-dominated solutions in P, i.e. ∀p ∈ P′, p is not dominated by any member of P. An O(MN²) algorithm exists to find this set, where M is the number of objectives and N = |P|.
Pareto-Optimal Solutions: When P = S, where S is the set of all possible solutions in the search space, the resulting P′ is the Pareto-optimal set.
Preference-Based Approach
In a preference-based approach, given a set of objectives to minimise or maximise subject to constraints, we first take into account higher-level information before estimating a relative importance (weight) vector {w1,w2,…,wM}.
Using this vector we can define a new objective function which is a weighted sum of the objectives. F=w1f1+w2f2+…+wMfM
We now have a single objective optimisation problem which can be solved using conventional means.
Ideally, our solution will be a single point on the Pareto front.
We can define our weighted sum of objectives more formally as:
F(x) = ∑_{m=1}^{M} wm fm(x)
where the user supplies the weight vector w. This is one of the downsides of this method as it requires the user to know the weight vector.
Another issue is the possible non-uniform distribution of Pareto-optimal solutions: a uniform spread of weight vectors does not guarantee a uniform spread of solutions, and solutions on non-convex regions of the Pareto front cannot be found at all with this method.
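A sketch of the scalarisation, using two hypothetical objectives f1 and f2 (minimisation assumed) and a crude grid search standing in for any conventional single-objective optimiser:

```python
def weighted_sum(x, objectives, weights):
    """Scalarise M objectives into one: F(x) = sum_m w_m * f_m(x)."""
    return sum(w * f(x) for w, f in zip(weights, objectives))

# Hypothetical two-objective problem with equal user-supplied weights
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2
F = lambda x: weighted_sum(x, [f1, f2], [0.5, 0.5])

# Minimise F by scanning a grid; any single-objective optimiser would do
best = min((x / 100 for x in range(-200, 400)), key=F)
print(best)  # a single point on the Pareto front
```

Changing the weight vector moves the returned point along the front, which is why the user's choice of weights matters so much.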
Ideal MOO
In an ideal world, our MO optimiser would return the complete set of Pareto-Optimal solutions allowing us to simply choose one from the set. Sadly, in practice it is not that simple.
Why an Evolutionary Approach?
As we have seen previously, evolutionary algorithms lend themselves well to producing a set of possible solutions, since each member of the population is a potential solution in the search space.
We can also employ the niching techniques we have studied to find diverse solutions, even when the Pareto front is irregularly shaped.
Identifying the Non-dominated set
1. Set i := 1 and P′ := ∅
2. For j ∈ P, j ≠ i: check if solution j dominates solution i. If yes, go to step 4
3. If more solutions are left in P, set j := j + 1 and go to step 2; otherwise set P′ := P′ ∪ {i}
4. Set i := i + 1. If i ≤ N, go to step 2; else stop and declare P′ the non-dominated set
This approach has O(MN²) computational complexity
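The steps above can be sketched directly (minimisation assumed; each solution is a tuple of objective values — a hypothetical helper, not from the lecture):

```python
def non_dominated_set(P):
    """O(M N^2) scan: keep each solution that no other solution dominates."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for i, p in enumerate(P)
            if not any(dominates(q, p) for j, q in enumerate(P) if j != i)]

P = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(non_dominated_set(P))  # [(1, 5), (2, 3), (4, 1)]
```

(3, 4) is dominated by (2, 3) and (5, 5) by every other point, so both are dropped.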
Lecture 14: MOO Continued
Shortcomings to Non-Elitist MOEAs
Non-elitist MOEAs do not preserve the best individuals in the population. Elite preservation is an important property of Single Objective Evolutionary Algorithms (SOEAs), and the same holds for MOEAs.
Elitist MOEAs
Elitist MOEAs have 3 major tasks:
Elite Preservation: They must maintain an archive of non-dominated solutions
Progress towards the Pareto-Optimal Front: They must prefer non-dominated solutions
Maintain diversity among candidate solutions: By making use of techniques such as clustering, niching or grid-based competition they must make individuals compete for a place in the archive.
NSGA-II
Abstract
Calculate (ni,Si) for each solution i where:
ni is the number of solutions dominating i
Si is the set of solutions dominated by i
Algorithm
In one generation of NSGA-II:
We require a parent population, Pt of N individuals
We also require an offspring population, Qt of N individuals.
This algorithm works in both a continuous and a discrete space.
Sort Rt := Pt ∪ Qt into non-dominated fronts F1, F2, …
Set i:=1 and Pt+1:=∅
While ∣Pt+1∣+∣Fi∣<N:
Set Pt+1:=Pt+1∪Fi
Set i:=i+1
Perform crowding sort on the individuals from Fi
Add N−∣Pt+1∣ most widely spread solutions from Fi to Pt+1
Create Qt+1 from Pt+1 using crowded tournament selection, crossover and mutation operators
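The environmental-selection step above can be sketched as follows; `sort_fronts` and `crowding_distance` are assumed helpers implementing the non-dominated sorting and crowding-distance procedures given later in these notes:

```python
def nsga2_survival(R, N, sort_fronts, crowding_distance):
    """NSGA-II environmental selection: fill P_{t+1} front by front,
    then break the tie on the last partial front by crowding distance."""
    P_next = []
    for front in sort_fronts(R):
        if len(P_next) + len(front) <= N:
            P_next.extend(front)           # whole front fits
        else:
            d = crowding_distance(front)   # larger distance = more widely spread
            ranked = sorted(range(len(front)), key=lambda i: -d[i])
            P_next.extend(front[i] for i in ranked[: N - len(P_next)])
            break
    return P_next
```

Crowded tournament selection, crossover and mutation would then build Qt+1 from the returned population.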
Algorithm: Non-Dominated Sorting
Abstract
Identify the best non-dominated set
Discard them from the population
Identify the next-best non-dominated set
Repeat until all solutions are classified
Requires a population of individuals P
For each individual, i∈P:
Set Si:=∅ and ni:=0
For all pairs i, j ∈ P, i ≠ j:
If j dominates i:
Sj := Sj ∪ {i}
Else if i dominates j:
nj := nj + 1
for each i∈P
If ni = 0, keep i in the first non-dominated front P1
Set k := 1 and Q := ∅
While Pk ≠ ∅:
for each i∈Pk and j∈Si
Set nj:=nj−1
If nj=0
Update Q:=Q∪{j}
Set k=k+1 and Pk=Q and update Q:=∅
Crowding Distance Algorithm
Requires a front F = {(f1i, …, fMi)} for i ∈ [l] of objective vectors
Initialise di := 0 for each i ∈ [l]
For each objective m ∈ {1, …, M}:
Sort the front according to objective fm s.t. fm(l1m) ≤ fm(l2m) ≤ ⋯ ≤ fm(llm)
Set dl1m := ∞ and dllm := ∞ so that boundary points are always selected
For each j ∈ {2, …, l−1}:
dljm := dljm + (fm(lj+1m) − fm(lj−1m)) / (fmmax − fmmin)
Return the crowding distances (d1, …, dl)
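A sketch of the computation, assuming a front of objective vectors and minimisation; `inf` marks the boundary points so they always survive selection:

```python
def crowding_distance(F):
    """Crowding distance for each member of a front F of objective vectors."""
    l, M = len(F), len(F[0])
    d = [0.0] * l
    for m in range(M):
        order = sorted(range(l), key=lambda i: F[i][m])   # sort by objective m
        d[order[0]] = d[order[-1]] = float("inf")         # boundary points
        f_min, f_max = F[order[0]][m], F[order[-1]][m]
        if f_max == f_min:
            continue                                      # avoid division by zero
        for k in range(1, l - 1):
            # normalised distance between the two neighbours along objective m
            d[order[k]] += (F[order[k + 1]][m] - F[order[k - 1]][m]) / (f_max - f_min)
    return d

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
print(crowding_distance(front))  # boundaries get inf, the middle point a finite sum
```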
Strength Pareto EA (SPEA)
This type of EA stores non-dominated solutions externally to the rest of the algorithm and uses Pareto dominance to assign fitness values. Each external member is assigned the number of population solutions it dominates (smaller is better); each population member is assigned the total fitness of the external members that dominate it (smaller is better).
Tournament selection and recombination are then applied to the combined current and elite populations. A clustering technique is used to maintain diversity in the updated external population when its size reaches a limit.
Pareto-Archived ES (PAES)
Known as a (1+1)-ES: a parent pt and a child ct are compared against an external archive At.
If ct is dominated by a member of At, then pt+1 := pt
If ct dominates a member of At, delete that member from At, include ct in At, and set pt+1 := ct
If ∣At∣ < N, include ct in At and set pt+1 := winner(pt, ct)
If ∣At∣ = N and ct does not lie in the most crowded hypercube H, include ct in At, delete a random solution from H, and set pt+1 := winner(pt, ct)
Here winner(pt, ct) is the solution whose hypercube contains the fewest solutions.
Note that PAES and SPEA are not widely used these days; your first port of call should be NSGA-II.
Constraint Handling
One classical approach to constraint handling in MOOPs is to use a penalty function:
Fm=fm+RmΩ(g)
Here we replace the objective fm with Fm, which includes the additional term Rm Ω(g): Ω(g) measures how much a given solution violates our constraints, and Rm determines how heavily we penalise infeasible solutions.
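A sketch of the penalty approach, assuming inequality constraints written as g(x) ≤ 0 and a squared-violation Ω (one common choice, not necessarily the lecture's):

```python
def penalised_objective(f_m, constraints, R_m):
    """Build F_m(x) = f_m(x) + R_m * Omega(g(x)).
    Omega sums squared violations: each g(x) <= 0 contributes max(0, g(x))**2."""
    def F(x):
        omega = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f_m(x) + R_m * omega
    return F

# Hypothetical problem: minimise x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
F = penalised_objective(lambda x: x * x, [lambda x: 1 - x], R_m=100.0)
print(F(2.0))  # feasible: no penalty, returns 4.0
print(F(0.0))  # infeasible: 0.0 + 100 * 1**2 = 100.0
```

Choosing Rm too small lets infeasible solutions win; too large and the search ignores the objective near the constraint boundary.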
Alternative approaches to handling infeasible solutions include Jiménez's approach and the Ray-Tang-Seow approach
There are also alternative approaches that rely on modified definitions of domination, namely Fonseca and Fleming's approach and Deb et al's approach.
Constraint-Domination Principle
A solution i constraint-dominates a solution j if any of the following is true:
Solution i is feasible and solution j is not
Solutions i and j are both infeasible, but solution i has a smaller overall constraint violation
Solutions i and j are both feasible and solution i dominates solution j
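The three cases can be sketched as a comparator; `violation` and `dominates` are assumed helpers supplied by the caller (total constraint violation, zero when feasible, and plain Pareto dominance respectively):

```python
def constraint_dominates(i, j, violation, dominates):
    """True iff i constraint-dominates j under the three cases above."""
    vi, vj = violation(i), violation(j)
    if vi == 0 and vj > 0:   # i feasible, j infeasible
        return True
    if vi > 0 and vj > 0:    # both infeasible: smaller violation wins
        return vi < vj
    if vi == 0 and vj == 0:  # both feasible: ordinary Pareto dominance
        return dominates(i, j)
    return False             # i infeasible, j feasible

# Hypothetical helpers: violation of g(x) = -x[0] <= 0, plain minimisation dominance
violation = lambda x: max(0.0, -x[0])
pareto = lambda a, b: (all(p <= q for p, q in zip(a, b))
                       and any(p < q for p, q in zip(a, b)))
print(constraint_dominates((1.0, 2.0), (-1.0, 0.0), violation, pareto))  # True
```

A comparator like this can replace ordinary dominance inside NSGA-II's sorting without any other change to the algorithm.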