desdeo_emo.EAs

This module contains classes implementing different evolutionary algorithms.

Submodules

Package Contents

Classes

BaseEA

This class provides the basic structure for Evolutionary algorithms.

BaseDecompositionEA

The Base class for decomposition based EAs.

RVEA

The Python version of the reference vector guided evolutionary algorithm.

NSGAIII

PPGA

Predator-Prey genetic algorithm.

TournamentEA

IOPIS_NSGAIII

The IOPIS variant of NSGA-III.

IOPIS_RVEA

The IOPIS variant of the reference vector guided evolutionary algorithm (RVEA).

MOEA_D

Python implementation of MOEA/D

IBEA

Python Implementation of IBEA.

class desdeo_emo.EAs.BaseEA(a_priori: bool = False, interact: bool = False, selection_operator: Type[desdeo_emo.selection.SelectionBase.SelectionBase] = None, n_iterations: int = 10, n_gen_per_iter: int = 100, total_function_evaluations: int = 0, use_surrogates: bool = False, keep_archive: bool = False, save_non_dominated: bool = False)[source]

This class provides the basic structure for Evolutionary algorithms.

start()

Mimics the structure of the mcdm methods. Returns the request objects from self.requests().

end()

To be run at the end of the evolution process.

_next_gen()

Run one generation of an EA. Does not change any of the algorithm parameters.

iterate(preference=None) Tuple

Run one iteration of EA.

One iteration consists of a constant or variable number of generations. This method leaves EA.params unchanged, except the current iteration count and gen count.

pre_iteration()

Run this code before every iteration.

post_iteration()

Run this code after every iteration.

continue_iteration()

Checks whether the current iteration should be continued or not.

continue_evolution() bool

Checks whether the evolution should be continued or not.

check_FE_count() bool

Checks whether the termination criterion based on the function evaluation count has been met.

Returns:

True if the function evaluation count limit has NOT been met.

Return type:

bool

set_interaction_type(interaction_type: None | str) None | str
manage_preferences(preference=None)

Forward the preference to the correct preference handling method.

Parameters:

preference (optional) – The preference information provided by the decision maker. Defaults to None.

Raises:

eaError – Preference handling not implemented.

requests() Tuple
non_dominated_archive(individuals: numpy.ndarray, results: desdeo_problem.EvaluationResults) None
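The start/iterate/end control flow described by the methods above can be sketched with a minimal stand-in class. This is a hypothetical illustration mirroring the method names on this page, not the real BaseEA implementation:

```python
# Sketch of the BaseEA iteration/generation control flow (hypothetical stand-in).
class ToyEA:
    def __init__(self, n_iterations=3, n_gen_per_iter=5):
        self.n_iterations = n_iterations
        self.n_gen_per_iter = n_gen_per_iter
        self._iteration_count = 0
        self._gen_count = 0

    def _next_gen(self):
        # One generation of evolution would happen here.
        self._gen_count += 1

    def continue_iteration(self):
        # Continue until the per-iteration generation budget is spent.
        return self._gen_count < self.n_gen_per_iter

    def continue_evolution(self):
        # Continue until the iteration budget is spent.
        return self._iteration_count < self.n_iterations

    def iterate(self):
        # One iteration = a number of generations; only counters change.
        self._gen_count = 0
        while self.continue_iteration():
            self._next_gen()
        self._iteration_count += 1

ea = ToyEA()
while ea.continue_evolution():
    ea.iterate()
print(ea._iteration_count)  # 3
```

In the real classes, preferences would be supplied via iterate(preference) and request objects obtained from requests() between iterations.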
class desdeo_emo.EAs.BaseDecompositionEA(problem: desdeo_problem.MOProblem = None, initial_population: desdeo_emo.population.Population.Population = None, population_size: int = None, population_params: Dict = None, lattice_resolution: int = None, interact: bool = False, n_iterations: int = 10, n_gen_per_iter: int = 100, use_surrogates: bool = False, total_function_evaluations: int = 0, keep_archive: bool = False, save_non_dominated: bool = False)[source]

Bases: BaseEA

The Base class for decomposition based EAs.

This class contains most of the code to set up the parameters and operators. It also contains the logic of a simple decomposition EA.

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • selection_operator (Type[SelectionBase], optional) – The selection operator to be used by the EA, by default None.

  • population_size (int, optional) – The desired population size, by default None, which sets up a default value of the population size depending upon the dimensionality of the problem.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • lattice_resolution (int, optional) – The number of divisions along individual axes in the objective space to be used while creating the reference vector lattice by the simplex lattice design. By default None

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.
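For reference, a simplex lattice design with lattice_resolution H in an m-objective space produces C(H + m - 1, m - 1) reference vectors; a small sketch of that count (a generic formula, not desdeo_emo code):

```python
import math

def n_reference_vectors(lattice_resolution: int, n_objectives: int) -> int:
    """Number of points in a simplex lattice design with H divisions
    of each axis in an m-objective space: C(H + m - 1, m - 1)."""
    H, m = lattice_resolution, n_objectives
    return math.comb(H + m - 1, m - 1)

print(n_reference_vectors(12, 3))  # 91
print(n_reference_vectors(4, 5))   # 70
```

This is why a larger lattice_resolution (or more objectives) quickly increases the number of reference vectors, and indirectly the default population size.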

_next_gen()

Run one generation of decomposition based EA. Intended to be used by next_iteration.

pre_iteration()

Run this code before every iteration.

_select() list

Describes the selection mechanism. Returns indices of selected individuals.

Returns:

List of indices of individuals to be selected.

Return type:

list

request_plot() desdeo_tools.interaction.SimplePlotRequest
request_preferences() Type[desdeo_tools.interaction.request.BaseRequest]
requests() Tuple
end()

Conducts non-dominated sorting at the end of the evolution process

Returns:

The first element is a 2-D array of the decision vectors of the non-dominated solutions.

The second element is a 2-D array of the corresponding objective values.

Return type:

tuple
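The non-dominated sorting that end() performs can be illustrated with a minimal O(n^2) filter. This is a sketch assuming minimization, not the actual desdeo implementation:

```python
def non_dominated(objectives):
    """Return indices of non-dominated rows (minimization), O(n^2) sketch."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]

front = non_dominated([[1, 4], [2, 2], [4, 1], [3, 3]])
print(front)  # [0, 1, 2] -- [3, 3] is dominated by [2, 2]
```

end() would return the decision vectors and objective values at these indices as the two elements of its tuple.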

class desdeo_emo.EAs.RVEA(problem: desdeo_problem.MOProblem, population_size: int = None, population_params: Dict = None, initial_population: desdeo_emo.population.Population.Population = None, alpha: float = 2, lattice_resolution: int = None, selection_type: str = None, interact: bool = False, use_surrogates: bool = False, n_iterations: int = 10, n_gen_per_iter: int = 100, total_function_evaluations: int = 0, time_penalty_component: str | float = None, keep_archive: bool = False, save_non_dominated: bool = False)[source]

Bases: desdeo_emo.EAs.BaseEA.BaseDecompositionEA

The python version reference vector guided evolutionary algorithm.

Most of the relevant code is contained in the super class. This class just assigns the APD selection operator to BaseDecompositionEA.

NOTE: The APD function had to be slightly modified to accommodate the fact that this version of the algorithm is interactive and does not have a set termination criterion. There is a time component in the APD penalty function formula of the type (t/t_max)^alpha. As there is no set t_max, the formula has been changed; see the documentation for the argument penalty_time_component below.

See the details of RVEA in the following paper

R. Cheng, Y. Jin, M. Olhofer and B. Sendhoff, A Reference Vector Guided Evolutionary Algorithm for Many-objective Optimization, IEEE Transactions on Evolutionary Computation, 2016

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • population_size (int, optional) – The desired population size, by default None, which sets up a default value of the population size depending upon the dimensionality of the problem.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • alpha (float, optional) – The alpha parameter in the APD selection mechanism. Read paper for details.

  • lattice_resolution (int, optional) – The number of divisions along individual axes in the objective space to be used while creating the reference vector lattice by the simplex lattice design. By default None

  • selection_type (str, optional) – One of [“mean”, “optimistic”, “robust”]. To be used in data-driven optimization, and only with surrogate models that return an “uncertainty” factor. Using “mean” is equivalent to using the mean predicted values from the surrogate models and is the default case. Using “optimistic” uses (mean - uncertainty) values from the surrogate models as the predicted value in the case of minimization ((mean + uncertainty) for maximization). Using “robust” is the opposite of using “optimistic”.

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.

  • penalty_time_component (Union[str, float], optional) – The APD formula had to be slightly changed. If penalty_time_component is a float in [0, 1], (t/t_max) is replaced by that constant for the entire algorithm. If penalty_time_component is “original”, the original intent of the paper is followed and (t/t_max) is calculated as (current generation count/total number of generations). If penalty_time_component is “function_count”, (t/t_max) is calculated as (current function evaluation count/total number of function evaluations). If penalty_time_component is “interactive”, (t/t_max) is calculated as (current generation count within an iteration/total generation count within an iteration); hence, the time penalty is zero at the beginning of each iteration and one at the end of each iteration. Note: if (t/t_max) ever exceeds one, the value one is used instead. If no value is provided, an appropriate default is selected: if interact is true, penalty_time_component defaults to “interactive”; if interact is false but total_function_evaluations is provided, it defaults to “function_count”; otherwise it defaults to “original”.
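The (t/t_max) computation under the documented modes can be sketched as follows. This is a hypothetical free function for illustration; in the class the modes correspond to the separate _time_penalty_* methods:

```python
def time_penalty_fraction(mode, current_gen=0, total_gen=1, current_fe=0, total_fe=1):
    """Sketch of the (t/t_max) term under each documented mode (illustrative only)."""
    if isinstance(mode, float):
        frac = mode                      # a constant penalty for the whole run
    elif mode == "original":
        frac = current_gen / total_gen   # generation-based, as in the paper
    elif mode == "function_count":
        frac = current_fe / total_fe     # function-evaluation-based
    elif mode == "interactive":
        frac = current_gen / total_gen   # same ratio, but counted per iteration
    else:
        raise ValueError(f"unknown mode: {mode}")
    return min(frac, 1.0)                # the penalty never exceeds one
```

The full APD time component would then be time_penalty_fraction(...) ** alpha.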

_time_penalty_constant()

Returns the constant time penalty value.

_time_penalty_original()

Calculates the appropriate time penalty value, by the original formula.

_time_penalty_interactive()

Calculates the appropriate time penalty value.

_time_penalty_function_count()

Calculates the appropriate time penalty value.

class desdeo_emo.EAs.NSGAIII(problem: desdeo_problem.MOProblem, population_size: int = None, population_params: Dict = None, n_survive: int = None, initial_population: desdeo_emo.population.Population.Population = None, lattice_resolution: int = None, selection_type: str = None, interact: bool = False, use_surrogates: bool = False, n_iterations: int = 10, n_gen_per_iter: int = 100, total_function_evaluations: int = 0, keep_archive: bool = False, save_non_dominated: bool = False)

Bases: desdeo_emo.EAs.BaseEA.BaseDecompositionEA

Python Implementation of NSGA-III. Based on the pymoo package.

Most of the relevant code is contained in the super class. This class just assigns the NSGAIII selection operator to BaseDecompositionEA.

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • population_size (int, optional) – The desired population size, by default None, which sets up a default value of the population size depending upon the dimensionality of the problem.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • lattice_resolution (int, optional) – The number of divisions along individual axes in the objective space to be used while creating the reference vector lattice by the simplex lattice design. By default None

  • selection_type (str, optional) – One of [“mean”, “optimistic”, “robust”]. To be used in data-driven optimization, and only with surrogate models that return an “uncertainty” factor. Using “mean” is equivalent to using the mean predicted values from the surrogate models and is the default case. Using “optimistic” uses (mean - uncertainty) values from the surrogate models as the predicted value in the case of minimization ((mean + uncertainty) for maximization). Using “robust” is the opposite of using “optimistic”.

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.

class desdeo_emo.EAs.PPGA(problem, population_size: int = 100, population_params=None, initial_population=None, n_iterations: int = 10, n_gen_per_iter: int = 10, predator_pop_size: int = 50, prey_max_moves: int = 10, prob_prey_move: float = 0.3, offspring_place_attempts: int = 10, kill_interval: int = 7, max_rank: int = 20, neighbourhood_radius: int = 3)[source]

Bases: desdeo_emo.EAs.BaseEA.BaseEA

Predator-Prey genetic algorithm.

A population of prey represents the various models or solutions to the problem at hand. Weaker prey, i.e. bad models or solutions, are killed by predators. The predators and prey are placed on a lattice, on which they are free to roam.

In each generation, each predator gets a certain number of turns to move about and hunt in its neighbourhood, killing weaker prey according to a fitness criterion. After this, each prey gets a certain number of moves to pursue a random walk and to reproduce with other prey. Each reproduction step generates two new prey from two parents, by crossing over their attributes and adding random mutations. After each prey has completed its moves, the whole process starts again.

As the weaker individuals are eliminated in each generation, the population as a whole becomes more fit, i.e. the individuals get closer to the true Pareto-optimal solutions.

If you have any questions about the code, please contact:

Bhupinder Saini: bhupinder.s.saini@jyu.fi Project researcher at University of Jyväskylä.

Parameters:

population (object) – The population object

Notes

The algorithm was originally created in MATLAB, and this Python implementation uses that code as a basis. See reference [4] for the study during which the original MATLAB version was created. The Python code was written by Niko Rissanen under the supervision of professor Nirupam Chakraborti.

For the MATLAB implementation, see: N. Chakraborti. Data-Driven Bi-Objective Genetic Algorithms EvoNN and BioGP and Their Applications in Metallurgical and Materials Domain. In Datta, Shubhabrata, Davim, J. Paulo (eds.), Computational Approaches to Materials Design: Theoretical and Practical Aspects, pp. 346-369, 2016.

References

[1] Laumanns, M., Rudolph, G., & Schwefel, H. P. (1998). A spatial predator-prey approach to multi-objective optimization: A preliminary study. In International Conference on Parallel Problem Solving from Nature (pp. 241-249). Springer, Berlin, Heidelberg.

[2] Li, X. (2003). A real-coded predator-prey genetic algorithm for multiobjective optimization. In International Conference on Evolutionary Multi-Criterion Optimization (pp. 207-221). Springer, Berlin, Heidelberg.

[3] Chakraborti, N. (2014). Strategies for evolutionary data driven modeling in chemical and metallurgical Systems. In Applications of Metaheuristics in Process Engineering (pp. 89-122). Springer, Cham.

[4] Pettersson, F., Chakraborti, N., & Saxén, H. (2007). A genetic algorithms based multi-objective neural net applied to noisy blast furnace data. Applied Soft Computing, 7(1), 387-397.

_next_gen()

Run one generation of PPGA.

Intended to be used by next_iteration.

Parameters:

population ("Population") – Population object

select(population, max_rank=20) list

Of the population, individuals with a rank lower than max_rank are selected. Returns indices of selected individuals.

Parameters:
  • population (Population) – Contains the current population and problem information.

  • max_rank (int) – Select only individuals with a rank lower than max_rank

Returns:

List of indices of individuals to be selected.

Return type:

list
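The rank-based selection described above amounts to a simple filter; a sketch (the ranks would come from non-dominated sorting of the current population, which is not shown here):

```python
def select_by_rank(ranks, max_rank=20):
    """Sketch of PPGA-style selection: keep the indices of individuals
    whose domination rank is below max_rank."""
    return [i for i, r in enumerate(ranks) if r < max_rank]

print(select_by_rank([1, 25, 3, 20, 7], max_rank=20))  # [0, 2, 4]
```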

manage_preferences(preference=None)

Forward the preference to the correct preference handling method.

Parameters:

preference (optional) – The preference information provided by the decision maker. Defaults to None.

Raises:

eaError – Preference handling not implemented.

class desdeo_emo.EAs.TournamentEA(problem, initial_population: desdeo_emo.population.Population.Population, n_gen_per_iter: int = 10, n_iterations: int = 10, tournament_size: int = 5, population_size: int = 500)

Bases: desdeo_emo.EAs.BaseEA.BaseEA

This class provides the basic structure for Evolutionary algorithms.

start()

Mimics the structure of the mcdm methods. Returns the request objects from self.requests().

end()

Conducts non-dominated sorting at the end of the evolution process

Returns:

The first element is a 2-D array of the decision vectors of the non-dominated solutions.

The second element is a 2-D array of the corresponding objective values.

Return type:

tuple

_next_gen()

Run one generation of the EA.

This method leaves the EA parameters unchanged. Intended to be used by next_iteration.

Parameters:

population ("Population") – Population object

select() list
class desdeo_emo.EAs.IOPIS_NSGAIII(problem: desdeo_problem.MOProblem, population_size: int = None, population_params: Dict = None, initial_population: desdeo_emo.population.Population.Population = None, lattice_resolution: int = None, n_iterations: int = 10, n_gen_per_iter: int = 100, total_function_evaluations: int = 0, use_surrogates: bool = False)[source]

Bases: BaseIOPISDecompositionEA

The IOPIS variant of NSGA-III.

This class contains most of the code to set up the parameters and operators. It also contains the logic of a simple decomposition EA.

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • selection_operator (Type[SelectionBase], optional) – The selection operator to be used by the EA, by default None.

  • population_size (int, optional) – The desired population size, by default None, which sets up a default value of the population size depending upon the dimensionality of the problem.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • lattice_resolution (int, optional) – The number of divisions along individual axes in the objective space to be used while creating the reference vector lattice by the simplex lattice design. By default None

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.

class desdeo_emo.EAs.IOPIS_RVEA(problem: desdeo_problem.MOProblem, population_size: int = None, population_params: Dict = None, initial_population: desdeo_emo.population.Population.Population = None, alpha: float = None, lattice_resolution: int = None, n_iterations: int = 10, n_gen_per_iter: int = 100, total_function_evaluations: int = 0, time_penalty_component: str | float = None, use_surrogates: bool = False)[source]

Bases: BaseIOPISDecompositionEA, desdeo_emo.EAs.RVEA.RVEA

The Python version of the reference vector guided evolutionary algorithm.

Most of the relevant code is contained in the super class. This class just assigns the APD selection operator to BaseDecompositionEA.

NOTE: The APD function had to be slightly modified to accommodate the fact that this version of the algorithm is interactive and does not have a set termination criterion. There is a time component in the APD penalty function formula of the type (t/t_max)^alpha. As there is no set t_max, the formula has been changed; see the documentation for the argument penalty_time_component below.

See the details of RVEA in the following paper

R. Cheng, Y. Jin, M. Olhofer and B. Sendhoff, A Reference Vector Guided Evolutionary Algorithm for Many-objective Optimization, IEEE Transactions on Evolutionary Computation, 2016

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • population_size (int, optional) – The desired population size, by default None, which sets up a default value of the population size depending upon the dimensionality of the problem.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • alpha (float, optional) – The alpha parameter in the APD selection mechanism. Read paper for details.

  • lattice_resolution (int, optional) – The number of divisions along individual axes in the objective space to be used while creating the reference vector lattice by the simplex lattice design. By default None

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.

  • penalty_time_component (Union[str, float], optional) – The APD formula had to be slightly changed. If penalty_time_component is a float in [0, 1], (t/t_max) is replaced by that constant for the entire algorithm. If penalty_time_component is “original”, the original intent of the paper is followed and (t/t_max) is calculated as (current generation count/total number of generations). If penalty_time_component is “function_count”, (t/t_max) is calculated as (current function evaluation count/total number of function evaluations). If penalty_time_component is “interactive”, (t/t_max) is calculated as (current generation count within an iteration/total generation count within an iteration); hence, the time penalty is zero at the beginning of each iteration and one at the end of each iteration. Note: if (t/t_max) ever exceeds one, the value one is used instead. If no value is provided, an appropriate default is selected: if interact is true, penalty_time_component defaults to “interactive”; if interact is false but total_function_evaluations is provided, it defaults to “function_count”; otherwise it defaults to “original”.

_time_penalty_constant()

Returns the constant time penalty value.

_time_penalty_original()

Calculates the appropriate time penalty value, by the original formula.

_time_penalty_interactive()

Calculates the appropriate time penalty value.

_time_penalty_function_count()

Calculates the appropriate time penalty value.

class desdeo_emo.EAs.MOEA_D(problem: desdeo_problem.MOProblem, n_neighbors: int = 20, n_parents: int = 2, use_repair: bool = True, initial_population: desdeo_emo.population.Population.Population = None, population_size: int = None, population_params: Dict = None, lattice_resolution: int = None, interact: bool = False, n_iterations: int = 10, n_gen_per_iter: int = 100, use_surrogates: bool = False, total_function_evaluations: int = 0, scalarization_function: desdeo_tools.scalarization.MOEADSF = PBI(), keep_archive: bool = False, save_non_dominated: bool = False)[source]

Bases: desdeo_emo.EAs.BaseEA.BaseDecompositionEA

Python implementation of MOEA/D.

Q. Zhang and H. Li, “MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition,” in IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, Dec. 2007, doi: 10.1109/TEVC.2007.892759.

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • scalarization_function (MOEADSF) – The scalarization function to compare the solutions. Some implementations can be found in desdeo-tools/scalarization/MOEADSF. By default it uses the PBI function.

  • n_neighbors (int, optional) – Number of reference vectors considered in the neighborhoods creation. The default number is 20.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • lattice_resolution (int, optional) – The number of divisions along individual axes in the objective space to be used while creating the reference vector lattice by the simplex lattice design. By default None

  • n_parents (int, optional) – Number of individuals considered for the generation of offspring solutions. The default option is 2.

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • use_surrogates (bool, optional) – A bool variable defining whether surrogate problems are to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.
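The default PBI scalarization combines the distance along a reference (weight) vector with a penalized perpendicular distance. A minimal sketch, assuming minimization; the actual implementation lives in desdeo-tools:

```python
import math

def pbi(objectives, weight, ideal, theta=5.0):
    """Penalty boundary intersection scalarization (sketch, minimization).
    d1: length of the projection of (f - z*) onto the weight vector;
    d2: perpendicular distance from (f - z*) to the weight vector."""
    diff = [f - z for f, z in zip(objectives, ideal)]
    w_norm = math.sqrt(sum(w * w for w in weight))
    d1 = abs(sum(d * w for d, w in zip(diff, weight))) / w_norm
    d2 = math.sqrt(sum((d - d1 * w / w_norm) ** 2 for d, w in zip(diff, weight)))
    return d1 + theta * d2
```

In MOEA/D, an offspring replaces a neighbour's solution when it achieves a better (lower) scalarized value for that neighbour's weight vector.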

_next_gen()

Run one generation of decomposition based EA. Intended to be used by next_iteration.

_select(current_neighborhood: int) list

Describes the selection mechanism for the given neighborhood. Returns indices of selected individuals.

Returns:

List of indices of individuals to be selected.

Return type:

list

class desdeo_emo.EAs.IBEA(problem: desdeo_problem.MOProblem, population_size: int, initial_population: desdeo_emo.population.Population.Population = None, a_priori: bool = False, interact: bool = False, population_params: Dict = None, n_iterations: int = 10, n_gen_per_iter: int = 100, total_function_evaluations: int = 0, use_surrogates: bool = False, kappa: float = 0.05, indicator: Callable = epsilon_indicator)[source]

Bases: desdeo_emo.EAs.BaseIndicatorEA.BaseIndicatorEA

Python Implementation of IBEA.

Most of the relevant code is contained in the super class. This class just assigns the EnviromentalSelection operator to BaseIndicatorEA.

Parameters:
  • problem (MOProblem) – The problem class object specifying the details of the problem.

  • population_size (int, optional) – The desired population size, by default None, which sets up a default value of the population size depending upon the dimensionality of the problem.

  • population_params (Dict, optional) – The parameters for the population class, by default None. See desdeo_emo.population.Population for more details.

  • initial_population (Population, optional) – An initial population class, by default None. Use this if you want to set up a specific starting population, such as when the output of one EA is to be used as the input of another.

  • a_priori (bool, optional) – A bool variable defining whether a priori preference is to be used or not. By default False

  • interact (bool, optional) – A bool variable defining whether interactive preference is to be used or not. By default False

  • n_iterations (int, optional) – The total number of iterations to be run, by default 10. This is not a hard limit and is only used for an internal counter.

  • n_gen_per_iter (int, optional) – The total number of generations in an iteration to be run, by default 100. This is not a hard limit and is only used for an internal counter.

  • total_function_evaluations (int, optional) – Set an upper limit to the total number of function evaluations. When set to zero, this argument is ignored and other termination criteria are used.

  • use_surrogates (bool, optional) – A bool variable defining whether surrogate problems are to be used or not. By default False

  • kappa (float, optional) – Fitness scaling value for indicators. By default 0.05.

  • indicator (Callable, optional) – Quality indicator to use in indicator-based EAs. By default, IBEA uses the additive epsilon indicator.

_fitness_assignment(fitnesses)

Performs the fitness assignment of the individuals.

_environmental_selection(fitnesses, worst_index)

Selects the worst member of the population, updates the remaining members’ fitness values relative to that worst individual, and then removes the worst individual from the population.
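The additive epsilon indicator and the IBEA-style fitness assignment it drives can be sketched as follows, assuming minimization. This is illustrative only; the real implementation also normalizes objectives and scales the indicator values:

```python
import math

def additive_epsilon(a, b):
    """Smallest eps such that a, shifted down by eps in every objective,
    weakly dominates b (minimization)."""
    return max(ai - bi for ai, bi in zip(a, b))

def ibea_fitness(objectives, kappa=0.05):
    """IBEA-style fitness: F(x) = sum over y != x of -exp(-I(y, x) / kappa).
    Dominated solutions accumulate large negative fitness."""
    return [
        sum(-math.exp(-additive_epsilon(y, x) / kappa)
            for j, y in enumerate(objectives) if j != i)
        for i, x in enumerate(objectives)
    ]

fits = ibea_fitness([[0.0, 0.0], [1.0, 1.0]])
# The dominated point [1.0, 1.0] receives a much lower fitness.
```

Environmental selection then repeatedly removes the individual with the worst fitness and recomputes the remaining fitness values, until the population is back to its target size.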