optimizers
The library provides a comprehensive set of evolutionary optimization algorithms for solving continuous and discrete optimization problems. Each optimizer is designed with flexibility in mind, supporting various selection methods, crossover operators, and mutation strategies.
Differential Evolution Optimizers
Differential Evolution is a population-based stochastic optimization method that operates on real-valued vectors. It uses vector differences for generating new candidate solutions and has proven effective for continuous optimization problems.
Reference: Storn, R., & Price, K. (1995). Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley.
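For intuition, one generation of the classical DE/rand/1/bin scheme can be sketched in a few lines of NumPy. This is an illustrative snippet, not the library's internal implementation; the sphere function here is just a stand-in fitness.

>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> def sphere(x):
...     # toy objective to minimize: sum of squares per row
...     return (x ** 2).sum(axis=1)
>>> pop = rng.uniform(-5, 5, size=(20, 3))   # 20 candidate vectors, 3 variables
>>> fit = sphere(pop)
>>> F, CR = 0.5, 0.5
>>> for i in range(len(pop)):
...     candidates = np.delete(np.arange(len(pop)), i)
...     r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
...     mutant = pop[r1] + F * (pop[r2] - pop[r3])            # 'rand_1' mutation
...     cross = rng.random(pop.shape[1]) < CR
...     cross[rng.integers(pop.shape[1])] = True              # at least one gene from the mutant
...     trial = np.where(cross, mutant, pop[i])               # binomial crossover
...     trial_fit = sphere(trial[None, :])[0]
...     if trial_fit < fit[i]:                                # greedy replacement (minimization)
...         pop[i], fit[i] = trial, trial_fit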
| Algorithm | Description |
|---|---|
| DifferentialEvolution | Classical Differential Evolution algorithm with multiple mutation strategies |
| jDE | Self-adaptive Differential Evolution with dynamic parameter control (Brest et al., 2006) |
| SHADE | Success-History based Adaptive Differential Evolution with parameter adaptation (Tanabe & Fukunaga, 2013) |
DifferentialEvolution
- class thefittest.optimizers.DifferentialEvolution(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int, mutation: str = 'rand_1', F: float = 0.5, CR: float = 0.5, elitism: bool = True, init_population: ndarray[Any, dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[float64]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: EvolutionaryAlgorithm
Differential Evolution optimizer for continuous optimization problems.
Differential Evolution (DE) is a stochastic population-based optimization algorithm that uses differential mutation and crossover operators to evolve solutions. It is particularly effective for continuous optimization problems.
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, num_variables) and return a 1D array of fitness values of shape (pop_size,).
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population.
- left_borderUnion[float, int, np.number, NDArray[np.number]]
Lower bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- right_borderUnion[float, int, np.number, NDArray[np.number]]
Upper bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- num_variablesint
Number of decision variables (problem dimensionality).
- mutationstr, optional (default=”rand_1”)
Mutation strategy to use. Available strategies:
‘rand_1’: mutant = x_r1 + F * (x_r2 - x_r3)
‘best_1’: mutant = x_best + F * (x_r1 - x_r2)
‘current_to_best_1’: mutant = x_i + F * (x_best - x_i) + F * (x_r1 - x_r2)
‘rand_to_best1’: mutant = x_r1 + F * (x_best - x_r1) + F * (x_r2 - x_r3)
‘rand_2’: mutant = x_r1 + F * (x_r2 - x_r3) + F * (x_r4 - x_r5)
‘best_2’: mutant = x_best + F * (x_r1 - x_r2) + F * (x_r3 - x_r4)
- Ffloat, optional (default=0.5)
Differential weight (mutation factor), typically in [0, 2].
- CRfloat, optional (default=0.5)
Crossover probability, should be in [0, 1].
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_populationOptional[NDArray[np.float64]], optional (default=None)
Initial population. If None, population is randomly initialized. Shape should be (pop_size, num_variables).
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination. Algorithm stops if this value is reached.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations. If None, runs all iterations.
- minimizationbool, optional (default=False)
If True, minimize the fitness function; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_historybool, optional (default=False)
If True, keeps history of all populations and fitness values.
- n_jobsint, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold to consider a solution as better.
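Because left_border and right_border accept either scalars or per-variable arrays, a search box with different bounds per dimension can be passed directly. A usage sketch based on the signature above (Sphere is one of the bundled benchmark functions):

>>> import numpy as np
>>> from thefittest.benchmarks import Sphere
>>> from thefittest.optimizers import DifferentialEvolution
>>>
>>> lower = np.array([-10.0, -5.0, 0.0])   # a different lower bound for each variable
>>> upper = np.array([10.0, 5.0, 1.0])     # a different upper bound for each variable
>>> optimizer = DifferentialEvolution(
...     fitness_function=Sphere(),
...     iters=50,
...     pop_size=50,
...     left_border=lower,
...     right_border=upper,
...     num_variables=3,
...     minimization=True
... )
>>> optimizer.fit()
>>> print('Best fitness:', optimizer.get_fittest()['fitness'])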
References
[1] Storn, R., & Price, K. (1995). Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley.
Examples
>>> from thefittest.benchmarks import Griewank
>>> from thefittest.optimizers import DifferentialEvolution
>>>
>>> # Define problem parameters
>>> n_dimension = 100
>>> left_border = -100.
>>> right_border = 100.
>>> number_of_generations = 500
>>> population_size = 500
>>>
>>> # Create optimizer instance
>>> optimizer = DifferentialEvolution(
...     fitness_function=Griewank(),
...     iters=number_of_generations,
...     pop_size=population_size,
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     show_progress_each=10,
...     minimization=True,
...     mutation="rand_1",
...     F=0.1,
...     CR=0.5,
...     keep_history=True
... )
>>>
>>> # Run optimization
>>> optimizer.fit()
>>>
>>> # Get results
>>> fittest = optimizer.get_fittest()
>>> stats = optimizer.get_stats()
>>>
>>> print('The fittest individual:', fittest['phenotype'])
>>> print('with fitness', fittest['fitness'])
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int, mutation: str = 'rand_1', F: float = 0.5, CR: float = 0.5, elitism: bool = True, init_population: ndarray[Any, dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[float64]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static float_population(pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int) ndarray[Any, dtype[float64]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
jDE
- class thefittest.optimizers.jDE(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int, mutation: str = 'rand_1', F_min: float = 0.1, F_max: float = 0.9, t_F: float = 0.1, t_CR: float = 0.1, elitism: bool = True, init_population: ndarray[Any, dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[float64]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: DifferentialEvolution
Self-adaptive Differential Evolution with control parameter adaptation.
jDE (self-adaptive Differential Evolution) is a variant of DE that self-adapts the control parameters F (mutation factor) and CR (crossover rate) during evolution. Each individual has its own F and CR values that evolve along with the solution, allowing the algorithm to automatically tune these parameters for the problem at hand.
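The adaptation rule of Brest et al. can be sketched for a single individual as follows; F_min, F_max, t_F and t_CR correspond to the parameters listed below (an illustrative snippet, not the library's internal code):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> F_min, F_max, t_F, t_CR = 0.1, 0.9, 0.1, 0.1
>>> F_i, CR_i = 0.5, 0.9     # control parameters carried by individual i
>>> # before producing its trial vector, the individual may resample its parameters
>>> if rng.random() < t_F:
...     F_i = F_min + rng.random() * F_max   # new F drawn from [F_min, F_min + F_max]
>>> if rng.random() < t_CR:
...     CR_i = rng.random()                  # new CR drawn from [0, 1]

Parameter values that produce successful offspring survive into the next generation together with the solution, so good settings propagate through the population.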
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, num_variables) and return a 1D array of fitness values of shape (pop_size,).
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population.
- left_borderUnion[float, int, np.number, NDArray[np.number]]
Lower bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- right_borderUnion[float, int, np.number, NDArray[np.number]]
Upper bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- num_variablesint
Number of decision variables (problem dimensionality).
- mutationstr, optional (default=”rand_1”)
Mutation strategy to use. See DifferentialEvolution for available strategies.
- F_minfloat, optional (default=0.1)
Minimum value for mutation factor F.
- F_maxfloat, optional (default=0.9)
Maximum value for mutation factor F.
- t_Ffloat, optional (default=0.1)
Probability of updating F parameter for each individual.
- t_CRfloat, optional (default=0.1)
Probability of updating CR parameter for each individual.
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_populationOptional[NDArray[np.float64]], optional (default=None)
Initial population. If None, population is randomly initialized. Shape should be (pop_size, num_variables).
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination. Algorithm stops if this value is reached.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations. If None, runs all iterations.
- minimizationbool, optional (default=False)
If True, minimize the fitness function; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_historybool, optional (default=False)
If True, keeps history of all populations, fitness values, and F/CR parameters.
- n_jobsint, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold to consider a solution as better.
References
[1] Brest, J., Greiner, S., Bošković, B., Mernik, M., & Žumer, V. (2006). Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Transactions on Evolutionary Computation, 10(6), 646-657. doi:10.1109/TEVC.2006.872133.
Examples
>>> from thefittest.benchmarks import Rastrigin
>>> from thefittest.optimizers import jDE
>>>
>>> # Define problem parameters
>>> n_dimension = 30
>>> left_border = -5.12
>>> right_border = 5.12
>>> number_of_generations = 200
>>> population_size = 100
>>>
>>> # Create jDE optimizer with self-adaptive parameters
>>> optimizer = jDE(
...     fitness_function=Rastrigin(),
...     iters=number_of_generations,
...     pop_size=population_size,
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     mutation="rand_1",
...     F_min=0.1,
...     F_max=0.9,
...     t_F=0.1,
...     t_CR=0.1,
...     minimization=True,
...     show_progress_each=20,
...     keep_history=True
... )
>>>
>>> # Run optimization
>>> optimizer.fit()
>>>
>>> # Get results
>>> fittest = optimizer.get_fittest()
>>> stats = optimizer.get_stats()
>>>
>>> print('The fittest individual:', fittest['phenotype'])
>>> print('with fitness', fittest['fitness'])
>>> print('Final F parameters:', stats['F'][-1])
>>> print('Final CR parameters:', stats['CR'][-1])
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int, mutation: str = 'rand_1', F_min: float = 0.1, F_max: float = 0.9, t_F: float = 0.1, t_CR: float = 0.1, elitism: bool = True, init_population: ndarray[Any, dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[float64]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- static float_population(pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int) ndarray[Any, dtype[float64]]
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
SHADE
- class thefittest.optimizers.SHADE(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int, elitism: bool = True, init_population: ndarray[Any, dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[float64]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: DifferentialEvolution
Success-History based Adaptive Differential Evolution optimizer.
SHADE is an advanced variant of Differential Evolution that adaptively adjusts its control parameters (F and CR) based on the success history of previous generations. It uses historical memory to guide parameter selection and incorporates an archive of recently replaced solutions.
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, num_variables) and return a 1D array of fitness values of shape (pop_size,).
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population. Also determines the size of the historical memory for F and CR parameters.
- left_borderUnion[float, int, np.number, NDArray[np.number]]
Lower bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- right_borderUnion[float, int, np.number, NDArray[np.number]]
Upper bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- num_variablesint
Number of decision variables (problem dimensionality).
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_populationOptional[NDArray[np.float64]], optional (default=None)
Initial population. If None, population is randomly initialized. Shape should be (pop_size, num_variables).
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination. Algorithm stops if this value is reached.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations. If None, runs all iterations.
- minimizationbool, optional (default=False)
If True, minimize the fitness function; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_historybool, optional (default=False)
If True, keeps history of all populations, fitness values, and parameter histories.
- n_jobsint, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold to consider a solution as better.
Notes
SHADE uses:
Current-to-pbest/1 mutation strategy with archive
Adaptive F parameter sampled from Cauchy distribution
Adaptive CR parameter sampled from normal distribution
Success-history based parameter adaptation using Lehmer mean
External archive of inferior solutions
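A rough sketch of how the per-individual control parameters are drawn from the success-history memory, and of the current-to-pbest/1 mutant (simplified; the library's exact implementation may differ in details such as resampling and archive handling):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> H_F = np.full(10, 0.5)    # historical memory of successful F values
>>> H_CR = np.full(10, 0.5)   # historical memory of successful CR values
>>> k = rng.integers(len(H_F))                                    # random memory cell
>>> F_i = min(1.0, abs(H_F[k] + 0.1 * rng.standard_cauchy()))     # Cauchy-distributed F
>>> CR_i = float(np.clip(rng.normal(H_CR[k], 0.1), 0.0, 1.0))     # normally distributed CR
>>> # mutant_i = x_i + F_i * (x_pbest - x_i) + F_i * (x_r1 - x_r2),
>>> # where x_r2 may be taken from the external archive. After each generation the
>>> # memory cells are overwritten with (Lehmer-mean) averages of the F and CR values
>>> # that produced successful trials.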
References
[1] Tanabe, R., & Fukunaga, A. (2013). Success-History Based Parameter Adaptation for Differential Evolution. 2013 IEEE Congress on Evolutionary Computation (CEC 2013), 71-78. doi:10.1109/CEC.2013.6557555.
Examples
>>> from thefittest.optimizers import SHADE
>>>
>>> # Define a custom optimization problem
>>> def custom_problem(x):
...     return (5 - x[:, 0])**2 + (12 - x[:, 1])**2
>>>
>>> # Set up problem parameters
>>> n_dimension = 2
>>> left_border = -100.
>>> right_border = 100.
>>> number_of_generations = 100
>>> population_size = 100
>>>
>>> # Create SHADE optimizer
>>> optimizer = SHADE(
...     fitness_function=custom_problem,
...     iters=number_of_generations,
...     pop_size=population_size,
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     show_progress_each=10,
...     minimization=True
... )
>>>
>>> # Run optimization
>>> optimizer.fit()
>>>
>>> # Get results
>>> fittest = optimizer.get_fittest()
>>> print('The fittest individual:', fittest['phenotype'])
>>> print('with fitness', fittest['fitness'])
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int, elitism: bool = True, init_population: ndarray[Any, dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[float64]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- static float_population(pop_size: int, left_border: float | int | number | ndarray[Any, dtype[number]], right_border: float | int | number | ndarray[Any, dtype[number]], num_variables: int) ndarray[Any, dtype[float64]]
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
Genetic Algorithms
Genetic Algorithms are search heuristics inspired by natural selection. They work with binary string representations and use selection, crossover, and mutation operators to evolve solutions over generations.
Reference: Holland, J. H. (1992). Genetic Algorithms. Scientific American, 267(1), 66-72.
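Fitness functions for the genetic algorithms follow the same convention as the rest of the library: they take a 2D array of candidate solutions and return a 1D array of fitness values. A hand-written OneMax-style fitness would therefore look like this (the library also ships a ready-made OneMax benchmark):

>>> import numpy as np
>>> def one_max(population):
...     # population: (pop_size, str_len) array of 0/1 genes
...     # returns a (pop_size,) array with the number of ones per string
...     return population.sum(axis=1).astype(np.float64)
>>> one_max(np.array([[1, 0, 1], [1, 1, 1]], dtype=np.int8))
array([2., 3.])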
| Algorithm | Description |
|---|---|
| GeneticAlgorithm | Classical Genetic Algorithm with binary string representation |
| SelfCGA | Self-configuring Genetic Algorithm with automatic parameter tuning (Semenkin & Semenkina, 2012) |
| PDPGA | Population-level Dynamic Probabilities Genetic Algorithm with operator probability adaptation (Niehaus & Banzhaf, 2001) |
| SHAGA | Success-History based Adaptive Genetic Algorithm (Stanovov et al., 2019) |
GeneticAlgorithm
- class thefittest.optimizers.GeneticAlgorithm(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selection: str = 'tournament_5', crossover: str = 'uniform_2', mutation: str = 'weak', init_population: ndarray[Any, dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: EvolutionaryAlgorithm
Genetic Algorithm optimizer for binary and combinatorial optimization problems.
Genetic Algorithm (GA) is a population-based evolutionary algorithm that uses selection, crossover, and mutation operators to evolve solutions encoded as binary strings. It is particularly effective for discrete optimization problems.
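To make the selection, crossover and mutation loop concrete, one GA generation on bit strings can be sketched as follows (illustrative NumPy code using tournament selection, uniform crossover and bit-flip mutation; not the library's internal implementation):

>>> import numpy as np
>>> rng = np.random.default_rng(1)
>>> pop = rng.integers(0, 2, size=(30, 20), dtype=np.int8)    # 30 strings of 20 bits
>>> fit = pop.sum(axis=1).astype(np.float64)                  # OneMax-style fitness
>>> def tournament(k=2):
...     idx = rng.integers(len(pop), size=k)
...     return pop[idx[np.argmax(fit[idx])]]                  # best of k random individuals
>>> children = []
>>> for _ in range(len(pop)):
...     p1, p2 = tournament(), tournament()
...     mask = rng.random(pop.shape[1]) < 0.5                 # uniform crossover
...     child = np.where(mask, p1, p2)
...     flip = rng.random(pop.shape[1]) < 1.0 / pop.shape[1]  # about one bit flipped on average
...     children.append(np.where(flip, 1 - child, child))
>>> pop = np.array(children, dtype=np.int8)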
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, …) and return a 1D array of fitness values.
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population.
- str_lenint
Length of the binary string (genotype length).
- tour_sizeint, optional (default=2)
Tournament size for tournament selection.
- mutation_ratefloat, optional (default=0.05)
Mutation rate for custom mutation strategies.
- parents_numint, optional (default=2)
Number of parents used in crossover operation.
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- selectionstr, optional (default=”tournament_5”)
Selection strategy. Available strategies:
‘proportional’: Fitness proportional selection
‘rank’: Rank-based selection
‘tournament_k’: Tournament selection with tour_size
‘tournament_3’, ‘tournament_5’, ‘tournament_7’: Fixed size tournaments
- crossoverstr, optional (default=”uniform_2”)
Crossover strategy. Available strategies for binary GAs:
‘empty’: No crossover (cloning)
‘one_point’: Single-point crossover
‘two_point’: Two-point crossover
‘uniform_2’: Uniform crossover with 2 parents
‘uniform_7’: Uniform crossover with 7 parents
‘uniform_k’: Uniform crossover with parents_num parents
‘uniform_prop_2’: Fitness-proportional uniform crossover with 2 parents
‘uniform_prop_7’: Fitness-proportional uniform crossover with 7 parents
‘uniform_prop_k’: Fitness-proportional uniform crossover with parents_num parents
‘uniform_rank_2’: Rank-based uniform crossover with 2 parents
‘uniform_rank_7’: Rank-based uniform crossover with 7 parents
‘uniform_rank_k’: Rank-based uniform crossover with parents_num parents
‘uniform_tour_3’: Tournament-based uniform crossover with 3 parents
‘uniform_tour_7’: Tournament-based uniform crossover with 7 parents
‘uniform_tour_k’: Tournament-based uniform crossover with parents_num parents
Note: Operators starting with ‘gp_’ are for Genetic Programming only.
- mutation : str, optional (default=”weak”)
Mutation strategy. Available strategies:
‘weak’: Flip 1/3 of a bit on average (the lowest mutation rate)
‘average’: Flip 1 bit on average
‘strong’: Flip 3 bits on average
‘custom_rate’: Use specified mutation_rate
- init_populationOptional[NDArray[np.byte]], optional (default=None)
Initial population. If None, population is randomly initialized. Shape should be (pop_size, str_len).
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination. Algorithm stops if this value is reached.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations. If None, runs all iterations.
- minimizationbool, optional (default=False)
If True, minimize the fitness function; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_historybool, optional (default=False)
If True, keeps history of all populations and fitness values.
- n_jobsint, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold to consider a solution as better.
References
[1] Holland, J. H. (1992). Genetic Algorithms. Scientific American, 267(1), 66-72.
Examples
Example 1: OneMax Problem with 1000 bits
>>> from thefittest.benchmarks import OneMax
>>> from thefittest.optimizers import GeneticAlgorithm
>>>
>>> number_of_iterations = 200
>>> population_size = 200
>>> string_length = 1000
>>>
>>> optimizer = GeneticAlgorithm(
...     fitness_function=OneMax(),
...     iters=number_of_iterations,
...     pop_size=population_size,
...     str_len=string_length,
...     show_progress_each=10
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print("Best fitness:", fittest["fitness"])
>>> print("Solution found:", fittest["genotype"])
Example 2: Custom Binary Optimization
>>> import numpy as np
>>>
>>> # Define custom fitness function
>>> def custom_fitness(X):
...     # Count ones in each solution
...     return X.sum(axis=1).astype(np.float64)
>>>
>>> optimizer = GeneticAlgorithm(
...     fitness_function=custom_fitness,
...     iters=100,
...     pop_size=50,
...     str_len=100,
...     selection='tournament_5',
...     crossover='uniform_2',
...     mutation='weak'
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best fitness:', fittest['fitness'])
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selection: str = 'tournament_5', crossover: str = 'uniform_2', mutation: str = 'weak', init_population: ndarray[Any, dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
SHAGA
- class thefittest.optimizers.SHAGA(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, elitism: bool = True, init_population: ndarray[Any, dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: EvolutionaryAlgorithm
Success-History based Adaptive Genetic Algorithm.
SHAGA is an adaptive genetic algorithm that uses success history to dynamically adjust mutation rate (MR) and crossover rate (CR) parameters during evolution. It combines concepts from SHADE (Success-History based Adaptive Differential Evolution) with genetic algorithms for binary optimization.
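Schematically, each individual draws its mutation rate (MR) and crossover rate (CR) around a randomly chosen entry of a success-history memory, and rates that produce improved offspring are written back into that memory. The sketch below only illustrates the idea; the actual sampling distributions and bounds used by the library may differ:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> str_len = 100
>>> H_MR = np.full(10, 1.0 / str_len)   # memory of successful mutation rates
>>> H_CR = np.full(10, 0.5)             # memory of successful crossover rates
>>> k = rng.integers(len(H_MR))                                    # random memory cell
>>> MR_i = float(np.clip(rng.normal(H_MR[k], 0.1 / str_len), 0.0, 1.0))
>>> CR_i = float(np.clip(rng.normal(H_CR[k], 0.1), 0.0, 1.0))
>>> # rates that produced an improved offspring are fed back into H_MR / H_CR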
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, str_len) and return a 1D array of fitness values.
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population.
- str_lenint
Length of the binary string (genotype length).
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_populationOptional[NDArray[np.byte]], optional (default=None)
Initial population. If None, population is randomly initialized. Shape should be (pop_size, str_len).
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination. Algorithm stops if this value is reached.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations. If None, runs all iterations.
- minimizationbool, optional (default=False)
If True, minimize the fitness function; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_historybool, optional (default=False)
If True, keeps history of all populations and fitness values.
- n_jobsint, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold to consider a solution as better.
References
[1] Stanovov, V., Akhmedova, S., & Semenkin, E. (2019). Genetic Algorithm with Success History based Parameter Adaptation. 180-187. doi:10.5220/0008071201800187.
Examples
Example 1: OneMax Problem
>>> from thefittest.benchmarks import OneMax
>>> from thefittest.optimizers import SHAGA
>>>
>>> optimizer = SHAGA(
...     fitness_function=OneMax(),
...     iters=150,
...     pop_size=100,
...     str_len=500,
...     show_progress_each=20
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best fitness:', fittest['fitness'])
Example 2: Continuous Optimization with Gray Code
>>> from thefittest.benchmarks import Sphere
>>> from thefittest.utils.transformations import GrayCode
>>>
>>> # Setup encoding
>>> encoder = GrayCode()
>>> encoder.fit(left_border=-10, right_border=10,
...             num_variables=5, h_per_variable=0.01)
>>> num_bits = encoder.get_bits_per_variable().sum()
>>>
>>> optimizer = SHAGA(
...     fitness_function=Sphere(),
...     genotype_to_phenotype=encoder.transform,
...     iters=200,
...     pop_size=100,
...     str_len=num_bits,
...     minimization=True
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- Attributes:
- _str_lenint
Length of binary strings.
- _MRNDArray[np.float64]
Current mutation rates for each individual.
- _CRNDArray[np.float64]
Current crossover rates for each individual.
- _H_MRNDArray[np.float64]
Historical memory of successful mutation rates.
- _H_CRNDArray[np.float64]
Historical memory of successful crossover rates.
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, elitism: bool = True, init_population: ndarray[Any, dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
SelfCGA
- class thefittest.optimizers.SelfCGA(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('empty', 'one_point', 'two_point', 'uniform_2', 'uniform_7', 'uniform_prop_2', 'uniform_prop_7', 'uniform_rank_2', 'uniform_rank_7', 'uniform_tour_3', 'uniform_tour_7'), mutations: Tuple[str, ...] = ('weak', 'average', 'strong'), init_population: ndarray[Any, dtype[int8]] | None = None, K: float = 2, selection_threshold_proba: float = 0.05, crossover_threshold_proba: float = 0.05, mutation_threshold_proba: float = 0.05, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: GeneticAlgorithm
Self-Configuring Genetic Algorithm with Modified Uniform Crossover.
SelfCGA is an adaptive genetic algorithm that automatically selects the best combination of selection, crossover, and mutation operators during evolution. It maintains a pool of operators and dynamically adjusts their usage probabilities based on their success in improving the population.
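The self-configuration can be pictured as a probability vector over each operator pool that is shifted toward the operators whose offspring performed best, while never dropping below the configured threshold. A schematic update for one pool (one common formulation of the Semenkin & Semenkina scheme; the library's exact formula may differ in details):

>>> import numpy as np
>>> crossovers = ['empty', 'one_point', 'two_point', 'uniform_2']
>>> proba = np.full(len(crossovers), 1.0 / len(crossovers))     # start from uniform probabilities
>>> K, threshold, n_generations = 2.0, 0.05, 100
>>> winner = 2      # index of the operator whose offspring were best this generation
>>> loss = np.minimum(proba - threshold, K / (len(crossovers) * n_generations))
>>> loss[winner] = 0.0                                          # the winner gives up nothing ...
>>> proba = proba - loss
>>> proba[winner] += loss.sum()                                 # ... and collects what the others lose
>>> round(float(proba.sum()), 10)
1.0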
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array and return a 1D array of fitness values.
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population.
- str_lenint
Length of the binary string (genotype length).
- tour_sizeint, optional (default=2)
Tournament size for tournament selection operators.
- mutation_ratefloat, optional (default=0.05)
Mutation rate for custom mutation strategies.
- parents_numint, optional (default=2)
Number of parents used in crossover operations.
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- selectionsTuple[str, …], optional
Tuple of selection operator names to use in the adaptive pool. Available operators (all from GeneticAlgorithm):
‘proportional’: Fitness proportional selection
‘rank’: Rank-based selection
‘tournament_k’: Tournament selection with tour_size
‘tournament_3’, ‘tournament_5’, ‘tournament_7’: Fixed size tournaments
Default: (‘proportional’, ‘rank’, ‘tournament_3’, ‘tournament_5’, ‘tournament_7’)
- crossoversTuple[str, …], optional
Tuple of crossover operator names to use in the adaptive pool. Available operators (all from GeneticAlgorithm except gp_* variants):
‘empty’: No crossover (cloning)
‘one_point’: Single-point crossover
‘two_point’: Two-point crossover
‘uniform_2’, ‘uniform_7’, ‘uniform_k’: Uniform crossover with N parents
‘uniform_prop_2’, ‘uniform_prop_7’, ‘uniform_prop_k’: Fitness-proportional uniform
‘uniform_rank_2’, ‘uniform_rank_7’, ‘uniform_rank_k’: Rank-based uniform
‘uniform_tour_3’, ‘uniform_tour_7’, ‘uniform_tour_k’: Tournament-based uniform
Default: (‘empty’, ‘one_point’, ‘two_point’, ‘uniform_2’, ‘uniform_7’, ‘uniform_prop_2’, ‘uniform_prop_7’, ‘uniform_rank_2’, ‘uniform_rank_7’, ‘uniform_tour_3’, ‘uniform_tour_7’)
- mutationsTuple[str, …], optional
Tuple of mutation operator names to use in the adaptive pool. Available operators (all from GeneticAlgorithm except gp_* variants):
‘weak’: Flip 1/3 of a bit on average (roughly one bit flipped per three individuals)
‘average’: Flip 1 bit on average
‘strong’: Flip 3 bits on average
‘custom_rate’: Use specified mutation_rate
Default: (‘weak’, ‘average’, ‘strong’)
- init_populationOptional[NDArray[np.byte]], optional (default=None)
Initial population. If None, population is randomly initialized.
- Kfloat, optional (default=2)
Coefficient for probability adjustment based on operator success (a sketch of this adaptation step is given after the parameter list).
- selection_threshold_probafloat, optional (default=0.05)
Minimum probability threshold for selection operators.
- crossover_threshold_probafloat, optional (default=0.05)
Minimum probability threshold for crossover operators.
- mutation_threshold_probafloat, optional (default=0.05)
Minimum probability threshold for mutation operators.
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations.
- minimizationbool, optional (default=False)
If True, minimize; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations.
- keep_historybool, optional (default=False)
If True, keeps history of populations and fitness values.
- n_jobsint, optional (default=1)
Number of parallel jobs. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold.
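Notes
The parameters K, selection_threshold_proba, crossover_threshold_proba, and mutation_threshold_proba govern how operator usage probabilities change between generations. The exact update rule is internal to the library; the sketch below only illustrates the general scheme used by self-configuring GAs of this kind: the currently most successful operator gains probability mass taken from the others, and no probability is allowed to drop below its threshold. The function adapt_probabilities and the (K - 1) / (K * n) redistribution share are illustrative assumptions, not part of the thefittest API.

>>> import numpy as np
>>>
>>> def adapt_probabilities(proba, success, K=2.0, threshold=0.05):
...     # Illustrative SelfCGA-style update, NOT the library's exact rule.
...     # proba: current usage probabilities (sum to 1); success: per-operator
...     # success measure; K: adaptation speed; threshold: minimum probability.
...     proba = np.asarray(proba, dtype=float)
...     n = len(proba)
...     best = int(np.argmax(success))
...     # Every operator except the current best gives up a bounded share ...
...     give = np.minimum(proba - threshold, (K - 1.0) / (K * n))
...     give[best] = 0.0
...     # ... and the most successful operator collects it.
...     proba = proba - give
...     proba[best] += give.sum()
...     return proba / proba.sum()
...
>>> # Three mutation operators; the third has been the most successful lately.
>>> adapt_probabilities([1 / 3, 1 / 3, 1 / 3], success=[0.1, 0.2, 0.7])
array([0.16666667, 0.16666667, 0.66666667])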
References
[1]Semenkin, E.S., Semenkina, M.E. Self-configuring Genetic Algorithm with Modified Uniform Crossover Operator. LNCS, 7331, 2012, pp. 414-421. https://doi.org/10.1007/978-3-642-30976-2_50
Examples
Example: Rastrigin Problem with 10 variables
>>> import numpy as np
>>> from thefittest.benchmarks import Rastrigin
>>> from thefittest.optimizers import SelfCGA
>>> from thefittest.utils.transformations import GrayCode
>>>
>>> # Problem parameters
>>> n_dimension = 10
>>> left_border = -5.
>>> right_border = 5.
>>> number_of_generations = 500
>>> population_size = 500
>>>
>>> # Setup Gray Code encoding for continuous optimization
>>> genotype_to_phenotype = GrayCode()
>>> genotype_to_phenotype.fit(
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     h_per_variable=0.001
... )
>>> num_bits = genotype_to_phenotype.get_bits_per_variable().sum()
>>>
>>> # Create optimizer
>>> optimizer = SelfCGA(
...     fitness_function=Rastrigin(),
...     genotype_to_phenotype=genotype_to_phenotype.transform,
...     iters=number_of_generations,
...     pop_size=population_size,
...     str_len=num_bits,
...     show_progress_each=30,
...     minimization=True,
...     optimal_value=0.
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best phenotype:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('empty', 'one_point', 'two_point', 'uniform_2', 'uniform_7', 'uniform_prop_2', 'uniform_prop_7', 'uniform_rank_2', 'uniform_rank_7', 'uniform_tour_3', 'uniform_tour_7'), mutations: Tuple[str, ...] = ('weak', 'average', 'strong'), init_population: ndarray[Any, dtype[int8]] | None = None, K: float = 2, selection_threshold_proba: float = 0.05, crossover_threshold_proba: float = 0.05, mutation_threshold_proba: float = 0.05, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
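This static helper draws a uniformly random 0/1 population (dtype int8) of the given size, which can be passed as init_population to seed a run explicitly. A minimal sketch, assuming the returned array has one row per individual and one column per bit (the layout implied by the fitness_function description above); problem sizes are chosen arbitrarily:

>>> from thefittest.benchmarks import OneMax
>>> from thefittest.optimizers import SelfCGA
>>>
>>> str_len, pop_size = 100, 50
>>> init_pop = SelfCGA.binary_string_population(pop_size=pop_size, str_len=str_len)
>>> # init_pop is expected to have shape (pop_size, str_len) with dtype int8
>>>
>>> optimizer = SelfCGA(
...     fitness_function=OneMax(),
...     iters=50,
...     pop_size=pop_size,
...     str_len=str_len,
...     init_population=init_pop
... )
>>> optimizer.fit()
>>> print('Best fitness:', optimizer.get_fittest()['fitness'])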
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
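When keep_history=True was set, the per-generation 'max_fitness' series can also be used to inspect convergence visually. The sketch below assumes matplotlib is available; it is not required by the library itself.

>>> import matplotlib.pyplot as plt
>>>
>>> stats = optimizer.get_stats()
>>> plt.plot(stats['max_fitness'])
>>> plt.xlabel('Generation')
>>> plt.ylabel('Best fitness')
>>> plt.show()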
PDPGA
- class thefittest.optimizers.PDPGA(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('empty', 'one_point', 'two_point', 'uniform_2', 'uniform_7', 'uniform_prop_2', 'uniform_prop_7', 'uniform_rank_2', 'uniform_rank_7', 'uniform_tour_3', 'uniform_tour_7'), mutations: Tuple[str, ...] = ('weak', 'average', 'strong'), init_population: ndarray[Any, dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: SelfCGA
Genetic Algorithm with Population-level Dynamic Probabilities (PDP).
PDPGA implements the PDP method for operator probability adaptation, which assigns each operator a minimum application probability (threshold) and dynamically adjusts probabilities based on operator success rates.
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array and return a 1D array of fitness values.
- itersint
Maximum number of iterations (generations) to run the algorithm.
- pop_sizeint
Number of individuals in the population.
- str_lenint
Length of the binary string (genotype length).
- tour_sizeint, optional (default=2)
Tournament size for tournament selection operators.
- mutation_ratefloat, optional (default=0.05)
Mutation rate for custom mutation strategies.
- parents_numint, optional (default=2)
Number of parents used in crossover operations.
- elitismbool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- selectionsTuple[str, …], optional
Tuple of selection operator names to use in the adaptive pool. Available operators (all from GeneticAlgorithm):
‘proportional’: Fitness proportional selection
‘rank’: Rank-based selection
‘tournament_k’: Tournament selection with tour_size
‘tournament_3’, ‘tournament_5’, ‘tournament_7’: Fixed size tournaments
Default: (‘proportional’, ‘rank’, ‘tournament_3’, ‘tournament_5’, ‘tournament_7’)
- crossoversTuple[str, …], optional
Tuple of crossover operator names to use in the adaptive pool. Available operators (all from GeneticAlgorithm except gp_* variants):
‘empty’: No crossover (cloning)
‘one_point’: Single-point crossover
‘two_point’: Two-point crossover
‘uniform_2’, ‘uniform_7’, ‘uniform_k’: Uniform crossover with 2, 7, or parents_num parents, respectively
‘uniform_prop_2’, ‘uniform_prop_7’, ‘uniform_prop_k’: Fitness-proportional uniform
‘uniform_rank_2’, ‘uniform_rank_7’, ‘uniform_rank_k’: Rank-based uniform
‘uniform_tour_3’, ‘uniform_tour_7’, ‘uniform_tour_k’: Tournament-based uniform
Default: (‘empty’, ‘one_point’, ‘two_point’, ‘uniform_2’, ‘uniform_7’, ‘uniform_prop_2’, ‘uniform_prop_7’, ‘uniform_rank_2’, ‘uniform_rank_7’, ‘uniform_tour_3’, ‘uniform_tour_7’)
- mutationsTuple[str, …], optional
Tuple of mutation operator names to use in the adaptive pool. Available operators (all from GeneticAlgorithm except gp_* variants):
‘weak’: Flip 1/3 of a bit on average (roughly one bit flipped per three individuals)
‘average’: Flip 1 bit on average
‘strong’: Flip 3 bits on average
‘custom_rate’: Use specified mutation_rate
Default: (‘weak’, ‘average’, ‘strong’)
- init_populationOptional[NDArray[np.byte]], optional (default=None)
Initial population. If None, population is randomly initialized.
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, genotype equals phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value for termination.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations.
- minimizationbool, optional (default=False)
If True, minimize; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations.
- keep_historybool, optional (default=False)
If True, keeps history of populations and fitness values.
- n_jobsint, optional (default=1)
Number of parallel jobs. -1 uses all processors.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to pass to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to pass to genotype_to_phenotype function.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold.
Notes
The threshold probabilities are automatically set to 0.2 / n for each operator type, where n is the number of operators of that type. This provides a balanced initial distribution that adapts based on operator performance during evolution. The method evaluates operator success by comparing offspring fitness with a randomly selected parent from the crossover pool.
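As a concrete check of the rule above, the thresholds implied by PDPGA's default operator pools (5 selection, 11 crossover, and 3 mutation operators, as listed in the signature) work out as follows:

>>> pool_sizes = {'selection': 5, 'crossover': 11, 'mutation': 3}
>>> {name: round(0.2 / n, 4) for name, n in pool_sizes.items()}
{'selection': 0.04, 'crossover': 0.0182, 'mutation': 0.0667}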
References
[1]Niehaus, J., Banzhaf, W. (2001). Adaption of Operator Probabilities in Genetic Programming. In: Miller, J., Tomassini, M., Lanzi, P.L., Ryan, C., Tettamanzi, A.G.B., Langdon, W.B. (eds) Genetic Programming. EuroGP 2001. Lecture Notes in Computer Science, vol 2038. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45355-5_26
Examples
Example 1: OneMax Problem
>>> from thefittest.benchmarks import OneMax
>>> from thefittest.optimizers import PDPGA
>>>
>>> optimizer = PDPGA(
...     fitness_function=OneMax(),
...     iters=150,
...     pop_size=100,
...     str_len=500,
...     show_progress_each=30
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best fitness:', fittest['fitness'])
Example 2: Custom Binary Problem with Operator Selection
>>> import numpy as np
>>>
>>> def custom_fitness(X):
...     return X.sum(axis=1).astype(np.float64)
...
>>> # Create optimizer with specific operator pools
>>> optimizer = PDPGA(
...     fitness_function=custom_fitness,
...     iters=100,
...     pop_size=50,
...     str_len=100,
...     selections=('rank', 'tournament_3', 'tournament_5'),
...     crossovers=('one_point', 'two_point', 'uniform_2'),
...     mutations=('weak', 'average', 'strong'),
...     keep_history=True
... )
>>>
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best fitness:', fittest['fitness'])
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('empty', 'one_point', 'two_point', 'uniform_2', 'uniform_7', 'uniform_prop_2', 'uniform_prop_7', 'uniform_rank_2', 'uniform_rank_7', 'uniform_tour_3', 'uniform_tour_7'), mutations: Tuple[str, ...] = ('weak', 'average', 'strong'), init_population: ndarray[Any, dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[int8]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
Genetic Programming
Genetic Programming evolves computer programs to solve problems. It uses tree-based representations and can perform symbolic regression, program synthesis, and other tasks requiring automatic program generation.
Reference: Koza, J. R. (1993). Genetic Programming - On the Programming of Computers by Means of Natural Selection. Complex Adaptive Systems.
| Algorithm | Description |
|---|---|
| GeneticProgramming | Genetic Programming for symbolic regression and program synthesis |
| SelfCGP | Self-configuring Genetic Programming (Semenkin & Semenkina, 2012) |
| PDPGP | Population-level Dynamic Probabilities Genetic Programming (Niehaus & Banzhaf, 2001) |
GeneticProgramming
- class thefittest.optimizers.GeneticProgramming(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 7, elitism: bool = True, selection: str = 'rank', crossover: str = 'gp_standard', mutation: str = 'gp_weak_grow', max_level: int = 16, init_level: int = 5, init_population: ndarray[Any, dtype[_ScalarType_co]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: GeneticAlgorithm
Genetic Programming optimizer for symbolic regression and tree-based evolution.
Genetic Programming (GP) evolves computer programs represented as tree structures using evolutionary operators adapted for tree manipulation. This implementation follows Koza’s approach to genetic programming.
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of tree-based solutions.
- unisetUniversalSet
Universal set defining terminal and function nodes for tree construction.
- itersint
Maximum number of iterations (generations).
- pop_sizeint
Number of individuals (trees) in the population.
- tour_sizeint, optional (default=2)
Tournament size for tournament selection.
- mutation_ratefloat, optional (default=0.05)
Mutation rate for custom mutation strategies.
- parents_numint, optional (default=7)
Number of parents used in crossover operation.
- elitismbool, optional (default=True)
If True, the best solution is preserved.
- selectionstr, optional (default=”rank”)
Selection strategy. Available: ‘proportional’, ‘rank’, ‘tournament_k’, ‘tournament_3’, ‘tournament_5’, ‘tournament_7’.
- crossoverstr, optional (default=”gp_standard”)
Crossover strategy for trees. Available GP crossover operators:
‘gp_empty’: No crossover (cloning)
‘gp_standard’: Standard GP subtree crossover
‘gp_one_point’: One-point crossover for trees
‘gp_uniform_2’, ‘gp_uniform_7’, ‘gp_uniform_k’: Uniform crossover variants
‘gp_uniform_prop_2’, ‘gp_uniform_prop_7’, ‘gp_uniform_prop_k’: Proportional uniform
‘gp_uniform_rank_2’, ‘gp_uniform_rank_7’, ‘gp_uniform_rank_k’: Rank-based uniform
‘gp_uniform_tour_3’, ‘gp_uniform_tour_7’, ‘gp_uniform_tour_k’: Tournament uniform
- mutationstr, optional (default=”gp_weak_grow”)
Mutation strategy for trees. Available GP mutation operators:
‘gp_weak_point’, ‘gp_average_point’, ‘gp_strong_point’: Point mutations
‘gp_weak_grow’, ‘gp_average_grow’, ‘gp_strong_grow’: Growing mutations
‘gp_weak_swap’, ‘gp_average_swap’, ‘gp_strong_swap’: Swap mutations
‘gp_weak_shrink’, ‘gp_average_shrink’, ‘gp_strong_shrink’: Shrink mutations
‘gp_custom_rate_point’, ‘gp_custom_rate_grow’, ‘gp_custom_rate_swap’, ‘gp_custom_rate_shrink’: Custom rate variants
- max_levelint, optional (default=16)
Maximum tree depth allowed during evolution.
- init_levelint, optional (default=5)
Initial tree depth for population initialization.
- init_populationOptional[NDArray], optional (default=None)
Initial population of trees. If None, randomly initialized.
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode tree to phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations.
- minimizationbool, optional (default=False)
If True, minimize; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations.
- keep_historybool, optional (default=False)
If True, keeps history of populations and fitness.
- n_jobsint, optional (default=1)
Number of parallel jobs.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to genotype_to_phenotype.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold.
References
[1]Koza, John R. (1993). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press.
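Examples
The sketch below shows typical usage under two stated assumptions: uniset is a UniversalSet that has already been built from the desired function and terminal nodes (its construction is outside the scope of this page), and tree_fitness is a user-defined callable that scores an array of candidate trees and returns a 1D float array. Both names are placeholders, not objects provided by the library; the explicit half_and_half call is included only to illustrate that static helper.

>>> from thefittest.optimizers import GeneticProgramming
>>>
>>> optimizer = GeneticProgramming(
...     fitness_function=tree_fitness,  # placeholder: user-defined tree evaluator
...     uniset=uniset,                  # placeholder: prebuilt UniversalSet
...     iters=200,
...     pop_size=300,
...     selection='tournament_5',
...     crossover='gp_standard',
...     mutation='gp_weak_grow',
...     max_level=12,
...     init_population=GeneticProgramming.half_and_half(300, uniset, 5),
...     minimization=True
... )
>>> optimizer.fit()
>>> best = optimizer.get_fittest()
>>> print('Best tree:', best['phenotype'])
>>> print('Best fitness:', best['fitness'])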
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 7, elitism: bool = True, selection: str = 'rank', crossover: str = 'gp_standard', mutation: str = 'gp_weak_grow', max_level: int = 16, init_level: int = 5, init_population: ndarray[Any, dtype[_ScalarType_co]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static half_and_half(pop_size: int, uniset: UniversalSet, max_level: int) ndarray[Any, dtype[_ScalarType_co]]
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
PDPGP
- class thefittest.optimizers.PDPGP(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'), mutations: Tuple[str, ...] = ('gp_weak_point', 'gp_average_point', 'gp_strong_point', 'gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'), max_level: int = 16, init_level: int = 4, init_population: ndarray[Any, dtype[_ScalarType_co]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: GeneticProgramming, PDPGA
Genetic Programming with Population-level Dynamic Probabilities (PDP).
PDPGP extends genetic programming with the PDP method for adaptive operator selection. It dynamically adjusts probabilities of GP-specific selection, crossover, and mutation operators based on their success rates during evolution.
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of tree-based solutions.
- unisetUniversalSet
Universal set defining terminal and function nodes.
- itersint
Maximum number of iterations (generations).
- pop_sizeint
Number of individuals (trees) in the population.
- tour_sizeint, optional (default=2)
Tournament size for tournament selection.
- mutation_ratefloat, optional (default=0.05)
Mutation rate for custom mutation strategies.
- parents_numint, optional (default=2)
Number of parents used in crossover.
- elitismbool, optional (default=True)
If True, the best solution is preserved.
- selectionsTuple[str, …], optional
Tuple of selection operator names to use in the adaptive pool. Available operators (same as GeneticAlgorithm):
‘proportional’: Fitness proportional selection
‘rank’: Rank-based selection
‘tournament_k’: Tournament selection with tour_size
‘tournament_3’, ‘tournament_5’, ‘tournament_7’: Fixed size tournaments
Default: (‘proportional’, ‘rank’, ‘tournament_3’, ‘tournament_5’, ‘tournament_7’)
- crossoversTuple[str, …], optional
Tuple of GP crossover operator names to use in the adaptive pool. Available GP crossover operators:
‘gp_empty’: No crossover (cloning)
‘gp_standard’: Standard GP subtree crossover
‘gp_one_point’: One-point crossover for trees
‘gp_uniform_2’, ‘gp_uniform_7’, ‘gp_uniform_k’: Uniform crossover variants
‘gp_uniform_prop_2’, ‘gp_uniform_prop_7’, ‘gp_uniform_prop_k’: Proportional
‘gp_uniform_rank_2’, ‘gp_uniform_rank_7’, ‘gp_uniform_rank_k’: Rank-based
‘gp_uniform_tour_3’, ‘gp_uniform_tour_7’, ‘gp_uniform_tour_k’: Tournament
Default: (‘gp_standard’, ‘gp_one_point’, ‘gp_uniform_rank_2’)
- mutationsTuple[str, …], optional
Tuple of GP mutation operator names to use in the adaptive pool. Available GP mutation operators:
Point mutations: ‘gp_weak_point’, ‘gp_average_point’, ‘gp_strong_point’
Growing mutations: ‘gp_weak_grow’, ‘gp_average_grow’, ‘gp_strong_grow’
Swap mutations: ‘gp_weak_swap’, ‘gp_average_swap’, ‘gp_strong_swap’
Shrink mutations: ‘gp_weak_shrink’, ‘gp_average_shrink’, ‘gp_strong_shrink’
Custom rate: ‘gp_custom_rate_point’, ‘gp_custom_rate_grow’, ‘gp_custom_rate_swap’, ‘gp_custom_rate_shrink’
Default: (‘gp_weak_point’, ‘gp_average_point’, ‘gp_strong_point’, ‘gp_weak_grow’, ‘gp_average_grow’, ‘gp_strong_grow’)
- max_levelint, optional (default=16)
Maximum tree depth allowed.
- init_levelint, optional (default=4)
Initial tree depth.
- init_populationOptional[NDArray], optional (default=None)
Initial population of trees.
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode tree to phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations.
- minimizationbool, optional (default=False)
If True, minimize; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations.
- keep_historybool, optional (default=False)
If True, keeps history of populations.
- n_jobsint, optional (default=1)
Number of parallel jobs.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to genotype_to_phenotype.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold.
References
[1]Niehaus, J., Banzhaf, W. (2001). Adaption of Operator Probabilities in Genetic Programming. In: Miller, J., Tomassini, M., Lanzi, P.L., Ryan, C., Tettamanzi, A.G.B., Langdon, W.B. (eds) Genetic Programming. EuroGP 2001. Lecture Notes in Computer Science, vol 2038. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45355-5_26
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'), mutations: Tuple[str, ...] = ('gp_weak_point', 'gp_average_point', 'gp_strong_point', 'gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'), max_level: int = 16, init_level: int = 4, init_population: ndarray[Any, dtype[_ScalarType_co]] | None = None, genotype_to_phenotype: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
- static half_and_half(pop_size: int, uniset: UniversalSet, max_level: int) ndarray[Any, dtype[_ScalarType_co]]
SelfCGP
- class thefittest.optimizers.SelfCGP(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'), mutations: Tuple[str, ...] = ('gp_weak_point', 'gp_average_point', 'gp_strong_point', 'gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'), max_level: int = 16, init_level: int = 4, init_population: ndarray[Any, dtype[_ScalarType_co]] | None = None, K: float = 2, selection_threshold_proba: float = 0.05, crossover_threshold_proba: float = 0.05, mutation_threshold_proba: float = 0.05, genotype_to_phenotype: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Bases: GeneticProgramming, SelfCGA
Self-Configuring Genetic Programming with modified uniform crossover.
SelfCGP extends genetic programming with self-adaptive operator selection. It automatically configures GP-specific selection, crossover, and mutation operators during evolution based on their performance, using the same adaptation mechanism as SelfCGA but applied to tree-based operators.
- Parameters:
- fitness_functionCallable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of tree-based solutions.
- unisetUniversalSet
Universal set defining terminal and function nodes.
- itersint
Maximum number of iterations (generations).
- pop_sizeint
Number of individuals (trees) in the population.
- tour_sizeint, optional (default=2)
Tournament size for tournament selection.
- mutation_ratefloat, optional (default=0.05)
Mutation rate for custom mutation strategies.
- parents_numint, optional (default=2)
Number of parents used in crossover.
- elitismbool, optional (default=True)
If True, the best solution is preserved.
- selectionsTuple[str, …], optional
Tuple of selection operator names to use in the adaptive pool. Available operators (same as GeneticAlgorithm):
‘proportional’: Fitness proportional selection
‘rank’: Rank-based selection
‘tournament_k’: Tournament selection with tour_size
‘tournament_3’, ‘tournament_5’, ‘tournament_7’: Fixed size tournaments
Default: (‘proportional’, ‘rank’, ‘tournament_3’, ‘tournament_5’, ‘tournament_7’)
- crossoversTuple[str, …], optional
Tuple of GP crossover operator names to use in the adaptive pool. Available GP crossover operators:
‘gp_empty’: No crossover (cloning)
‘gp_standard’: Standard GP subtree crossover
‘gp_one_point’: One-point crossover for trees
‘gp_uniform_2’, ‘gp_uniform_7’, ‘gp_uniform_k’: Uniform crossover variants
‘gp_uniform_prop_2’, ‘gp_uniform_prop_7’, ‘gp_uniform_prop_k’: Proportional
‘gp_uniform_rank_2’, ‘gp_uniform_rank_7’, ‘gp_uniform_rank_k’: Rank-based
‘gp_uniform_tour_3’, ‘gp_uniform_tour_7’, ‘gp_uniform_tour_k’: Tournament
Default: (‘gp_standard’, ‘gp_one_point’, ‘gp_uniform_rank_2’)
- mutationsTuple[str, …], optional
Tuple of GP mutation operator names to use in the adaptive pool. Available GP mutation operators:
Point mutations: ‘gp_weak_point’, ‘gp_average_point’, ‘gp_strong_point’
Growing mutations: ‘gp_weak_grow’, ‘gp_average_grow’, ‘gp_strong_grow’
Swap mutations: ‘gp_weak_swap’, ‘gp_average_swap’, ‘gp_strong_swap’
Shrink mutations: ‘gp_weak_shrink’, ‘gp_average_shrink’, ‘gp_strong_shrink’
Custom rate: ‘gp_custom_rate_point’, ‘gp_custom_rate_grow’, ‘gp_custom_rate_swap’, ‘gp_custom_rate_shrink’
Default: (‘gp_weak_point’, ‘gp_average_point’, ‘gp_strong_point’, ‘gp_weak_grow’, ‘gp_average_grow’, ‘gp_strong_grow’)
- max_levelint, optional (default=16)
Maximum tree depth allowed.
- init_levelint, optional (default=4)
Initial tree depth.
- init_populationOptional[NDArray], optional (default=None)
Initial population of trees.
- Kfloat, optional (default=2)
Coefficient for probability adjustment based on operator success.
- selection_threshold_probafloat, optional (default=0.05)
Minimum probability threshold for selection operators.
- crossover_threshold_probafloat, optional (default=0.05)
Minimum probability threshold for crossover operators.
- mutation_threshold_probafloat, optional (default=0.05)
Minimum probability threshold for mutation operators.
- genotype_to_phenotypeOptional[Callable], optional (default=None)
Function to decode tree to phenotype.
- optimal_valueOptional[float], optional (default=None)
Known optimal value for termination.
- termination_error_valuefloat, optional (default=0.0)
Acceptable error from optimal value.
- no_increase_numOptional[int], optional (default=None)
Stop if no improvement for this many iterations.
- minimizationbool, optional (default=False)
If True, minimize; if False, maximize.
- show_progress_eachOptional[int], optional (default=None)
Print progress every N iterations.
- keep_historybool, optional (default=False)
If True, keeps history of populations.
- n_jobsint, optional (default=1)
Number of parallel jobs.
- fitness_function_argsOptional[Dict], optional (default=None)
Additional arguments to fitness function.
- genotype_to_phenotype_argsOptional[Dict], optional (default=None)
Additional arguments to genotype_to_phenotype.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generationOptional[Callable], optional (default=None)
Callback after each generation.
- fitness_update_epsfloat, optional (default=0.0)
Minimum improvement threshold.
References
[1]Semenkin, Eugene & Semenkina, Maria. (2012). Self-configuring genetic programming algorithm with modified uniform crossover. IEEE Congress on Evolutionary Computation, 1-6. http://dx.doi.org/10.1109/CEC.2012.6256587
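Examples
As for GeneticProgramming above, uniset and tree_fitness are placeholders for a prebuilt UniversalSet and a user-defined tree evaluator; they are not provided by the library. The crossover and mutation pools shown are subsets of the documented defaults, restricted here for brevity.

>>> from thefittest.optimizers import SelfCGP
>>>
>>> optimizer = SelfCGP(
...     fitness_function=tree_fitness,  # placeholder: user-defined tree evaluator
...     uniset=uniset,                  # placeholder: prebuilt UniversalSet
...     iters=300,
...     pop_size=500,
...     crossovers=('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'),
...     mutations=('gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'),
...     keep_history=True,
...     minimization=True
... )
>>> optimizer.fit()
>>> print('Best fitness:', optimizer.get_fittest()['fitness'])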
- __init__(fitness_function: Callable[[ndarray[Any, dtype[Any]]], ndarray[Any, dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'), mutations: Tuple[str, ...] = ('gp_weak_point', 'gp_average_point', 'gp_strong_point', 'gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'), max_level: int = 16, init_level: int = 4, init_population: ndarray[Any, dtype[_ScalarType_co]] | None = None, K: float = 2, selection_threshold_proba: float = 0.05, crossover_threshold_proba: float = 0.05, mutation_threshold_proba: float = 0.05, genotype_to_phenotype: Callable[[ndarray[Any, dtype[_ScalarType_co]]], ndarray[Any, dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
- static binary_string_population(pop_size: int, str_len: int) ndarray[Any, dtype[int8]]
- fit() EvolutionaryAlgorithm
Execute the evolutionary optimization process.
Runs the evolutionary algorithm for the specified number of iterations, evolving the population to optimize the fitness function. The method handles initialization, population evolution, fitness evaluation, and termination conditions.
- Returns:
- EvolutionaryAlgorithm
Returns self to allow method chaining.
Notes
The optimization process includes:
Random state initialization
Initial population generation
Iterative evolution:
Selection and variation operations
Fitness evaluation
Population update
Progress tracking (if enabled)
Termination checking
The process terminates when:
Maximum iterations reached
Optimal value achieved (if specified)
No improvement for specified number of iterations (if configured)
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution found:', fittest['phenotype'])
- get_calls() int
Get the number of fitness function calls already performed.
Returns the total number of times the fitness function has been evaluated since the start of the optimization process. This includes all evaluations across generations and parallel executions.
- Returns:
- int
Number of fitness function calls performed so far.
See also
get_remains_calls : Returns the number of remaining fitness evaluations.
Examples
>>> optimizer.fit()
>>> calls = optimizer.get_calls()
>>> print(f"Fitness function was called {calls} times")
- get_fittest() Dict
Get the best solution found during optimization.
Returns a dictionary containing the genotype, phenotype, and fitness value of the best individual found during the evolutionary process.
- Returns:
- Dict
Dictionary with keys:
- ‘genotype’: array-like
Internal representation of the best solution.
- ‘phenotype’: array-like
Decoded representation of the best solution.
- ‘fitness’: float
Fitness value of the best solution.
Examples
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print('Best solution:', fittest['phenotype'])
>>> print('Best fitness:', fittest['fitness'])
- get_remains_calls() int
Get the number of remaining fitness function calls.
Returns the number of fitness evaluations that can still be performed based on the configured population size and number of iterations.
- Returns:
- int
Number of remaining fitness function calls.
Examples
>>> optimizer.fit()
>>> remaining = optimizer.get_remains_calls()
>>> print(f"Remaining calls: {remaining}")
- get_stats() Statistics
Get statistics collected during the optimization process.
Returns a Statistics object (dictionary subclass) containing history of various metrics collected during evolution if keep_history=True was set during initialization.
- Returns:
- Statistics
Dictionary containing evolution statistics with keys such as:
- ‘fitness’: list of arrays
Fitness values of populations at each iteration.
- ‘population_g’: list of arrays
Genotype populations at each iteration.
- ‘population_ph’: list of arrays
Phenotype populations at each iteration.
- ‘max_fitness’: list of float
Maximum fitness at each iteration.
Additional keys may be present depending on the specific algorithm.
Notes
Statistics are only collected if keep_history=True was specified during optimizer initialization. Otherwise, returns an empty dictionary.
Examples
>>> optimizer.fit()
>>> stats = optimizer.get_stats()
>>> print(f"Number of generations: {len(stats['max_fitness'])}")
>>> print(f"Final max fitness: {stats['max_fitness'][-1]}")
- static half_and_half(pop_size: int, uniset: UniversalSet, max_level: int) ndarray[Any, dtype[_ScalarType_co]]