optimizers
- class thefittest.optimizers.DifferentialEvolution(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], right_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], num_variables: int, mutation: str = 'rand_1', F: float = 0.5, CR: float = 0.5, elitism: bool = True, init_population: ndarray[tuple[Any, ...], dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[float64]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Differential Evolution optimizer for continuous optimization problems.
Differential Evolution (DE) is a stochastic population-based optimization algorithm that uses differential mutation and crossover operators to evolve solutions. It is particularly effective for continuous optimization problems.
- Parameters:
- fitness_function : Callable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, num_variables) and return a 1D array of fitness values of shape (pop_size,).
- iters : int
Maximum number of iterations (generations) to run the algorithm.
- pop_size : int
Number of individuals in the population.
- left_border : Union[float, int, np.number, NDArray[np.number]]
Lower bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- right_border : Union[float, int, np.number, NDArray[np.number]]
Upper bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- num_variables : int
Number of decision variables (problem dimensionality).
- mutation : str, optional (default="rand_1")
Mutation strategy to use (see the sketch after this parameter list). Available strategies:
'rand_1': mutant = x_r1 + F * (x_r2 - x_r3)
'best_1': mutant = x_best + F * (x_r1 - x_r2)
'current_to_best_1': mutant = x_i + F * (x_best - x_i) + F * (x_r1 - x_r2)
'rand_to_best1': mutant = x_r1 + F * (x_best - x_r1) + F * (x_r2 - x_r3)
'rand_2': mutant = x_r1 + F * (x_r2 - x_r3) + F * (x_r4 - x_r5)
'best_2': mutant = x_best + F * (x_r1 - x_r2) + F * (x_r3 - x_r4)
- F : float, optional (default=0.5)
Differential weight (mutation factor), typically in [0, 2].
- CR : float, optional (default=0.5)
Crossover probability, should be in [0, 1].
- elitism : bool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_population : Optional[NDArray[np.float64]], optional (default=None)
Initial population. If None, the population is randomly initialized. Shape should be (pop_size, num_variables).
- genotype_to_phenotype : Optional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, the genotype equals the phenotype.
- optimal_value : Optional[float], optional (default=None)
Known optimal value for termination. The algorithm stops if this value is reached.
- termination_error_value : float, optional (default=0.0)
Acceptable error from the optimal value for termination.
- no_increase_num : Optional[int], optional (default=None)
Stop if there is no improvement for this many iterations. If None, all iterations are run.
- minimization : bool, optional (default=False)
If True, minimize the fitness function; if False, maximize it.
- show_progress_each : Optional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_history : bool, optional (default=False)
If True, keeps a history of all populations and fitness values.
- n_jobs : int, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_args : Optional[Dict], optional (default=None)
Additional arguments to pass to the fitness function.
- genotype_to_phenotype_args : Optional[Dict], optional (default=None)
Additional arguments to pass to the genotype_to_phenotype function.
- random_state : Optional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generation : Optional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_eps : float, optional (default=0.0)
Minimum improvement threshold for a solution to be considered better.
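To make the strategy formulas listed under the mutation parameter concrete, here is a minimal NumPy sketch of the 'rand_1' and 'best_1' rules. It is illustrative only, not the library's internal implementation; the helper functions rand_1 and best_1 are local to this sketch and assume a maximization setting.

import numpy as np

def rand_1(population, F, rng):
    # 'rand_1': mutant_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i
    pop_size = len(population)
    mutants = np.empty_like(population)
    for i in range(pop_size):
        r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
        mutants[i] = population[r1] + F * (population[r2] - population[r3])
    return mutants

def best_1(population, fitness, F, rng):
    # 'best_1': mutant_i = x_best + F * (x_r1 - x_r2)
    best = population[np.argmax(fitness)]  # maximization assumed in this sketch
    pop_size = len(population)
    mutants = np.empty_like(population)
    for i in range(pop_size):
        r1, r2 = rng.choice([j for j in range(pop_size) if j != i], size=2, replace=False)
        mutants[i] = best + F * (population[r1] - population[r2])
    return mutants

rng = np.random.default_rng(0)
pop = rng.uniform(-100.0, 100.0, size=(10, 3))   # toy population: 10 individuals, 3 variables
fit = -np.sum(pop**2, axis=1)                    # toy fitness (higher is better here)
print(rand_1(pop, 0.5, rng).shape)               # (10, 3)
print(best_1(pop, fit, 0.5, rng).shape)          # (10, 3)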
- Attributes:
- _num_variables : int
Number of decision variables.
- _left : NDArray[np.float64]
Lower bounds for each variable.
- _right : NDArray[np.float64]
Upper bounds for each variable.
- _specified_mutation : str
Selected mutation strategy.
- _F : Union[float, NDArray[np.float64]]
Mutation factor.
- _CR : Union[float, NDArray[np.float64]]
Crossover probability.
Methods
fit()
Execute the evolutionary optimization process.
get_fittest()
Get the best solution found.
get_stats()
Get statistics collected during optimization.
get_remains_calls()
Get the number of remaining fitness function calls.
float_population(pop_size, left_border, right_border, num_variables)
Generate a random population of floating-point vectors.
References
[1] Storn, Rainer & Price, Kenneth (1995). Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces. Journal of Global Optimization, 23.
Examples
>>> from thefittest.benchmarks import Griewank
>>> from thefittest.optimizers import DifferentialEvolution
>>>
>>> # Define problem parameters
>>> n_dimension = 100
>>> left_border = -100.
>>> right_border = 100.
>>> number_of_generations = 500
>>> population_size = 500
>>>
>>> # Create optimizer instance
>>> optimizer = DifferentialEvolution(
...     fitness_function=Griewank(),
...     iters=number_of_generations,
...     pop_size=population_size,
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     show_progress_each=10,
...     minimization=True,
...     mutation="rand_1",
...     F=0.1,
...     CR=0.5,
...     keep_history=True
... )
>>>
>>> # Run optimization
>>> optimizer.fit()
>>>
>>> # Get results
>>> fittest = optimizer.get_fittest()
>>> stats = optimizer.get_stats()
>>>
>>> print('The fittest individ:', fittest['phenotype'])
>>> print('with fitness', fittest['fitness'])
- static float_population(pop_size: int, left_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], right_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], num_variables: int) ndarray[tuple[Any, ...], dtype[float64]]
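A small usage sketch for float_population: it is assumed here, from the description above, to draw a random population inside the given borders, which can then be inspected or reused via init_population.

>>> from thefittest.optimizers import DifferentialEvolution
>>>
>>> # Pre-generate a starting population before optimization
>>> init_pop = DifferentialEvolution.float_population(
...     pop_size=50, left_border=-100.0, right_border=100.0, num_variables=10
... )
>>> # expected shape: (50, 10), i.e. (pop_size, num_variables); suitable for init_population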
- class thefittest.optimizers.GeneticAlgorithm(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selection: str = 'tournament_5', crossover: str = 'uniform_2', mutation: str = 'weak', init_population: ndarray[tuple[Any, ...], dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[int8]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66-72
- static binary_string_population(pop_size: int, str_len: int) ndarray[tuple[Any, ...], dtype[int8]]
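The GeneticAlgorithm entry above gives only the signature and reference. The sketch below shows one plausible use on a OneMax-style problem (maximize the number of ones in a binary string); it assumes the same fit()/get_fittest() interface documented for DifferentialEvolution and SHADE, and one_max is defined locally rather than imported from thefittest.benchmarks.

>>> import numpy as np
>>> from thefittest.optimizers import GeneticAlgorithm
>>>
>>> # OneMax: the fitness of each binary string is its number of ones
>>> def one_max(population):
...     return np.sum(population, axis=1, dtype=np.float64)
>>>
>>> optimizer = GeneticAlgorithm(
...     fitness_function=one_max,
...     iters=100,
...     pop_size=100,
...     str_len=50,
...     selection="tournament_5",
...     crossover="uniform_2",
...     mutation="weak",
... )
>>> optimizer.fit()
>>> fittest = optimizer.get_fittest()
>>> print(fittest['fitness'])  # should approach 50.0 (all ones) if the run converges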
- class thefittest.optimizers.GeneticProgramming(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 7, elitism: bool = True, selection: str = 'rank', crossover: str = 'gp_standard', mutation: str = 'gp_weak_grow', max_level: int = 16, init_level: int = 5, init_population: ndarray[tuple[Any, ...], dtype[_ScalarT]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Koza, John R. "Genetic programming - on the programming of computers by means of natural selection." Complex Adaptive Systems (1993).
- static half_and_half(pop_size: int, uniset: UniversalSet, max_level: int) ndarray[tuple[Any, ...], dtype[_ScalarT]]
- class thefittest.optimizers.PDPGA(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('empty', 'one_point', 'two_point', 'uniform_2', 'uniform_7', 'uniform_prop_2', 'uniform_prop_7', 'uniform_rank_2', 'uniform_rank_7', 'uniform_tour_3', 'uniform_tour_7'), mutations: Tuple[str, ...] = ('weak', 'average', 'strong'), init_population: ndarray[tuple[Any, ...], dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[int8]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Niehaus, J., Banzhaf, W. (2001). Adaption of Operator Probabilities in Genetic Programming. In: Miller, J., Tomassini, M., Lanzi, P.L., Ryan, C., Tettamanzi, A.G.B., Langdon, W.B. (eds) Genetic Programming. EuroGP 2001. Lecture Notes in Computer Science, vol 2038. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45355-5_26
- class thefittest.optimizers.PDPGP(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'), mutations: Tuple[str, ...] = ('gp_weak_point', 'gp_average_point', 'gp_strong_point', 'gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'), max_level: int = 16, init_level: int = 4, init_population: ndarray[tuple[Any, ...], dtype[_ScalarT]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[_ScalarT]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Niehaus, J., Banzhaf, W. (2001). Adaption of Operator Probabilities in Genetic Programming. In: Miller, J., Tomassini, M., Lanzi, P.L., Ryan, C., Tettamanzi, A.G.B., Langdon, W.B. (eds) Genetic Programming. EuroGP 2001. Lecture Notes in Computer Science, vol 2038. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45355-5_26
- class thefittest.optimizers.SHADE(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], right_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], num_variables: int, elitism: bool = True, init_population: ndarray[tuple[Any, ...], dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[float64]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Success-History based Adaptive Differential Evolution optimizer.
SHADE is an advanced variant of Differential Evolution that adaptively adjusts its control parameters (F and CR) based on the success history of previous generations. It uses historical memory to guide parameter selection and incorporates an archive of recently replaced solutions.
- Parameters:
- fitness_function : Callable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, num_variables) and return a 1D array of fitness values of shape (pop_size,).
- iters : int
Maximum number of iterations (generations) to run the algorithm.
- pop_size : int
Number of individuals in the population. Also determines the size of the historical memory for the F and CR parameters.
- left_border : Union[float, int, np.number, NDArray[np.number]]
Lower bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- right_border : Union[float, int, np.number, NDArray[np.number]]
Upper bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- num_variables : int
Number of decision variables (problem dimensionality).
- elitism : bool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_population : Optional[NDArray[np.float64]], optional (default=None)
Initial population. If None, the population is randomly initialized. Shape should be (pop_size, num_variables).
- genotype_to_phenotype : Optional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, the genotype equals the phenotype.
- optimal_value : Optional[float], optional (default=None)
Known optimal value for termination. The algorithm stops if this value is reached.
- termination_error_value : float, optional (default=0.0)
Acceptable error from the optimal value for termination.
- no_increase_num : Optional[int], optional (default=None)
Stop if there is no improvement for this many iterations. If None, all iterations are run.
- minimization : bool, optional (default=False)
If True, minimize the fitness function; if False, maximize it.
- show_progress_each : Optional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_history : bool, optional (default=False)
If True, keeps a history of all populations, fitness values, and parameter histories.
- n_jobs : int, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_args : Optional[Dict], optional (default=None)
Additional arguments to pass to the fitness function.
- genotype_to_phenotype_args : Optional[Dict], optional (default=None)
Additional arguments to pass to the genotype_to_phenotype function.
- random_state : Optional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generation : Optional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_eps : float, optional (default=0.0)
Minimum improvement threshold for a solution to be considered better.
- Attributes:
- _H_F : NDArray[np.float64]
Historical memory for mutation factor F, initialized to 0.5.
- _H_CR : NDArray[np.float64]
Historical memory for crossover rate CR, initialized to 0.5.
- _H_size : int
Size of the historical memory (equal to pop_size).
- _k : int
Current index in the historical memory.
- _p : float
Proportion of the population used for p-best selection (default 0.05).
- _population_g_archive_i : NDArray[np.float64]
Archive of replaced solutions used in mutation.
Methods
fit()
Execute the evolutionary optimization process.
get_fittest()
Get the best solution found.
get_stats()
Get statistics collected during optimization (includes H_F and H_CR histories).
get_remains_calls()
Get the number of remaining fitness function calls.
Notes
SHADE uses:
- Current-to-pbest/1 mutation strategy with archive
- Adaptive F parameter sampled from a Cauchy distribution
- Adaptive CR parameter sampled from a normal distribution
- Success-history based parameter adaptation using the Lehmer mean
- External archive of inferior solutions
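The following is a schematic NumPy sketch of the parameter sampling and memory update described above (after Tanabe & Fukunaga [1]). It is not the library's internal implementation: the archive, the current-to-pbest/1 mutation, and the CR memory update are omitted, and the handling of out-of-range F values is simplified.

import numpy as np

rng = np.random.default_rng(0)
H = 10                        # size of the historical memory (_H_size, equal to pop_size here)
M_F = np.full(H, 0.5)         # historical memory for F (_H_F), initialized to 0.5
M_CR = np.full(H, 0.5)        # historical memory for CR (_H_CR), initialized to 0.5

def sample_parameters(rng):
    # Each individual draws its F and CR from a randomly chosen memory cell r
    r = rng.integers(H)
    F = M_F[r] + 0.1 * rng.standard_cauchy()                  # Cauchy-distributed F around M_F[r]
    F = min(abs(F), 1.0)                                      # simplified: fold/clip into (0, 1]
    CR = float(np.clip(rng.normal(M_CR[r], 0.1), 0.0, 1.0))   # normal CR around M_CR[r]
    return F, CR

def lehmer_mean(successful_F, weights):
    # Weighted Lehmer mean used to update one cell of the F memory from successful F values
    successful_F = np.asarray(successful_F, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * successful_F**2) / np.sum(weights * successful_F)

F_i, CR_i = sample_parameters(rng)
M_F[0] = lehmer_mean([0.4, 0.7, 0.9], [1.0, 2.0, 0.5])  # toy memory update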
References
[1] Tanabe, Ryoji & Fukunaga, Alex (2013). Success-History Based Parameter Adaptation for Differential Evolution. 2013 IEEE Congress on Evolutionary Computation (CEC 2013), 71-78. doi:10.1109/CEC.2013.6557555.
Examples
>>> from thefittest.optimizers import SHADE
>>>
>>> # Define a custom optimization problem
>>> def custom_problem(x):
...     return (5 - x[:, 0])**2 + (12 - x[:, 1])**2
>>>
>>> # Set up problem parameters
>>> n_dimension = 2
>>> left_border = -100.
>>> right_border = 100.
>>> number_of_generations = 100
>>> population_size = 100
>>>
>>> # Create SHADE optimizer
>>> optimizer = SHADE(
...     fitness_function=custom_problem,
...     iters=number_of_generations,
...     pop_size=population_size,
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     show_progress_each=10,
...     minimization=True
... )
>>>
>>> # Run optimization
>>> optimizer.fit()
>>>
>>> # Get results
>>> fittest = optimizer.get_fittest()
>>> print('The fittest individ:', fittest['phenotype'])
>>> print('with fitness', fittest['fitness'])
- class thefittest.optimizers.SHAGA(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, str_len: int, elitism: bool = True, init_population: ndarray[tuple[Any, ...], dtype[int8]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[int8]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Stanovov, Vladimir & Akhmedova, Shakhnaz & Semenkin, Eugene (2019). Genetic Algorithm with Success History based Parameter Adaptation. 180-187. doi:10.5220/0008071201800187.
- static binary_string_population(pop_size: int, str_len: int) ndarray[tuple[Any, ...], dtype[int8]]
- class thefittest.optimizers.SelfCGA(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, str_len: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('empty', 'one_point', 'two_point', 'uniform_2', 'uniform_7', 'uniform_prop_2', 'uniform_prop_7', 'uniform_rank_2', 'uniform_rank_7', 'uniform_tour_3', 'uniform_tour_7'), mutations: Tuple[str, ...] = ('weak', 'average', 'strong'), init_population: ndarray[tuple[Any, ...], dtype[int8]] | None = None, K: float = 2, selection_threshold_proba: float = 0.05, crossover_threshold_proba: float = 0.05, mutation_threshold_proba: float = 0.05, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[int8]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Semenkin, E.S., Semenkina, M.E. Self-configuring Genetic Algorithm with Modified Uniform Crossover Operator. LNCS, 7331, 2012, pp. 414-421. https://doi.org/10.1007/978-3-642-30976-2_50
- class thefittest.optimizers.SelfCGP(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], uniset: UniversalSet, iters: int, pop_size: int, tour_size: int = 2, mutation_rate: float = 0.05, parents_num: int = 2, elitism: bool = True, selections: Tuple[str, ...] = ('proportional', 'rank', 'tournament_3', 'tournament_5', 'tournament_7'), crossovers: Tuple[str, ...] = ('gp_standard', 'gp_one_point', 'gp_uniform_rank_2'), mutations: Tuple[str, ...] = ('gp_weak_point', 'gp_average_point', 'gp_strong_point', 'gp_weak_grow', 'gp_average_grow', 'gp_strong_grow'), max_level: int = 16, init_level: int = 4, init_population: ndarray[tuple[Any, ...], dtype[_ScalarT]] | None = None, K: float = 2, selection_threshold_proba: float = 0.05, crossover_threshold_proba: float = 0.05, mutation_threshold_proba: float = 0.05, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[_ScalarT]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Semenkin, Eugene & Semenkina, Maria. (2012). Self-configuring genetic programming algorithm with modified uniform crossover. 1-6. http://dx.doi.org/10.1109/CEC.2012.6256587
- class thefittest.optimizers.jDE(fitness_function: Callable[[ndarray[tuple[Any, ...], dtype[Any]]], ndarray[tuple[Any, ...], dtype[float64]]], iters: int, pop_size: int, left_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], right_border: float | int | number | ndarray[tuple[Any, ...], dtype[number]], num_variables: int, mutation: str = 'rand_1', F_min: float = 0.1, F_max: float = 0.9, t_F: float = 0.1, t_CR: float = 0.1, elitism: bool = True, init_population: ndarray[tuple[Any, ...], dtype[float64]] | None = None, genotype_to_phenotype: Callable[[ndarray[tuple[Any, ...], dtype[float64]]], ndarray[tuple[Any, ...], dtype[Any]]] | None = None, optimal_value: float | None = None, termination_error_value: float = 0.0, no_increase_num: int | None = None, minimization: bool = False, show_progress_each: int | None = None, keep_history: bool = False, n_jobs: int = 1, fitness_function_args: Dict | None = None, genotype_to_phenotype_args: Dict | None = None, random_state: int | RandomState | None = None, on_generation: Callable | None = None, fitness_update_eps: float = 0.0)
Self-adaptive Differential Evolution with control parameter adaptation.
jDE (self-adaptive Differential Evolution) is a variant of DE that self-adapts the control parameters F (mutation factor) and CR (crossover rate) during evolution. Each individual has its own F and CR values that evolve along with the solution, allowing the algorithm to automatically tune these parameters for the problem at hand.
- Parameters:
- fitness_function : Callable[[NDArray[Any]], NDArray[np.float64]]
Function to evaluate fitness of solutions. Should accept a 2D array of shape (pop_size, num_variables) and return a 1D array of fitness values of shape (pop_size,).
- iters : int
Maximum number of iterations (generations) to run the algorithm.
- pop_size : int
Number of individuals in the population.
- left_border : Union[float, int, np.number, NDArray[np.number]]
Lower bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- right_border : Union[float, int, np.number, NDArray[np.number]]
Upper bound(s) for decision variables. Can be a scalar (same bound for all variables) or an array of shape (num_variables,).
- num_variables : int
Number of decision variables (problem dimensionality).
- mutation : str, optional (default="rand_1")
Mutation strategy to use. See DifferentialEvolution for available strategies.
- F_min : float, optional (default=0.1)
Minimum value for mutation factor F.
- F_max : float, optional (default=0.9)
Maximum value for mutation factor F.
- t_F : float, optional (default=0.1)
Probability of updating the F parameter for each individual.
- t_CR : float, optional (default=0.1)
Probability of updating the CR parameter for each individual.
- elitism : bool, optional (default=True)
If True, the best solution is always preserved in the next generation.
- init_population : Optional[NDArray[np.float64]], optional (default=None)
Initial population. If None, the population is randomly initialized. Shape should be (pop_size, num_variables).
- genotype_to_phenotype : Optional[Callable], optional (default=None)
Function to decode genotype to phenotype. If None, the genotype equals the phenotype.
- optimal_value : Optional[float], optional (default=None)
Known optimal value for termination. The algorithm stops if this value is reached.
- termination_error_value : float, optional (default=0.0)
Acceptable error from the optimal value for termination.
- no_increase_num : Optional[int], optional (default=None)
Stop if there is no improvement for this many iterations. If None, all iterations are run.
- minimization : bool, optional (default=False)
If True, minimize the fitness function; if False, maximize it.
- show_progress_each : Optional[int], optional (default=None)
Print progress every N iterations. If None, no progress is shown.
- keep_history : bool, optional (default=False)
If True, keeps a history of all populations, fitness values, and F/CR parameters.
- n_jobs : int, optional (default=1)
Number of parallel jobs for fitness evaluation. -1 uses all processors.
- fitness_function_args : Optional[Dict], optional (default=None)
Additional arguments to pass to the fitness function.
- genotype_to_phenotype_args : Optional[Dict], optional (default=None)
Additional arguments to pass to the genotype_to_phenotype function.
- random_state : Optional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- on_generation : Optional[Callable], optional (default=None)
Callback function called after each generation.
- fitness_update_eps : float, optional (default=0.0)
Minimum improvement threshold for a solution to be considered better.
- Attributes:
- _F_min : float
Minimum value for the mutation factor.
- _F_max : float
Maximum value for the mutation factor.
- _t_F : float
Probability of updating the F parameter.
- _t_CR : float
Probability of updating the CR parameter.
- _F : NDArray[np.float64]
Array of F values for each individual, initialized to 0.5.
- _CR : NDArray[np.float64]
Array of CR values for each individual, initialized to 0.9.
Methods
fit()
Execute the evolutionary optimization process.
get_fittest()
Get the best solution found.
get_stats()
Get statistics collected during optimization (includes F and CR histories).
get_remains_calls()
Get the number of remaining fitness function calls.
Notes
The self-adaptation mechanism works as follows:
- Each individual has its own F and CR parameters.
- With probability t_F, F is randomly regenerated from [F_min, F_max].
- With probability t_CR, CR is randomly regenerated from [0, 1].
- If the mutated individual is better, it inherits the adapted parameters.
- Otherwise, the old parameters are preserved.
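A minimal NumPy sketch of this per-individual update rule (illustrative only; the mutation, crossover, and selection steps, and the actual inheritance of the winning parameters, are not shown):

import numpy as np

def jde_parameter_update(F, CR, rng, F_min=0.1, F_max=0.9, t_F=0.1, t_CR=0.1):
    # With probability t_F, redraw each individual's trial F uniformly from [F_min, F_max];
    # with probability t_CR, redraw its trial CR uniformly from [0, 1]; otherwise keep the
    # current values. Trial values are kept only if the trial vector wins selection.
    F = np.asarray(F, dtype=float)
    CR = np.asarray(CR, dtype=float)
    redraw_F = rng.random(F.shape) < t_F
    redraw_CR = rng.random(CR.shape) < t_CR
    F_trial = np.where(redraw_F, F_min + rng.random(F.shape) * (F_max - F_min), F)
    CR_trial = np.where(redraw_CR, rng.random(CR.shape), CR)
    return F_trial, CR_trial

rng = np.random.default_rng(0)
F = np.full(100, 0.5)    # per-individual F, initialized to 0.5 (see Attributes)
CR = np.full(100, 0.9)   # per-individual CR, initialized to 0.9 (see Attributes)
F_trial, CR_trial = jde_parameter_update(F, CR, rng)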
References
[1] Brest, Janez & Greiner, Sašo & Bošković, Borko & Mernik, Marjan & Žumer, Viljem (2006). Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Transactions on Evolutionary Computation, 10(6), 646-657. doi:10.1109/TEVC.2006.872133.
Examples
>>> from thefittest.benchmarks import Rastrigin
>>> from thefittest.optimizers import jDE
>>>
>>> # Define problem parameters
>>> n_dimension = 30
>>> left_border = -5.12
>>> right_border = 5.12
>>> number_of_generations = 200
>>> population_size = 100
>>>
>>> # Create jDE optimizer with self-adaptive parameters
>>> optimizer = jDE(
...     fitness_function=Rastrigin(),
...     iters=number_of_generations,
...     pop_size=population_size,
...     left_border=left_border,
...     right_border=right_border,
...     num_variables=n_dimension,
...     mutation="rand_1",
...     F_min=0.1,
...     F_max=0.9,
...     t_F=0.1,
...     t_CR=0.1,
...     minimization=True,
...     show_progress_each=20,
...     keep_history=True
... )
>>>
>>> # Run optimization
>>> optimizer.fit()
>>>
>>> # Get results
>>> fittest = optimizer.get_fittest()
>>> stats = optimizer.get_stats()
>>>
>>> print('The fittest individ:', fittest['phenotype'])
>>> print('with fitness', fittest['fitness'])
>>> print('Final F parameters:', stats['F'][-1])
>>> print('Final CR parameters:', stats['CR'][-1])