regressors
The library provides several regressor implementations based on evolutionary algorithms. These regressors can perform symbolic regression, optimize neural network weights, and evolve network architectures for continuous value prediction.
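All of them follow the scikit-learn estimator interface (fit, predict, score). A minimal import sketch (class names as documented in the sections below):
>>> from thefittest.regressors import (
...     GeneticProgrammingRegressor,           # symbolic regression
...     MLPEARegressor,                        # MLP with EA-trained weights
...     GeneticProgrammingNeuralNetRegressor,  # evolved architecture and weights
... )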
Genetic Programming Regressors
Genetic Programming regressors evolve symbolic expressions or tree structures to perform regression. They can discover interpretable mathematical models and handle complex non-linear relationships.
Reference: Koza, J. R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press (Complex Adaptive Systems series).
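For orientation, an evolved individual is a tree built from the functional set and the input variables. As a hand-written illustration (not actual library output, and the x0 variable name is assumed), the target \(0.5\, x_0 \sin(3 x_0)\) used in the example below could be encoded as mul(mul(x0, 0.5), sin(mul(x0, 3))).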
| Regressor | Description |
|---|---|
| GeneticProgrammingRegressor | GP-based regressor evolving symbolic expressions for explicit functional relationships |
GeneticProgrammingRegressor
- class thefittest.regressors.GeneticProgrammingRegressor(*, n_iter: int = 300, pop_size: int = 1000, functional_set_names: ~typing.Tuple[str, ...] = ('cos', 'sin', 'add', 'sub', 'mul', 'div'), optimizer: ~typing.Type[~thefittest.optimizers._selfcgp.SelfCGP] | ~typing.Type[~thefittest.optimizers._geneticprogramming.GeneticProgramming] = <class 'thefittest.optimizers._selfcgp.SelfCGP'>, optimizer_args: dict[str, ~typing.Any] | None = None, random_state: int | ~numpy.random.mtrand.RandomState | None = None, use_fitness_cache: bool = False)
Bases: RegressorMixin, BaseGP
Genetic Programming-based regressor using evolved symbolic expressions.
This regressor evolves mathematical expressions (trees) to perform symbolic regression by learning explicit functional relationships between input features and target values.
- Parameters:
- n_iterint, optional (default=300)
Number of iterations (generations) for the GP optimization.
- pop_sizeint, optional (default=1000)
Population size for the genetic programming algorithm.
- functional_set_namesTuple[str, …], optional
Tuple of function names to use in evolved expressions. Default: (‘cos’, ‘sin’, ‘add’, ‘sub’, ‘mul’, ‘div’) Available functions: ‘cos’, ‘sin’, ‘add’, ‘sub’, ‘mul’, ‘div’, ‘abs’, ‘logabs’, ‘exp’, ‘sqrtabs’.
- optimizerType[Union[SelfCGP, GeneticProgramming, PDPGP]], optional (default=SelfCGP)
Genetic programming optimizer class to use. Available: SelfCGP (self-configuring), GeneticProgramming (standard), or PDPGP (with dynamic operator probabilities).
- optimizer_argsOptional[dict], optional (default=None)
Additional arguments passed to the optimizer (excluding n_iter and pop_size). Common args: {‘show_progress_each’: 10, ‘max_level’: 5}
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- use_fitness_cachebool, optional (default=False)
If True, caches fitness evaluations to avoid redundant computations.
Notes
The regressor evolves symbolic expressions that explicitly represent the relationship between inputs and outputs. The resulting model is interpretable and can be analyzed mathematically.
Examples
Symbolic Regression with Visualization
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from thefittest.regressors import GeneticProgrammingRegressor
>>> from thefittest.optimizers import PDPGP
>>>
>>> # Define the true function
>>> def problem(x):
...     return np.sin(x[:, 0] * 3) * x[:, 0] * 0.5
>>>
>>> # Generate training data
>>> n_dimension = 2
>>> left_border = -4.5
>>> right_border = 4.5
>>> sample_size = 100
>>>
>>> X = np.array([np.linspace(left_border, right_border, sample_size)
...               for _ in range(n_dimension)]).T
>>> y = problem(X)
>>>
>>> # Train the model
>>> model = GeneticProgrammingRegressor(
...     n_iter=500,
...     pop_size=1000,
...     optimizer=PDPGP,
...     optimizer_args={'show_progress_each': 10, 'max_level': 5}
... )
>>> model.fit(X, y)
>>> predict = model.predict(X)
>>>
>>> # Get the evolved symbolic expression
>>> tree = model.get_tree()
>>> print('Evolved expression:', tree)
>>>
>>> # Visualize results
>>> fig, ax = plt.subplots(figsize=(14, 7), ncols=2, nrows=1)
>>> ax[0].plot(X[:, 0], y, label='True y')
>>> ax[0].plot(X[:, 0], predict, label='Predicted y')
>>> ax[0].legend()
>>> tree.plot(ax=ax[1])
>>> plt.tight_layout()
>>> plt.show()
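Using scikit-learn Utilities
Because the regressor exposes the standard estimator API (fit, predict, score, get_params, set_params, documented below), it can in principle be combined with scikit-learn model-selection tools. A hedged sketch, reusing X and y from above (small n_iter and pop_size chosen only to keep the run short, not as recommended values):
>>> from sklearn.model_selection import cross_val_score
>>> small_model = GeneticProgrammingRegressor(n_iter=20, pop_size=100)
>>> scores = cross_val_score(small_model, X, y, cv=3)  # scoring uses the R^2 score method
>>> print(scores.mean())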
- Attributes:
- tree_Tree
The evolved tree expression representing the symbolic model.
- n_features_in_int
Number of features seen during fit.
- __init__(*, n_iter: int = 300, pop_size: int = 1000, functional_set_names: ~typing.Tuple[str, ...] = ('cos', 'sin', 'add', 'sub', 'mul', 'div'), optimizer: ~typing.Type[~thefittest.optimizers._selfcgp.SelfCGP] | ~typing.Type[~thefittest.optimizers._geneticprogramming.GeneticProgramming] = <class 'thefittest.optimizers._selfcgp.SelfCGP'>, optimizer_args: dict[str, ~typing.Any] | None = None, random_state: int | ~numpy.random.mtrand.RandomState | None = None, use_fitness_cache: bool = False)
- predict(X: ndarray[Any, dtype[float64]])
Predict target values for X using the evolved symbolic expression.
- Parameters:
- XNDArray[np.float64], shape (n_samples, n_features)
Input samples.
- Returns:
- y_predndarray, shape (n_samples,)
Predicted target values.
- fit(X: ArrayLike, y: ArrayLike)
Fit the symbolic regression model to training samples X and target values y.
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- get_stats() Statistics
Get optimization statistics from the training process.
- Returns:
- statsStatistics
Statistics object containing fitness history and other metrics collected during the evolutionary optimization process.
- get_tree() Tree
Get the evolved tree expression.
- Returns:
- treeTree
The best evolved tree representing the symbolic expression. For classification, this is the decision tree. For regression, this is the functional approximation.
- score(X, y, sample_weight=None)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares
((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters:
- Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
- yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
- sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- scorefloat
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
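For reference, a small NumPy sketch of this definition (illustrative only; in practice call score() or sklearn.metrics.r2_score):
>>> import numpy as np
>>> y_true = np.array([1.0, 2.0, 3.0, 4.0])
>>> y_pred = np.array([1.1, 1.9, 3.2, 3.8])
>>> u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
>>> v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
>>> print(1 - u / v)                           # 0.98 for this toy data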
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
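A hedged usage sketch of get_params / set_params (parameter names taken from the constructor above):
>>> model = GeneticProgrammingRegressor()
>>> _ = model.set_params(n_iter=100, pop_size=200)  # returns the estimator itself
>>> model.get_params()['n_iter']
100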
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') GeneticProgrammingRegressor
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- selfobject
The updated object.
Neural Network Regressors
Neural network regressors combine traditional neural architectures with evolutionary optimization. Instead of gradient descent, they use evolutionary algorithms to train networks or evolve architectures.
| Regressor | Description |
|---|---|
| MLPEARegressor | Multi-Layer Perceptron with evolutionary algorithm-based weight optimization (Cotta et al., 2002) |
| GeneticProgrammingNeuralNetRegressor | Neural network with GP-evolved architecture and EA-optimized weights (Lipinsky & Semenkin, 2006) |
MLPEARegressor
- class thefittest.regressors.MLPEARegressor(*, n_iter: int = 100, pop_size: int = 500, hidden_layers: ~typing.Tuple[int, ...] = (100, ), activation: str = 'sigma', offset: bool = True, weights_optimizer: ~typing.Type[~thefittest.optimizers._differentialevolution.DifferentialEvolution] | ~typing.Type[~thefittest.optimizers._jde.jDE] | ~typing.Type[~thefittest.optimizers._shade.SHADE] | ~typing.Type[~thefittest.optimizers._geneticalgorithm.GeneticAlgorithm] | ~typing.Type[~thefittest.optimizers._selfcga.SelfCGA] | ~typing.Type[~thefittest.optimizers._shaga.SHAGA] | ~typing.Type[~torch.optim.optimizer.Optimizer] = <class 'thefittest.optimizers._shade.SHADE'>, weights_optimizer_args: dict[str, ~typing.Any] | None = None, random_state: int | ~numpy.random.mtrand.RandomState | None = None, device: str = 'cpu')
Bases: RegressorMixin, BaseMLPEA
Multi-Layer Perceptron regressor with Evolutionary Algorithm-based training.
This regressor uses evolutionary algorithms to optimize neural network weights instead of traditional gradient-based methods. It’s particularly useful when gradient information is unavailable or unreliable.
- Parameters:
- n_iterint, optional (default=100)
Number of iterations (generations) for weight optimization.
- pop_sizeint, optional (default=500)
Population size for the evolutionary algorithm.
- hidden_layersTuple[int, …], optional (default=(100,))
Tuple specifying the number of neurons in each hidden layer. Empty tuple or (0,) means no hidden layers (linear model). Example: (15, 15) creates two hidden layers with 15 neurons each.
- activationstr, optional (default=”sigma”)
Activation function for hidden layers. Available: ‘sigma’ (sigmoid), ‘relu’, ‘gauss’ (Gaussian), ‘tanh’, ‘ln’ (natural logarithm normalization), ‘softmax’.
- offsetbool, optional (default=True)
If True, adds bias terms to the network.
- weights_optimizerType, optional (default=SHADE)
Evolutionary algorithm class for optimizing weights, or PyTorch optimizer. Available EA: SHADE, jDE, DifferentialEvolution, SHAGA, etc. Available torch.optim: Adam, SGD, RMSprop, etc. Note: When using torch.optim optimizers, pop_size parameter is ignored.
- weights_optimizer_argsOptional[dict], optional (default=None)
Additional arguments passed to the weights optimizer (excluding n_iter and pop_size). For EA optimizers, e.g. {'show_progress_each': 10}; for torch.optim, e.g. {'lr': 0.01, 'weight_decay': 0.0001}. Note: use 'epochs' or 'iters' to set the number of training iterations for torch.optim.
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- devicestr, optional (default=”cpu”)
Device for PyTorch computations: ‘cpu’ or ‘cuda’.
Notes
Requires PyTorch. Install with: pip install thefittest[torch]
The regressor uses evolutionary algorithms to find optimal network weights, which can be more robust to local minima compared to gradient descent but may require more function evaluations.
Examples
Regression with Noisy Data
>>> import numpy as np
>>> from thefittest.regressors import MLPEARegressor
>>> from thefittest.optimizers import SHAGA
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.preprocessing import scale
>>> from sklearn.metrics import r2_score
>>>
>>> # Define the true function
>>> def problem(x):
...     return np.sin(x[:, 0] * 3) * x[:, 0] * 0.5
>>>
>>> # Generate training data with noise
>>> n_dimension = 1
>>> left_border = -4.5
>>> right_border = 4.5
>>> sample_size = 100
>>>
>>> X = np.array([np.linspace(left_border, right_border, sample_size)
...               for _ in range(n_dimension)]).T
>>> noise = np.random.normal(0, 0.1, size=sample_size)
>>> y = problem(X) + noise
>>>
>>> X_scaled = scale(X)
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X_scaled, y, test_size=0.1
... )
>>>
>>> # Train the model
>>> model = MLPEARegressor(
...     n_iter=500,
...     pop_size=500,
...     hidden_layers=[15, 15],
...     weights_optimizer=SHAGA,
...     weights_optimizer_args={'show_progress_each': 10}
... )
>>> model.fit(X_train, y_train)
>>> predict = model.predict(X_test)
>>>
>>> print("r2_score:", r2_score(y_test, predict))
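Using a PyTorch Optimizer for Weight Training
A hedged sketch, reusing the data from above and assuming the torch extra is installed; it follows the weights_optimizer and weights_optimizer_args descriptions above (pop_size is ignored and 'iters' sets the number of training iterations):
>>> import torch
>>>
>>> torch_model = MLPEARegressor(
...     hidden_layers=(15, 15),
...     weights_optimizer=torch.optim.Adam,
...     weights_optimizer_args={'iters': 500, 'lr': 0.01},
...     device="cuda" if torch.cuda.is_available() else "cpu"
... )
>>> torch_model.fit(X_train, y_train)
>>> print("r2_score:", r2_score(y_test, torch_model.predict(X_test)))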
- Attributes:
- net_torch.nn.Module
The trained neural network.
- n_features_in_int
Number of features seen during fit.
- __init__(*, n_iter: int = 100, pop_size: int = 500, hidden_layers: ~typing.Tuple[int, ...] = (100, ), activation: str = 'sigma', offset: bool = True, weights_optimizer: ~typing.Type[~thefittest.optimizers._differentialevolution.DifferentialEvolution] | ~typing.Type[~thefittest.optimizers._jde.jDE] | ~typing.Type[~thefittest.optimizers._shade.SHADE] | ~typing.Type[~thefittest.optimizers._geneticalgorithm.GeneticAlgorithm] | ~typing.Type[~thefittest.optimizers._selfcga.SelfCGA] | ~typing.Type[~thefittest.optimizers._shaga.SHAGA] | ~typing.Type[~torch.optim.optimizer.Optimizer] = <class 'thefittest.optimizers._shade.SHADE'>, weights_optimizer_args: dict[str, ~typing.Any] | None = None, random_state: int | ~numpy.random.mtrand.RandomState | None = None, device: str = 'cpu')
- predict(X: ArrayLike)
Predict target values for X using the trained neural network.
- Parameters:
- XArrayLike, shape (n_samples, n_features)
Input samples.
- Returns:
- y_predndarray, shape (n_samples,)
Predicted target values.
- fit(X: ArrayLike, y: ArrayLike)
Fit the neural network regressor to training samples X and target values y.
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_net() Net
Get the trained neural network.
- Returns:
- netNet
The trained neural network with optimized weights. Can be used for visualization or further analysis.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- get_stats() Statistics
Get optimization statistics from the weight training process.
- Returns:
- statsStatistics
Statistics object containing fitness history and other metrics collected during the weight optimization process. Returns None if torch.optim optimizer was used.
- score(X, y, sample_weight=None)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares
((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters:
- Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
- yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
- sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- scorefloat
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') MLPEARegressor
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- selfobject
The updated object.
GeneticProgrammingNeuralNetRegressor
- class thefittest.regressors.GeneticProgrammingNeuralNetRegressor(*, n_iter: int = 15, pop_size: int = 50, input_block_size: int = 1, max_hidden_block_size: int = 9, offset: bool = True, test_sample_ratio: float = 0.5, optimizer: ~typing.Type[~thefittest.optimizers._selfcgp.SelfCGP] | ~typing.Type[~thefittest.optimizers._geneticprogramming.GeneticProgramming] = <class 'thefittest.optimizers._selfcgp.SelfCGP'>, optimizer_args: dict[str, ~typing.Any] | None = None, weights_optimizer: ~typing.Type[~thefittest.optimizers._differentialevolution.DifferentialEvolution] | ~typing.Type[~thefittest.optimizers._jde.jDE] | ~typing.Type[~thefittest.optimizers._shade.SHADE] | ~typing.Type[~thefittest.optimizers._geneticalgorithm.GeneticAlgorithm] | ~typing.Type[~thefittest.optimizers._selfcga.SelfCGA] | ~typing.Type[~thefittest.optimizers._shaga.SHAGA] | ~typing.Type[~torch.optim.optimizer.Optimizer] = <class 'thefittest.optimizers._shade.SHADE'>, weights_optimizer_args: dict[str, ~typing.Any] | None = None, net_size_penalty: float = 0.0, random_state: int | ~numpy.random.mtrand.RandomState | None = None, device: str = 'cpu', use_fitness_cache: bool = False, fitness_cache_size: int = 1000)
Bases: RegressorMixin, BaseGPNN
Genetic Programming-based Neural Network regressor with evolved architecture.
This regressor evolves both the neural network architecture and weights using genetic programming. The network structure is represented as a tree, and weights are optimized using evolutionary algorithms or gradient-based optimizers.
- Parameters:
- n_iterint, optional (default=15)
Number of iterations (generations) for architecture evolution.
- pop_sizeint, optional (default=50)
Population size for the architecture evolution.
- input_block_sizeint, optional (default=1)
Size of input processing blocks.
- max_hidden_block_sizeint, optional (default=9)
Maximum size of hidden layer blocks.
- offsetbool, optional (default=True)
If True, adds bias terms to the network.
- test_sample_ratiofloat, optional (default=0.5)
Ratio of data to use for validation during evolution.
- optimizerType[Union[SelfCGP, GeneticProgramming, PDPGP]], optional (default=SelfCGP)
Genetic programming optimizer for evolving architecture. Available: SelfCGP, GeneticProgramming, or PDPGP.
- optimizer_argsOptional[dict], optional (default=None)
Additional arguments for the architecture optimizer (excluding n_iter and pop_size).
- weights_optimizerType, optional (default=SHADE)
Optimizer for network weights. Can be evolutionary algorithm or torch.optim optimizer. Available: SHADE, SHAGA, jDE, or torch.optim.Adam, torch.optim.SGD, etc.
- weights_optimizer_argsOptional[dict], optional (default=None)
Additional arguments for the weights optimizer. For EA optimizers, e.g. {'iters': 150, 'pop_size': 150, 'show_progress_each': 10}; for torch.optim, e.g. {'iters': 1000, 'lr': 0.01} (pop_size is ignored).
- net_size_penaltyfloat, optional (default=0.0)
Penalty coefficient for network complexity (larger = simpler networks).
- random_stateOptional[Union[int, np.random.RandomState]], optional (default=None)
Random state for reproducibility.
- devicestr, optional (default=”cpu”)
Device for PyTorch computations: ‘cpu’ or ‘cuda’.
- use_fitness_cachebool, optional (default=False)
If True, caches fitness evaluations.
- fitness_cache_sizeint, optional (default=1000)
Maximum size of fitness cache.
Notes
Requires PyTorch. Install with: pip install thefittest[torch]
This is a two-stage optimization: first, GP evolves the network architecture, then for each architecture, weights are optimized using either evolutionary algorithms or gradient-based methods. This can discover novel network structures but is computationally intensive.
Examples
Regression with Evolved Architecture and EA Optimizer
>>> import numpy as np
>>> from thefittest.regressors import GeneticProgrammingNeuralNetRegressor
>>> from thefittest.optimizers import PDPGP, SHAGA
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.preprocessing import scale
>>> from sklearn.metrics import r2_score
>>>
>>> # Define the problem
>>> def problem(x):
...     return np.sin(x[:, 0] * 3) * x[:, 0] * 0.5
>>>
>>> # Generate data
>>> n_dimension = 1
>>> sample_size = 100
>>> X = np.array([np.linspace(-4.5, 4.5, sample_size)
...               for _ in range(n_dimension)]).T
>>> noise = np.random.normal(0, 0.1, size=sample_size)
>>> y = problem(X) + noise
>>>
>>> X_scaled = scale(X)
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X_scaled, y, test_size=0.1
... )
>>>
>>> # Train with evolutionary algorithm for weights
>>> model = GeneticProgrammingNeuralNetRegressor(
...     n_iter=10,
...     pop_size=10,
...     optimizer=PDPGP,
...     optimizer_args={'show_progress_each': 1},
...     weights_optimizer=SHAGA,
...     weights_optimizer_args={'iters': 150, 'pop_size': 150}
... )
>>> model.fit(X_train, y_train)
>>> predict = model.predict(X_test)
>>> print("r2_score:", r2_score(y_test, predict))
Using PyTorch Optimizer (Adam) for Weight Training
>>> import torch
>>> import torch.optim as optim
>>>
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = GeneticProgrammingNeuralNetRegressor(
...     n_iter=10,
...     pop_size=10,
...     optimizer=PDPGP,
...     optimizer_args={'show_progress_each': 1},
...     weights_optimizer=optim.Adam,
...     weights_optimizer_args={'iters': 1000, 'lr': 0.01},
...     device=device
... )
>>> model.fit(X_train, y_train)
>>> predict = model.predict(X_test)
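Inspecting the Evolved Architecture
After fitting, the evolved structure and search statistics can be retrieved through the accessor methods documented below (a minimal sketch reusing the fitted model from above):
>>> tree = model.get_tree()    # tree encoding of the evolved network architecture
>>> print('Evolved architecture:', tree)
>>> net = model.get_net()      # the trained network with optimized weights
>>> stats = model.get_stats()  # fitness history from the architecture search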
- Attributes:
- net_torch.nn.Module
The evolved and trained neural network.
- tree_Tree
Tree representation of the evolved architecture.
- n_features_in_int
Number of features seen during fit.
- __init__(*, n_iter: int = 15, pop_size: int = 50, input_block_size: int = 1, max_hidden_block_size: int = 9, offset: bool = True, test_sample_ratio: float = 0.5, optimizer: ~typing.Type[~thefittest.optimizers._selfcgp.SelfCGP] | ~typing.Type[~thefittest.optimizers._geneticprogramming.GeneticProgramming] = <class 'thefittest.optimizers._selfcgp.SelfCGP'>, optimizer_args: dict[str, ~typing.Any] | None = None, weights_optimizer: ~typing.Type[~thefittest.optimizers._differentialevolution.DifferentialEvolution] | ~typing.Type[~thefittest.optimizers._jde.jDE] | ~typing.Type[~thefittest.optimizers._shade.SHADE] | ~typing.Type[~thefittest.optimizers._geneticalgorithm.GeneticAlgorithm] | ~typing.Type[~thefittest.optimizers._selfcga.SelfCGA] | ~typing.Type[~thefittest.optimizers._shaga.SHAGA] | ~typing.Type[~torch.optim.optimizer.Optimizer] = <class 'thefittest.optimizers._shade.SHADE'>, weights_optimizer_args: dict[str, ~typing.Any] | None = None, net_size_penalty: float = 0.0, random_state: int | ~numpy.random.mtrand.RandomState | None = None, device: str = 'cpu', use_fitness_cache: bool = False, fitness_cache_size: int = 1000)
- predict(X: ndarray[Any, dtype[float64]])
Predict target values using the evolved neural network.
- Parameters:
- XNDArray[np.float64], shape (n_samples, n_features)
Input samples.
- Returns:
- y_predndarray, shape (n_samples,)
Predicted target values.
- fit(X: ArrayLike, y: ArrayLike)
Fit the regressor by evolving a network architecture and optimizing its weights on training samples X and target values y.
- static genotype_to_phenotype_tree(tree: Tree, n_variables: int, n_outputs: int, output_activation: str, offset: bool) Net
Convert a genotype tree into the corresponding network phenotype (Net).
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routingMetadataRequest
A MetadataRequest encapsulating routing information.
- get_net() Net
Get the evolved and trained neural network.
- Returns:
- netNet
The neural network with GP-evolved architecture and optimized weights. Can be used for visualization or further analysis.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- paramsdict
Parameter names mapped to their values.
- get_stats() Statistics
Get optimization statistics from the architecture evolution process.
- Returns:
- statsStatistics
Statistics object containing fitness history and other metrics collected during the architecture evolution process.
- get_tree() Tree
Get the evolved tree representing the network architecture.
- Returns:
- treeTree
The tree expression that encodes the evolved neural network structure. Each node represents network layers and connections.
- score(X, y, sample_weight=None)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares
((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters:
- Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
- yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
- sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- scorefloat
\(R^2\) of self.predict(X) w.r.t. y.
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **paramsdict
Estimator parameters.
- Returns:
- selfestimator instance
Estimator instance.
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') GeneticProgrammingNeuralNetRegressor
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- selfobject
The updated object.