classifiers

The library provides several classifier implementations based on evolutionary algorithms. These classifiers can learn complex decision boundaries, evolve neural network architectures, and optimize network weights using evolutionary strategies.

Genetic Programming Classifiers

Genetic Programming classifiers evolve symbolic expressions or tree structures to perform classification. They can discover interpretable decision rules and handle non-linear separations.

Reference: Koza, J. R. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press.

Classifier                      Description
GeneticProgrammingClassifier    GP-based classifier evolving symbolic expressions for decision boundaries

GeneticProgrammingClassifier

class thefittest.classifiers.GeneticProgrammingClassifier(*, n_iter: int = 300, pop_size: int = 1000, functional_set_names: Tuple[str, ...] = ('cos', 'sin', 'add', 'sub', 'mul', 'div'), optimizer: Type[SelfCGP] | Type[GeneticProgramming] = SelfCGP, optimizer_args: dict[str, Any] | None = None, random_state: int | np.random.RandomState | None = None, use_fitness_cache: bool = False)

Bases: ClassifierMixin, BaseGP

Genetic Programming-based classifier using evolved symbolic expressions.

This classifier evolves mathematical expressions (trees) to perform classification by learning symbolic representations of decision boundaries. It can handle both binary and multi-class classification problems.

Parameters:
n_iter : int, optional (default=300)

Number of iterations (generations) for the GP optimization.

pop_size : int, optional (default=1000)

Population size for the genetic programming algorithm.

functional_set_names : Tuple[str, ...], optional

Tuple of function names to use in evolved expressions. Default: ('cos', 'sin', 'add', 'sub', 'mul', 'div'). Available functions: 'cos', 'sin', 'add', 'sub', 'mul', 'div', 'abs', 'logabs', 'exp', 'sqrtabs'. See the sketch after this parameter list.

optimizer : Type[Union[SelfCGP, GeneticProgramming, PDPGP]], optional (default=SelfCGP)

Genetic programming optimizer class to use. Available: SelfCGP (self-configuring), GeneticProgramming (standard), or PDPGP (with dynamic operator probabilities).

optimizer_args : Optional[dict], optional (default=None)

Additional arguments passed to the optimizer (excluding n_iter and pop_size). Common args: {'show_progress_each': 10, 'max_level': 5}.

random_state : Optional[Union[int, np.random.RandomState]], optional (default=None)

Random state for reproducibility.

use_fitness_cache : bool, optional (default=False)

If True, caches fitness evaluations to avoid redundant computations.
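
For instance, the functional set can be restricted to plain arithmetic when trigonometric primitives are not wanted; a minimal sketch using only the parameters documented above (values are illustrative):

>>> from thefittest.classifiers import GeneticProgrammingClassifier
>>>
>>> # Sketch: evolve expressions built from arithmetic primitives only
>>> model = GeneticProgrammingClassifier(
...     n_iter=50,
...     pop_size=200,
...     functional_set_names=('add', 'sub', 'mul', 'div'),
...     random_state=42,
... )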

Notes

The classifier evolves symbolic expressions that map input features to class probabilities using a sigmoid activation. For multi-class problems, it evolves one tree per class using a one-vs-rest approach.

Examples

Binary Classification

>>> from thefittest.classifiers import GeneticProgrammingClassifier
>>> from thefittest.optimizers import SelfCGP
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>>
>>> # Generate binary classification data
>>> X, y = make_classification(n_samples=100, n_features=4, n_classes=2)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
>>>
>>> # Create and train classifier
>>> model = GeneticProgrammingClassifier(
...     n_iter=100,
...     pop_size=500,
...     optimizer=SelfCGP,
...     optimizer_args={'show_progress_each': 20, 'max_level': 5}
... )
>>> model.fit(X_train, y_train)
>>>
>>> # Make predictions
>>> predictions = model.predict(X_test)
>>> probabilities = model.predict_proba(X_test)
>>> tree = model.get_tree()
>>> print('Evolved expression:', tree)
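
Multi-class Classification

A minimal sketch of the one-vs-rest behavior described in the Notes, assuming the same API as the binary example above (parameter values are illustrative only):

>>> from thefittest.classifiers import GeneticProgrammingClassifier
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>>
>>> # Generate a 3-class problem
>>> X, y = make_classification(
...     n_samples=150, n_features=6, n_informative=4, n_classes=3
... )
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
>>>
>>> model = GeneticProgrammingClassifier(n_iter=50, pop_size=200)
>>> model.fit(X_train, y_train)
>>>
>>> # One probability column per class; each row sums to 1
>>> probabilities = model.predict_proba(X_test)
>>> print('proba shape:', probabilities.shape)  # expected: (n_test_samples, 3)
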
__init__(*, n_iter: int = 300, pop_size: int = 1000, functional_set_names: Tuple[str, ...] = ('cos', 'sin', 'add', 'sub', 'mul', 'div'), optimizer: Type[SelfCGP] | Type[GeneticProgramming] = SelfCGP, optimizer_args: dict[str, Any] | None = None, random_state: int | np.random.RandomState | None = None, use_fitness_cache: bool = False)
predict_proba(X: ndarray[Any, dtype[float64]])

Predict class probabilities for X.

Parameters:
X : NDArray[np.float64], shape (n_samples, n_features)

Input samples.

Returns:
proba : NDArray[np.float64], shape (n_samples, n_classes)

Class probabilities for each sample. For binary classification: [[P(class=0), P(class=1)], ...]. For multi-class: [[P(class=0), ..., P(class=K-1)], ...].

predict(X: ndarray[Any, dtype[float64]])

Predict class labels for X.

Parameters:
X : NDArray[np.float64], shape (n_samples, n_features)

Input samples.

Returns:
y_pred : ndarray, shape (n_samples,)

Predicted class labels.

fit(X: ArrayLike, y: ArrayLike)

Fit the genetic programming classifier to the training data X and y.
get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_stats() → Statistics

Get optimization statistics from the training process.

Returns:
stats : Statistics

Statistics object containing fitness history and other metrics collected during the evolutionary optimization process.

get_tree() → Tree

Get the evolved tree expression.

Returns:
tree : Tree

The best evolved tree representing the symbolic expression. For classification this is the evolved decision expression; for regression, the functional approximation.

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → GeneticProgrammingClassifier

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.

Neural Network Classifiers

Neural network classifiers combine traditional neural architectures with evolutionary optimization. Instead of gradient descent, they use evolutionary algorithms to train networks or evolve architectures.

Classifier                              Description
MLPEAClassifier                         Multi-Layer Perceptron with evolutionary algorithm-based weight optimization (Cotta et al., 2002)
GeneticProgrammingNeuralNetClassifier   Neural network with GP-evolved architecture and EA-optimized weights (Lipinsky & Semenkin, 2006)

MLPEAClassifier

class thefittest.classifiers.MLPEAClassifier(*, n_iter: int = 100, pop_size: int = 500, hidden_layers: Tuple[int, ...] = (0,), activation: str = 'sigma', offset: bool = True, weights_optimizer: Type[DifferentialEvolution] | Type[jDE] | Type[SHADE] | Type[GeneticAlgorithm] | Type[SelfCGA] | Type[SHAGA] | Type[torch.optim.Optimizer] = SHADE, weights_optimizer_args: dict[str, Any] | None = None, random_state: int | np.random.RandomState | None = None, device: str = 'cpu')

Bases: ClassifierMixin, BaseMLPEA

Multi-Layer Perceptron classifier with Evolutionary Algorithm-based training.

This classifier uses evolutionary algorithms to optimize neural network weights instead of traditional gradient-based methods.

Parameters:
n_iter : int, optional (default=100)

Number of iterations (generations) for weight optimization.

pop_size : int, optional (default=500)

Population size for the evolutionary algorithm.

hidden_layers : Tuple[int, ...], optional (default=(0,))

Tuple specifying the number of neurons in each hidden layer. An empty tuple or (0,) means no hidden layers (linear model). Example: (5, 5) creates two hidden layers with 5 neurons each.

activation : str, optional (default='sigma')

Activation function for hidden layers. Available: 'sigma' (sigmoid), 'relu', 'gauss' (Gaussian), 'tanh', 'ln' (natural logarithm normalization), 'softmax'.

offset : bool, optional (default=True)

If True, adds bias terms to the network.

weights_optimizer : Type, optional (default=SHADE)

Evolutionary algorithm class for optimizing weights, or a PyTorch optimizer. Available EA: SHADE, jDE, DifferentialEvolution, SHAGA, etc. Available torch.optim: Adam, SGD, RMSprop, etc. Note: when using torch.optim optimizers, the pop_size parameter is ignored (see the PyTorch optimizer sketch under Examples).

weights_optimizer_args : Optional[dict], optional (default=None)

Additional arguments passed to the weights optimizer (excluding n_iter and pop_size). For EA optimizers: {'show_progress_each': 10}. For torch.optim: {'lr': 0.01, 'weight_decay': 0.0001}. Note: use 'epochs' or 'iters' to set training iterations for torch.optim.

random_state : Optional[Union[int, np.random.RandomState]], optional (default=None)

Random state for reproducibility.

device : str, optional (default='cpu')

Device for PyTorch computations: 'cpu' or 'cuda'.

Notes

Requires PyTorch. Install with: pip install thefittest[torch]

The classifier uses evolutionary algorithms to find optimal network weights, which can be more robust to local minima compared to gradient descent but may require more function evaluations.

Examples

Multi-class Classification with Iris Dataset

>>> from thefittest.optimizers import SHAGA
>>> from thefittest.benchmarks import IrisDataset
>>> from thefittest.classifiers import MLPEAClassifier
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.preprocessing import minmax_scale
>>> from sklearn.metrics import confusion_matrix, f1_score
>>>
>>> # Load and prepare data
>>> data = IrisDataset()
>>> X = data.get_X()
>>> y = data.get_y()
>>> X_scaled = minmax_scale(X)
>>>
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X_scaled, y, test_size=0.1
... )
>>>
>>> # Create and train classifier
>>> model = MLPEAClassifier(
...     n_iter=500,
...     pop_size=500,
...     hidden_layers=[5, 5],
...     weights_optimizer=SHAGA,
...     weights_optimizer_args={'show_progress_each': 10}
... )
>>>
>>> model.fit(X_train, y_train)
>>> predict = model.predict(X_test)
>>>
>>> print("Confusion matrix:\n", confusion_matrix(y_test, predict))
>>> print("F1 score:", f1_score(y_test, predict, average="macro"))
__init__(*, n_iter: int = 100, pop_size: int = 500, hidden_layers: Tuple[int, ...] = (0,), activation: str = 'sigma', offset: bool = True, weights_optimizer: Type[DifferentialEvolution] | Type[jDE] | Type[SHADE] | Type[GeneticAlgorithm] | Type[SelfCGA] | Type[SHAGA] | Type[torch.optim.Optimizer] = SHADE, weights_optimizer_args: dict[str, Any] | None = None, random_state: int | np.random.RandomState | None = None, device: str = 'cpu')
predict_proba(X: ArrayLike) → NDArray[np.float64]

Predict class probabilities for X using the trained neural network.

Parameters:
X : ArrayLike, shape (n_samples, n_features)

Input samples.

Returns:
proba : NDArray[np.float64], shape (n_samples, n_classes)

Class probabilities for each sample.

predict(X: ArrayLike)

Predict class labels for X.

Parameters:
X : ArrayLike, shape (n_samples, n_features)

Input samples.

Returns:
y_pred : ndarray, shape (n_samples,)

Predicted class labels.

fit(X: ArrayLike, y: ArrayLike)

Fit the MLP classifier to the training data X and y.
get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_net() → Net

Get the trained neural network.

Returns:
net : Net

The trained neural network with optimized weights. Can be used for visualization or further analysis.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_stats() → Statistics

Get optimization statistics from the weight training process.

Returns:
stats : Statistics

Statistics object containing fitness history and other metrics collected during the weight optimization process. Returns None if torch.optim optimizer was used.

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → MLPEAClassifier

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.

GeneticProgrammingNeuralNetClassifier

class thefittest.classifiers.GeneticProgrammingNeuralNetClassifier(*, n_iter: int = 15, pop_size: int = 50, input_block_size: int = 1, max_hidden_block_size: int = 9, offset: bool = True, test_sample_ratio: float = 0.5, optimizer: Type[SelfCGP] | Type[GeneticProgramming] = SelfCGP, optimizer_args: dict[str, Any] | None = None, weights_optimizer: Type[DifferentialEvolution] | Type[jDE] | Type[SHADE] | Type[GeneticAlgorithm] | Type[SelfCGA] | Type[SHAGA] | Type[torch.optim.Optimizer] = SHADE, weights_optimizer_args: dict[str, Any] | None = None, net_size_penalty: float = 0.0, random_state: int | np.random.RandomState | None = None, device: str = 'cpu', use_fitness_cache: bool = False, fitness_cache_size: int = 1000)

Bases: ClassifierMixin, BaseGPNN

Genetic Programming-based Neural Network classifier with evolved architecture.

This classifier evolves both the neural network architecture and weights using genetic programming. The network structure is represented as a tree, and weights are optimized using evolutionary algorithms.

Parameters:
n_iter : int, optional (default=15)

Number of iterations (generations) for architecture evolution.

pop_size : int, optional (default=50)

Population size for the architecture evolution.

input_block_size : int, optional (default=1)

Size of input processing blocks.

max_hidden_block_size : int, optional (default=9)

Maximum size of hidden layer blocks.

offset : bool, optional (default=True)

If True, adds bias terms to the network.

test_sample_ratio : float, optional (default=0.5)

Ratio of data to use for validation during evolution.

optimizer : Type[Union[SelfCGP, GeneticProgramming, PDPGP]], optional (default=SelfCGP)

Genetic programming optimizer for evolving the architecture. Available: SelfCGP, GeneticProgramming, or PDPGP.

optimizer_args : Optional[dict], optional (default=None)

Additional arguments for the architecture optimizer (excluding n_iter and pop_size).

weights_optimizer : Type, optional (default=SHADE)

Evolutionary algorithm for optimizing network weights.

weights_optimizer_args : Optional[dict], optional (default=None)

Additional arguments for the weights optimizer. Note: use the 'iters' and 'pop_size' keys to set iterations and population size. Example: {'iters': 150, 'pop_size': 150, 'show_progress_each': 10}.

net_size_penalty : float, optional (default=0.0)

Penalty coefficient for network complexity (larger = simpler networks); see the sketch after this parameter list.

random_state : Optional[Union[int, np.random.RandomState]], optional (default=None)

Random state for reproducibility.

device : str, optional (default='cpu')

Device for PyTorch computations: 'cpu' or 'cuda'.

use_fitness_cache : bool, optional (default=False)

If True, caches fitness evaluations.

fitness_cache_size : int, optional (default=1000)

Maximum size of the fitness cache.
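
For example, a mild complexity penalty and fitness caching can be enabled through the parameters documented above (a sketch; the values are illustrative assumptions, not tuned settings):

>>> from thefittest.classifiers import GeneticProgrammingNeuralNetClassifier
>>>
>>> # Sketch: prefer smaller networks and cache repeated fitness evaluations
>>> model = GeneticProgrammingNeuralNetClassifier(
...     n_iter=10,
...     pop_size=20,
...     net_size_penalty=0.01,
...     use_fitness_cache=True,
...     fitness_cache_size=2000,
... )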

Notes

Requires PyTorch. Install with: pip install thefittest[torch]

This is a two-stage optimization: first, GP evolves the network architecture, then for each architecture, an EA optimizes the weights. This can discover novel network structures but is computationally intensive.

Examples

Multi-class Classification with Evolved Architecture

>>> from thefittest.optimizers import SelfCGP, SHAGA
>>> from thefittest.benchmarks import IrisDataset
>>> from thefittest.classifiers import GeneticProgrammingNeuralNetClassifier
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.preprocessing import minmax_scale
>>> from sklearn.metrics import confusion_matrix, f1_score
>>>
>>> # Load and prepare data
>>> data = IrisDataset()
>>> X = data.get_X()
>>> y = data.get_y()
>>> X_scaled = minmax_scale(X)
>>>
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X_scaled, y, test_size=0.1
... )
>>>
>>> # Create classifier with evolved architecture
>>> model = GeneticProgrammingNeuralNetClassifier(
...     n_iter=10,
...     pop_size=10,
...     optimizer=SelfCGP,
...     optimizer_args={'show_progress_each': 1},
...     weights_optimizer=SHAGA,
...     weights_optimizer_args={'iters': 150, 'pop_size': 150}
... )
>>>
>>> model.fit(X_train, y_train)
>>> predict = model.predict(X_test)
>>>
>>> print("Confusion matrix:\n", confusion_matrix(y_test, predict))
>>> print("F1 score:", f1_score(y_test, predict, average="macro"))
__init__(*, n_iter: int = 15, pop_size: int = 50, input_block_size: int = 1, max_hidden_block_size: int = 9, offset: bool = True, test_sample_ratio: float = 0.5, optimizer: Type[SelfCGP] | Type[GeneticProgramming] = SelfCGP, optimizer_args: dict[str, Any] | None = None, weights_optimizer: Type[DifferentialEvolution] | Type[jDE] | Type[SHADE] | Type[GeneticAlgorithm] | Type[SelfCGA] | Type[SHAGA] | Type[torch.optim.Optimizer] = SHADE, weights_optimizer_args: dict[str, Any] | None = None, net_size_penalty: float = 0.0, random_state: int | np.random.RandomState | None = None, device: str = 'cpu', use_fitness_cache: bool = False, fitness_cache_size: int = 1000)
predict_proba(X: ArrayLike) → NDArray[np.float64]

Predict class probabilities using the evolved neural network.

Parameters:
X : ArrayLike, shape (n_samples, n_features)

Input samples.

Returns:
proba : NDArray[np.float64], shape (n_samples, n_classes)

Class probabilities for each sample.

predict(X: ArrayLike)

Predict class labels for X.

Parameters:
X : ArrayLike, shape (n_samples, n_features)

Input samples.

Returns:
y_pred : ndarray, shape (n_samples,)

Predicted class labels.

fit(X: ArrayLike, y: ArrayLike)

Fit the classifier to the training data X and y, evolving both the network architecture and its weights.
static genotype_to_phenotype_tree(tree: Tree, n_variables: int, n_outputs: int, output_activation: str, offset: bool) → Net

Convert a genotype tree (GP individual) into the corresponding phenotype neural network (Net).
get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_net() → Net

Get the evolved and trained neural network.

Returns:
net : Net

The neural network with GP-evolved architecture and optimized weights. Can be used for visualization or further analysis.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_stats() → Statistics

Get optimization statistics from the architecture evolution process.

Returns:
stats : Statistics

Statistics object containing fitness history and other metrics collected during the architecture evolution process.

get_tree() → Tree

Get the evolved tree representing the network architecture.

Returns:
tree : Tree

The tree expression that encodes the evolved neural network structure. Each node represents network layers and connections.

score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:
X : array-like of shape (n_samples, n_features)

Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → GeneticProgrammingNeuralNetClassifier

Request metadata passed to the score method.

Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

New in version 1.3.

Note

This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.

Parameters:
sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns:
self : object

The updated object.