
Classifier

Bases: TPOTEstimator

Source code in tpot/tpot_estimator/templates/tpottemplates.py
class TPOTClassifier(TPOTEstimator):
    def __init__(       self,
                        search_space = "linear",
                        scorers=['roc_auc_ovr'], 
                        scorers_weights=[1],
                        cv = 10,
                        other_objective_functions=[], #tpot.objectives.estimator_objective_functions.number_of_nodes_objective],
                        other_objective_functions_weights = [],
                        objective_function_names = None,
                        bigger_is_better = True,
                        categorical_features = None,
                        memory = None,
                        preprocessing = False,
                        max_time_mins=60, 
                        max_eval_time_mins=10, 
                        n_jobs = 1,
                        validation_strategy = "none",
                        validation_fraction = .2, 
                        early_stop = None,
                        warm_start = False,
                        periodic_checkpoint_folder = None, 
                        verbose = 2,
                        memory_limit = None,
                        client = None,
                        random_state=None,
                        allow_inner_classifiers=None,
                        **tpotestimator_kwargs,

        ):
        """
        A scikit-learn BaseEstimator that uses genetic programming to optimize a classification pipeline.
        For more parameters, see the TPOTEstimator class.

        Parameters
        ----------

        search_space : (String, tpot.search_spaces.SearchSpace)
            - String : The default search space to use for the optimization.

            | String | Description |
            | :--- | :---- |
            | linear | A linear pipeline with the structure "Selector->(transformers+Passthrough)->(classifiers/regressors+Passthrough)->final classifier/regressor." For both the transformer and inner estimator layers, TPOT may choose one or more transformers/classifiers, or none. The inner classifier/regressor layer is optional. |
            | linear-light | Same search space as linear, but without the inner classifier/regressor layer and with a reduced set of faster running estimators. |
            | graph | TPOT will optimize a pipeline in the shape of a directed acyclic graph. The nodes of the graph can include selectors, scalers, transformers, or classifiers/regressors (inner classifiers/regressors can optionally be excluded). This returns a custom GraphPipeline rather than an sklearn Pipeline. More details in Tutorial 6. |
            | graph-light | Same as the graph search space, but without the inner classifiers/regressors and with a reduced set of faster running estimators. |
            | mdr | TPOT will search over a series of feature selectors and Multifactor Dimensionality Reduction models to find a series of operators that maximize prediction accuracy. The TPOT MDR configuration is specialized for genome-wide association studies (GWAS), and is described in detail online here. Note that TPOT MDR may be slow to run because the feature selection routines are computationally expensive, especially on large datasets. |

            - SearchSpace : An instance of a SearchSpace to use for the optimization.
                TPOT's built-in search spaces are grouped into tpot.search_spaces.nodes (nodes in the pipeline) and tpot.search_spaces.pipelines (pipeline structure).

        scorers : (list, scorer)
            A scorer or list of scorers to be used in the cross-validation process.
            see https://scikit-learn.cn/stable/modules/model_evaluation.html

        scorers_weights : list
            A list of weights to be applied to the scorers during the optimization process.

        classification : bool
            If True, the problem is treated as a classification problem. If False, the problem is treated as a regression problem.
            Used to determine the CV strategy.

        cv : int, cross-validator
            - (int): Number of folds to use in the cross-validation process. By default, uses the sklearn.model_selection.KFold cross-validator for regression and StratifiedKFold for classification. In both cases, shuffle is set to True.
            - (sklearn.model_selection.BaseCrossValidator): A cross-validator to use in the cross-validation process.

        other_objective_functions : list, default=[]
            A list of other objective functions to apply to the pipeline. Each function takes the (graph)pipeline estimator as its single parameter and returns either a single score or a list of scores.

        other_objective_functions_weights : list, default=[]
            A list of weights to be applied to the other objective functions.

        objective_function_names : list, default=None
            A list of names to be applied to the objective functions. If None, will use the names of the objective functions.

        bigger_is_better : bool, default=True
            If True, the objective function is maximized. If False, the objective function is minimized. Use negative weights to reverse the direction.

        categorical_features : list or None
            Categorical columns to impute and/or one hot encode during the preprocessing step. Used only if preprocessing is not False.
            - None : TPOT will automatically treat object columns in pandas dataframes as categorical for one hot encoding in preprocessing.
            - List of categorical features : If X is a dataframe, this should be a list of column names. If X is a numpy array, this should be a list of column indices.


        memory : Memory object or string, default=None
            If supplied, the pipeline will cache each transformer after calling fit with joblib.Memory. This feature
            is used to avoid recomputing fitted transformers within a pipeline when the parameters
            and input data are identical to those of another fitted pipeline during the optimization process.
            - String 'auto':
                TPOT uses memory caching with a temporary directory and cleans it up upon shutdown.
            - String path of a caching directory
                TPOT uses memory caching with the provided directory and TPOT does NOT clean
                the caching directory up upon shutdown. If the directory does not exist, TPOT will
                create it.
            - Memory object:
                TPOT uses the instance of joblib.Memory for memory caching,
                and TPOT does NOT clean the caching directory up upon shutdown.
            - None:
                TPOT does not use memory caching.

        preprocessing : bool or BaseEstimator/Pipeline, default=False
            EXPERIMENTAL
            A pipeline that will be used to preprocess the data before CV. Note that the parameters of these steps are not optimized; add them to the search space if they should be.
            - bool : If True, will use a default preprocessing pipeline which includes imputation followed by one hot encoding.
            - Pipeline : If an instance of a pipeline is given, will use that pipeline as the preprocessing pipeline.

        max_time_mins : float, default=60
            Maximum time in minutes to run the optimization. If None or inf, will run until the end of the generations.

        max_eval_time_mins : float, default=10
            Maximum time in minutes to evaluate a single individual. If None or inf, there will be no time limit per evaluation.


        n_jobs : int, default=1
            Number of processes to run in parallel.

        validation_strategy : str, default='none'
            EXPERIMENTAL The validation strategy to use for selecting the final pipeline from the population. TPOT may overfit the cross validation score. A second validation set can be used to select the final pipeline.
            - 'auto' : Automatically determine the validation strategy based on the dataset shape.
            - 'reshuffled' : Use the same data for cross validation and final validation, but with different splits for the folds. This is the default for small datasets.
            - 'split' : Use a separate validation set for final validation. Data will be split according to validation_fraction. This is the default for medium datasets.
            - 'none' : Do not use a separate validation set for final validation. Select based on the original cross-validation score. This is the default for large datasets.

        validation_fraction : float, default=0.2
          EXPERIMENTAL The fraction of the dataset to use for the validation set when validation_strategy is 'split'. Must be between 0 and 1.

        early_stop : int, default=None
            Number of generations without improvement before early stopping. All objectives must have converged within the tolerance for this to be triggered. In general a value of around 5-20 is good.

        warm_start : bool, default=False
            If True, will continue the evolutionary algorithm from the last generation of the previous run.

        periodic_checkpoint_folder : str, default=None
            Folder to save the population to periodically. If None, no periodic saving will be done.
            If provided, training will resume from this checkpoint.


        verbose : int, default=2
            How much information to print during the optimization process. Higher values include the information from lower values.
            0. nothing
            1. progress bar
            3. best individual
            4. warnings
            >=5. full warnings trace
            6. evaluations progress bar. (Temporary: this level used to be 2. Currently, using the evaluation progress bar may prevent some cases where a generation is terminated early because max_time_mins is reached mid-generation, or where a pipeline fails to terminate normally and must be terminated manually.)


        memory_limit : str, default=None
            Memory limit for each job. See Dask [LocalCluster documentation](https://distributed.dask.org.cn/en/stable/api.html#distributed.Client) for more information.

        client : dask.distributed.Client, default=None
            A dask client to use for parallelization. If not None, this will override the n_jobs and memory_limit parameters. If None, will create a new client with num_workers=n_jobs and memory_limit=memory_limit.

        random_state : int, None, default=None
            A seed for reproducibility of experiments. This value will be passed to numpy.random.default_rng() to create an instance of the generator to pass to other classes.

            - int
                Will be used to create and lock in a Generator instance with 'numpy.random.default_rng()'
            - None
                Will be used to create a Generator with 'numpy.random.default_rng()', where fresh, unpredictable entropy is pulled from the OS

        allow_inner_classifiers : bool, default=True
            If True, the search space will include ensembled classifiers. 

        Attributes
        ----------

        fitted_pipeline_ : GraphPipeline
            A fitted instance of the GraphPipeline that inherits from sklearn BaseEstimator. This is fitted on the full X, y passed to fit.

        evaluated_individuals : A pandas data frame containing data for all evaluated individuals in the run.
            Columns:
            - *objective functions : The first few columns correspond to the passed in scorers and objective functions
            - Parents : A tuple containing the indexes of the pipelines used to generate the pipeline of that row. If NaN, this pipeline was generated randomly in the initial population.
            - Variation_Function : Which variation function was used to mutate or crossover the parents. If NaN, this pipeline was generated randomly in the initial population.
            - Individual : The internal representation of the individual that is used during the evolutionary algorithm. This is not an sklearn BaseEstimator.
            - Generation : The generation the pipeline first appeared.
            - Pareto_Front : The nondominated front that this pipeline belongs to. 0 means that its scores are not strictly dominated by any other individual.
                            To save on computational time, the best frontier is updated iteratively each generation.
                            The pipelines with a Pareto_Front of 0 do represent the exact best frontier. However, pipelines with a Pareto_Front >= 1 are ranked only relative to the other pipelines in the final population.
                            All other pipelines are set to NaN.
            - Instance : The unfitted GraphPipeline BaseEstimator.
            - *validation objective functions : Objective function scores evaluated on the validation set.
            - Validation_Pareto_Front : The full pareto front calculated on the validation set. This is calculated for all pipelines with Pareto_Front equal to 0. Unlike Pareto_Front, which is exact only for the frontier and the final population, the Validation Pareto Front is calculated for all pipelines tested on the validation set.

        pareto_front : The same pandas dataframe as evaluated_individuals, but containing only the pipelines on the Pareto front.
        """
        self.search_space = search_space
        self.scorers = scorers
        self.scorers_weights = scorers_weights
        self.cv = cv
        self.other_objective_functions = other_objective_functions
        self.other_objective_functions_weights = other_objective_functions_weights
        self.objective_function_names = objective_function_names
        self.bigger_is_better = bigger_is_better
        self.categorical_features = categorical_features
        self.memory = memory
        self.preprocessing = preprocessing
        self.max_time_mins = max_time_mins
        self.max_eval_time_mins = max_eval_time_mins
        self.n_jobs = n_jobs
        self.validation_strategy = validation_strategy
        self.validation_fraction = validation_fraction
        self.early_stop = early_stop
        self.warm_start = warm_start
        self.periodic_checkpoint_folder = periodic_checkpoint_folder
        self.verbose = verbose
        self.memory_limit = memory_limit
        self.client = client
        self.random_state = random_state
        self.tpotestimator_kwargs = tpotestimator_kwargs
        self.allow_inner_classifiers = allow_inner_classifiers

        self.initialized = False

    def fit(self, X, y):

        if not self.initialized:

            get_search_space_params = {"n_classes": len(np.unique(y)), 
                                       "n_samples":len(y), 
                                       "n_features":X.shape[1], 
                                       "random_state":self.random_state}

            search_space = get_template_search_spaces(self.search_space, classification=True, inner_predictors=self.allow_inner_classifiers, **get_search_space_params)


            super(TPOTClassifier,self).__init__(
                search_space=search_space,
                scorers=self.scorers, 
                scorers_weights=self.scorers_weights,
                cv = self.cv,
                other_objective_functions=self.other_objective_functions, #tpot.objectives.estimator_objective_functions.number_of_nodes_objective],
                other_objective_functions_weights = self.other_objective_functions_weights,
                objective_function_names = self.objective_function_names,
                bigger_is_better = self.bigger_is_better,
                categorical_features = self.categorical_features,
                memory = self.memory,
                preprocessing = self.preprocessing,
                max_time_mins=self.max_time_mins, 
                max_eval_time_mins=self.max_eval_time_mins, 
                n_jobs=self.n_jobs,
                validation_strategy = self.validation_strategy,
                validation_fraction = self.validation_fraction, 
                early_stop = self.early_stop,
                warm_start = self.warm_start,
                periodic_checkpoint_folder = self.periodic_checkpoint_folder, 
                verbose = self.verbose,
                classification=True,
                memory_limit = self.memory_limit,
                client = self.client,
                random_state=self.random_state,
                **self.tpotestimator_kwargs)
            self.initialized = True

        return super().fit(X,y)


    def predict(self, X, **predict_params):
        check_is_fitted(self)
        #X=check_array(X)
        return self.fitted_pipeline_.predict(X,**predict_params)

__init__(search_space='linear', scorers=['roc_auc_ovr'], scorers_weights=[1], cv=10, other_objective_functions=[], other_objective_functions_weights=[], objective_function_names=None, bigger_is_better=True, categorical_features=None, memory=None, preprocessing=False, max_time_mins=60, max_eval_time_mins=10, n_jobs=1, validation_strategy='none', validation_fraction=0.2, early_stop=None, warm_start=False, periodic_checkpoint_folder=None, verbose=2, memory_limit=None, client=None, random_state=None, allow_inner_classifiers=None, **tpotestimator_kwargs)

A scikit-learn BaseEstimator that uses genetic programming to optimize a classification pipeline. For more parameters, see the TPOTEstimator class.
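For orientation, a minimal usage sketch (the dataset, time budget, and settings below are illustrative; it assumes the class is importable as tpot.TPOTClassifier):

```python
# Minimal usage sketch: fit TPOTClassifier on a small binary classification task.
# A real run may take several minutes depending on max_time_mins.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from tpot import TPOTClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

est = TPOTClassifier(search_space="linear-light", max_time_mins=5,
                     n_jobs=4, random_state=42, verbose=1)
est.fit(X_train, y_train)
print(roc_auc_score(y_test, est.predict(X_test)))
```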

Parameters

search_space : (String, tpot.search_spaces.SearchSpace), default='linear'
    - String : The default search space to use for the optimization.

    | String | Description |
    | :--- | :---- |
    | linear | A linear pipeline with the structure "Selector->(transformers+Passthrough)->(classifiers/regressors+Passthrough)->final classifier/regressor." For both the transformer and inner estimator layers, TPOT may choose one or more transformers/classifiers, or none. The inner classifier/regressor layer is optional. |
    | linear-light | Same search space as linear, but without the inner classifier/regressor layer and with a reduced set of faster running estimators. |
    | graph | TPOT will optimize a pipeline in the shape of a directed acyclic graph. The nodes of the graph can include selectors, scalers, transformers, or classifiers/regressors (inner classifiers/regressors can optionally be excluded). This returns a custom GraphPipeline rather than an sklearn Pipeline. More details in Tutorial 6. |
    | graph-light | Same as the graph search space, but without the inner classifiers/regressors and with a reduced set of faster running estimators. |
    | mdr | TPOT will search over a series of feature selectors and Multifactor Dimensionality Reduction models to find a series of operators that maximize prediction accuracy. The TPOT MDR configuration is specialized for genome-wide association studies (GWAS), and is described in detail online here. Note that TPOT MDR may be slow to run because the feature selection routines are computationally expensive, especially on large datasets. |

    - SearchSpace : An instance of a SearchSpace to use for the optimization. TPOT's built-in search spaces are grouped into tpot.search_spaces.nodes (nodes in the pipeline) and tpot.search_spaces.pipelines (pipeline structure).
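A sketch of the SearchSpace option, assuming the tpot.config.get_search_space helper and the SequentialPipeline class described in the TPOT tutorials:

```python
# Hedged sketch: build a custom linear search space from built-in node groups.
# Assumes tpot.config.get_search_space and SequentialPipeline exist as in the tutorials.
import tpot
from tpot.search_spaces.pipelines import SequentialPipeline

search_space = SequentialPipeline([
    tpot.config.get_search_space("selectors"),     # feature selectors
    tpot.config.get_search_space("transformers"),  # feature transformers
    tpot.config.get_search_space("classifiers"),   # final classifier
])
est = tpot.TPOTClassifier(search_space=search_space)
```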
scorers : (list, scorer), default=['roc_auc_ovr']
    A scorer or list of scorers to be used in the cross-validation process.
    See https://scikit-learn.cn/stable/modules/model_evaluation.html
scorers_weights : list, default=[1]
    A list of weights to be applied to the scorers during the optimization process.
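For example, a sketch with two standard scikit-learn scorer names (the 1:1 weighting is illustrative, not a recommendation):

```python
# Sketch: optimize two scikit-learn scorers jointly.
est = TPOTClassifier(scorers=["roc_auc_ovr", "accuracy"],
                     scorers_weights=[1, 1])
```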
classification : bool
    If True, the problem is treated as a classification problem; if False, as a regression problem. Used to determine the CV strategy. Required by TPOTEstimator; TPOTClassifier passes classification=True automatically (see fit above).
cv : int or cross-validator, default=10
    - (int): Number of folds to use in the cross-validation process. By default, uses the sklearn.model_selection.KFold cross-validator for regression and StratifiedKFold for classification. In both cases, shuffle is set to True.
    - (sklearn.model_selection.BaseCrossValidator): A cross-validator to use in the cross-validation process.
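A sketch of the cross-validator option, using a standard scikit-learn splitter:

```python
# Sketch: pass a scikit-learn cross-validator instead of a fold count.
from sklearn.model_selection import StratifiedKFold

est = TPOTClassifier(cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
```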
other_objective_functions : list, default=[]
    A list of other objective functions to apply to the pipeline. Each function takes the (graph)pipeline estimator as its single parameter and returns either a single score or a list of scores. See the sketch after objective_function_names below.
other_objective_functions_weights : list, default=[]
    A list of weights to be applied to the other objective functions.
objective_function_names : list, default=None
    A list of names to be applied to the objective functions. If None, will use the names of the objective functions.
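A hedged sketch of a secondary objective matching the contract above (a function of the pipeline estimator returning one score); number_of_steps is a hypothetical helper, not part of TPOT:

```python
# Hypothetical secondary objective: count pipeline steps as a complexity measure.
def number_of_steps(est):
    # sklearn Pipelines expose .steps; fall back to 1 for single estimators
    return len(est.steps) if hasattr(est, "steps") else 1

est = TPOTClassifier(
    other_objective_functions=[number_of_steps],
    other_objective_functions_weights=[-1],  # negative weight: fewer steps preferred
    objective_function_names=["n_steps"],
)
```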
bigger_is_better : bool, default=True
    If True, the objective function is maximized. If False, the objective function is minimized. Use negative weights to reverse the direction.
categorical_features : list or None, default=None
    Categorical columns to impute and/or one hot encode during the preprocessing step. Used only if preprocessing is not False.
    - None : TPOT will automatically treat object columns in pandas dataframes as categorical for one hot encoding in preprocessing.
    - List of categorical features : If X is a dataframe, this should be a list of column names. If X is a numpy array, this should be a list of column indices.
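For example (a sketch; "color" and "country" are hypothetical column names of a pandas DataFrame X):

```python
# Sketch: name the categorical columns explicitly when X is a DataFrame.
est = TPOTClassifier(preprocessing=True,
                     categorical_features=["color", "country"])
```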
memory : Memory object or string, default=None
    If supplied, the pipeline will cache each transformer after calling fit with joblib.Memory. This feature is used to avoid recomputing fitted transformers within a pipeline when the parameters and input data are identical to those of another fitted pipeline during the optimization process.
    - String 'auto' : TPOT uses memory caching with a temporary directory and cleans it up upon shutdown.
    - String path of a caching directory : TPOT uses memory caching with the provided directory and does NOT clean the caching directory up upon shutdown. If the directory does not exist, TPOT will create it.
    - Memory object : TPOT uses the instance of joblib.Memory for memory caching and does NOT clean the caching directory up upon shutdown.
    - None : TPOT does not use memory caching.
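A sketch of the Memory-object option ("./tpot_cache" is an arbitrary path, which per the above will not be cleaned up on shutdown):

```python
# Sketch: cache fitted transformers across pipelines with joblib.Memory.
from joblib import Memory

est = TPOTClassifier(memory=Memory(location="./tpot_cache", verbose=0))
# or simply: TPOTClassifier(memory="auto")  # temporary dir, cleaned up on shutdown
```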
preprocessing : bool or BaseEstimator/Pipeline, default=False
    EXPERIMENTAL. A pipeline that will be used to preprocess the data before CV. Note that the parameters of these steps are not optimized; add them to the search space if they should be.
    - bool : If True, will use a default preprocessing pipeline which includes imputation followed by one hot encoding.
    - Pipeline : If an instance of a pipeline is given, will use that pipeline as the preprocessing pipeline.
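A sketch of the Pipeline option (the imputation strategy is illustrative):

```python
# Sketch: supply a fixed (non-optimized) preprocessing pipeline.
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

prep = Pipeline([("impute", SimpleImputer(strategy="median"))])
est = TPOTClassifier(preprocessing=prep)
```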
max_time_mins : float, default=60
    Maximum time in minutes to run the optimization. If None or inf, will run until the end of the generations.
max_eval_time_mins : float, default=10
    Maximum time in minutes to evaluate a single individual. If None or inf, there will be no time limit per evaluation.
n_jobs : int, default=1
    Number of processes to run in parallel.
validation_strategy : str, default='none'
    EXPERIMENTAL. The validation strategy to use for selecting the final pipeline from the population. TPOT may overfit the cross-validation score; a second validation set can be used to select the final pipeline.
    - 'auto' : Automatically determine the validation strategy based on the dataset shape.
    - 'reshuffled' : Use the same data for cross validation and final validation, but with different splits for the folds. This is the default for small datasets.
    - 'split' : Use a separate validation set for final validation. Data will be split according to validation_fraction. This is the default for medium datasets.
    - 'none' : Do not use a separate validation set for final validation. Select based on the original cross-validation score. This is the default for large datasets.
validation_fraction : float, default=0.2
    EXPERIMENTAL. The fraction of the dataset to use for the validation set when validation_strategy is 'split'. Must be between 0 and 1.
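For example (a sketch; the 25% fraction is illustrative):

```python
# Sketch: hold out 25% of the data to select the final pipeline.
est = TPOTClassifier(validation_strategy="split", validation_fraction=0.25)
```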
early_stop : int, default=None
    Number of generations without improvement before early stopping. All objectives must have converged within the tolerance for this to be triggered. In general, a value of around 5-20 works well.
warm_start : bool, default=False
    If True, will continue the evolutionary algorithm from the last generation of the previous run.
periodic_checkpoint_folder : str, default=None
    Folder to save the population to periodically. If None, no periodic saving will be done. If provided, training will resume from this checkpoint.
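A sketch of checkpointing ("./tpot_ckpt" is an arbitrary path; per the above, rerunning with the same folder resumes training):

```python
# Sketch: save the population periodically so a later run can resume.
est = TPOTClassifier(periodic_checkpoint_folder="./tpot_ckpt", max_time_mins=30)
```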
verbose : int, default=2
    How much information to print during the optimization process. Higher values include the information from lower values.
    0. nothing
    1. progress bar
    3. best individual
    4. warnings
    >=5. full warnings trace
    6. evaluations progress bar. (Temporary: this level used to be 2. Currently, using the evaluation progress bar may prevent some cases where a generation is terminated early because max_time_mins is reached mid-generation, or where a pipeline fails to terminate normally and must be terminated manually.)
memory_limit : str, default=None
    Memory limit for each job. See the Dask [LocalCluster documentation](https://distributed.dask.org.cn/en/stable/api.html#distributed.Client) for more information.
client : dask.distributed.Client, default=None
    A dask client to use for parallelization. If not None, this will override the n_jobs and memory_limit parameters. If None, a new client will be created with num_workers=n_jobs and memory_limit=memory_limit.
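A sketch of supplying your own client (the worker count and memory limit are illustrative):

```python
# Sketch: share one Dask client instead of letting TPOT create its own.
from dask.distributed import Client

client = Client(n_workers=4, threads_per_worker=1, memory_limit="4GB")
est = TPOTClassifier(client=client)
```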
random_state : int or None, default=None
    A seed for reproducibility of experiments. This value will be passed to numpy.random.default_rng() to create an instance of the generator to pass to other classes.
    - int : Will be used to create and lock in a Generator instance with numpy.random.default_rng().
    - None : A Generator will be created with numpy.random.default_rng(), with fresh, unpredictable entropy pulled from the OS.
allow_inner_classifiers : bool, default=True
    If True, the search space will include ensembled classifiers.

Attributes

fitted_pipeline_ : GraphPipeline
    A fitted instance of the GraphPipeline that inherits from sklearn BaseEstimator. This is fitted on the full X, y passed to fit.

evaluated_individuals : pandas DataFrame
    A pandas data frame containing data for all evaluated individuals in the run. Columns:
    - *objective functions : The first few columns correspond to the passed-in scorers and objective functions.
    - Parents : A tuple containing the indexes of the pipelines used to generate the pipeline of that row. If NaN, this pipeline was generated randomly in the initial population.
    - Variation_Function : Which variation function was used to mutate or crossover the parents. If NaN, this pipeline was generated randomly in the initial population.
    - Individual : The internal representation of the individual that is used during the evolutionary algorithm. This is not an sklearn BaseEstimator.
    - Generation : The generation the pipeline first appeared.
    - Pareto_Front : The nondominated front that this pipeline belongs to. 0 means that its scores are not strictly dominated by any other individual. To save on computational time, the best frontier is updated iteratively each generation. The pipelines with a Pareto_Front of 0 do represent the exact best frontier; pipelines with a Pareto_Front >= 1 are ranked only relative to the other pipelines in the final population. All other pipelines are set to NaN.
    - Instance : The unfitted GraphPipeline BaseEstimator.
    - *validation objective functions : Objective function scores evaluated on the validation set.
    - Validation_Pareto_Front : The full pareto front calculated on the validation set. This is calculated for all pipelines with Pareto_Front equal to 0. Unlike Pareto_Front, which is exact only for the frontier and the final population, the Validation Pareto Front is calculated for all pipelines tested on the validation set.

pareto_front : pandas DataFrame
    The same pandas dataframe as evaluated_individuals, but containing only the pipelines on the Pareto front.
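A sketch of inspecting these attributes after fitting (column names per the table above):

```python
# Sketch: after est.fit(X, y), examine the best pipeline and the evaluated population.
best_model = est.fitted_pipeline_          # fitted sklearn-compatible pipeline
df = est.evaluated_individuals             # one row per evaluated pipeline
frontier = df[df["Pareto_Front"] == 0]     # the exact best frontier
print(frontier[["Generation", "Instance"]].head())
```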