
Steady State Estimator

This file is part of the TPOT library.

The current version of TPOT was developed at Cedars-Sinai by:
- Pedro Henrique Ribeiro (https://github.com/perib, https://www.linkedin.com/in/pedro-ribeiro/)
- Anil Saini (anil.saini@cshs.org)
- Jose Hernandez (jgh9094@gmail.com)
- Jay Moran (jay.moran@cshs.org)
- Nicholas Matsumoto (nicholas.matsumoto@cshs.org)
- Hyunjun Choi (hyunjun.choi@cshs.org)
- Gabriel Ketron (gabriel.ketron@cshs.org)
- Miguel E. Hernandez (miguel.e.hernandez@cshs.org)
- Jason Moore (moorejh28@gmail.com)

The original version of TPOT was primarily developed at the University of Pennsylvania by:
- Randal S. Olson (rso@randalolson.com)
- Weixuan Fu (weixuanf@upenn.edu)
- Daniel Angell (dpa34@drexel.edu)
- Jason Moore (moorejh28@gmail.com)
- and many more generous open-source contributors

TPOT is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

TPOT is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with TPOT. If not, see https://www.gnu.org/licenses/

TPOTEstimatorSteadyState

Bases: BaseEstimator

Source code in tpot/tpot_estimator/steady_state_estimator.py
class TPOTEstimatorSteadyState(BaseEstimator):
    def __init__(self,  
                        search_space,
                        scorers= [],
                        scorers_weights = [],
                        classification = False,
                        cv = 10,
                        other_objective_functions=[], #tpot.objectives.estimator_objective_functions.number_of_nodes_objective],
                        other_objective_functions_weights = [],
                        objective_function_names = None,
                        bigger_is_better = True,


                        export_graphpipeline = False,
                        memory = None,

                        categorical_features = None,
                        subsets = None,
                        preprocessing = False,
                        validation_strategy = "none",
                        validation_fraction = .2,
                        disable_label_encoder = False,

                        initial_population_size = 50,
                        population_size = 50,
                        max_evaluated_individuals = None,



                        early_stop = None,
                        early_stop_mins = None,
                        scorers_early_stop_tol = 0.001,
                        other_objectives_early_stop_tol = None,
                        max_time_mins=None,
                        max_eval_time_mins=10,
                        n_jobs=1,
                        memory_limit = None,
                        client = None,

                        crossover_probability=.2,
                        mutate_probability=.7,
                        mutate_then_crossover_probability=.05,
                        crossover_then_mutate_probability=.05,
                        survival_selector = survival_select_NSGA2,
                        parent_selector = tournament_selection_dominated,
                        budget_range = None,
                        budget_scaling = .5,
                        individuals_until_end_budget = 1,
                        stepwise_steps = 5,

                        warm_start = False,

                        verbose = 0,
                        periodic_checkpoint_folder = None,
                        callback = None,
                        processes = True,

                        scatter = True,

                        # random seed for random number generator (rng)
                        random_state = None,

                        optuna_optimize_pareto_front = False,
                        optuna_optimize_pareto_front_trials = 100,
                        optuna_optimize_pareto_front_timeout = 60*10,
                        optuna_storage = "sqlite:///optuna.db",
                        ):

        '''
        An sklearn BaseEstimator that uses genetic programming to optimize a pipeline.

        Parameters
        ----------

        scorers : (list, scorer)
            A scorer or list of scorers to be used in the cross-validation process.
            see https://scikit-learn.org/stable/modules/model_evaluation.html

        scorers_weights : list
            A list of weights to be applied to the scorers during the optimization process.

        classification : bool
            If True, the problem is treated as a classification problem. If False, the problem is treated as a regression problem.
            Used to determine the CV strategy.

        cv : int, cross-validator
            - (int): Number of folds to use in the cross-validation process. By default, uses the sklearn.model_selection.KFold cross-validator for regression and StratifiedKFold for classification. In both cases, shuffle is set to True.
            - (sklearn.model_selection.BaseCrossValidator): A cross-validator to use in the cross-validation process.

        other_objective_functions : list, default=[]
            A list of other objective functions to apply to the pipeline. Each function takes the graphpipeline estimator as its single parameter and returns either a single score or a list of scores.

        other_objective_functions_weights : list, default=[]
            A list of weights to be applied to the other objective functions.

        objective_function_names : list, default=None
            A list of names to be applied to the objective functions. If None, will use the names of the objective functions.

        bigger_is_better : bool, default=True
            If True, the objective function is maximized. If False, the objective function is minimized. Use negative weights to reverse the direction.


        max_size : int, default=np.inf
            The maximum number of nodes of the pipelines to be generated.

        linear_pipeline : bool, default=False
            If True, the pipelines generated will be linear. If False, the pipelines generated will be directed acyclic graphs.

        root_config_dict : dict, default='auto'
            The configuration dictionary to use for the root node of the model.
            If 'auto', will use "classifiers" if classification=True, else "regressors".
            - 'selectors' : A selection of sklearn Selector methods.
            - 'classifiers' : A selection of sklearn Classifier methods.
            - 'regressors' : A selection of sklearn Regressor methods.
            - 'transformers' : A selection of sklearn Transformer methods.
            - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
            - 'passthrough' : A node that just passes through the input. Useful for passing raw inputs through to inner nodes.
            - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf.
                                        Subsets are set with the subsets parameter.
            - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
            - 'MDR' : Includes MDR.
            - 'ContinuousMDR' : Includes ContinuousMDR.
            - 'genetic encoders' : Includes Genetic Encoder methods as used in AutoQTL.
            - 'FeatureEncodingFrequencySelector': Includes FeatureEncodingFrequencySelector method as used in AutoQTL.
            - list : a list of strings out of the above options to include the corresponding methods in the configuration dictionary.

        inner_config_dict : dict, default=["selectors", "transformers"]
            The configuration dictionary to use for the inner nodes of the model generation.
            Default ["selectors", "transformers"]
            - 'selectors' : A selection of sklearn Selector methods.
            - 'classifiers' : A selection of sklearn Classifier methods.
            - 'regressors' : A selection of sklearn Regressor methods.
            - 'transformers' : A selection of sklearn Transformer methods.
            - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
            - 'passthrough' : A node that just passes through the input. Useful for passing raw inputs through to inner nodes.
            - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf.
                                        Subsets are set with the subsets parameter.
            - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
            - 'MDR' : Includes MDR.
            - 'ContinuousMDR' : Includes ContinuousMDR.
            - 'genetic encoders' : Includes Genetic Encoder methods as used in AutoQTL.
            - 'FeatureEncodingFrequencySelector': Includes FeatureEncodingFrequencySelector method as used in AutoQTL.
            - list : a list of strings out of the above options to include the corresponding methods in the configuration dictionary.
            - None : If None and max_depth>1, the root_config_dict will be used for the inner nodes as well.

        leaf_config_dict : dict, default=None
            The configuration dictionary to use for the leaf node of the model. If set, leaf nodes must be from this dictionary.
            Otherwise leaf nodes will be generated from the root_config_dict.
            Default None
            - 'selectors' : A selection of sklearn Selector methods.
            - 'classifiers' : A selection of sklearn Classifier methods.
            - 'regressors' : A selection of sklearn Regressor methods.
            - 'transformers' : A selection of sklearn Transformer methods.
            - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
            - 'passthrough' : A node that just passes through the input. Useful for passing raw inputs through to inner nodes.
            - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf.
                                        Subsets are set with the subsets parameter.
            - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
            - 'MDR' : Includes MDR.
            - 'ContinuousMDR' : Includes ContinuousMDR.
            - 'genetic encoders' : Includes Genetic Encoder methods as used in AutoQTL.
            - 'FeatureEncodingFrequencySelector': Includes FeatureEncodingFrequencySelector method as used in AutoQTL.
            - list : a list of strings out of the above options to include the corresponding methods in the configuration dictionary.
            - None : If None, a leaf will not be required (i.e. the pipeline can be a single root node). Leaf nodes will be generated from the inner_config_dict.

        categorical_features: list or None
            Categorical columns to impute and/or one hot encode during the preprocessing step. Used only if preprocessing is not False.
            - None : If None, TPOT will automatically treat object columns in pandas dataframes as categorical columns for one hot encoding in preprocessing.
            - List of categorical features. If X is a dataframe, this should be a list of column names. If X is a numpy array, this should be a list of column indices.


        memory: Memory object or string, default=None
            If supplied, the pipeline will cache each transformer after calling fit with joblib.Memory. This feature
            is used to avoid recomputing fitted transformers within a pipeline if the parameters
            and input data are identical to those of another pipeline fitted during the optimization process.
            - String 'auto':
                TPOT uses memory caching with a temporary directory and cleans it up upon shutdown.
            - String path of a caching directory
                TPOT uses memory caching with the provided directory and TPOT does NOT clean
                the caching directory up upon shutdown. If the directory does not exist, TPOT will
                create it.
            - Memory object:
                TPOT uses the instance of joblib.Memory for memory caching,
                and TPOT does NOT clean the caching directory up upon shutdown.
            - None:
                TPOT does not use memory caching.

        preprocessing : bool or BaseEstimator/Pipeline, default=False
            EXPERIMENTAL
            A pipeline that will be used to preprocess the data before CV.
            - bool : If True, will use a default preprocessing pipeline.
            - Pipeline : If an instance of a pipeline is given, will use that pipeline as the preprocessing pipeline.

        validation_strategy : str, default='none'
            EXPERIMENTAL The validation strategy to use for selecting the final pipeline from the population. TPOT may overfit the cross validation score. A second validation set can be used to select the final pipeline.
            - 'auto' : Automatically determine the validation strategy based on the dataset shape.
            - 'reshuffled' : Use the same data for cross validation and final validation, but with different splits for the folds. This is the default for small datasets.
            - 'split' : Use a separate validation set for final validation. Data will be split according to validation_fraction. This is the default for medium datasets.
            - 'none' : Do not use a separate validation set for final validation. Select based on the original cross-validation score. This is the default for large datasets.

        validation_fraction : float, default=0.2
          EXPERIMENTAL The fraction of the dataset to use for the validation set when validation_strategy is 'split'. Must be between 0 and 1.

        disable_label_encoder : bool, default=False
            If True, TPOT will check if the target needs to be relabeled to be sequential ints from 0 to N. This is necessary for XGBoost compatibility. If the labels need to be encoded, TPOT will use sklearn.preprocessing.LabelEncoder to encode the labels. The encoder can be accessed via the self.label_encoder_ attribute.
            If False, no additional label encoders will be used.

        population_size : int, default=50
            Size of the population

        initial_population_size : int, default=50
            Size of the initial population. If None, population_size will be used.

        population_scaling : float, default=0.5
            Scaling factor used when determining how fast the population size moves from the initial population size to population_size.

        generations_until_end_population : int, default=1
            Number of generations until the population size reaches population_size

        generations : int, default=50
            Number of generations to run

        early_stop : int, default=None
            Number of evaluated individuals without improvement before early stopping. Counted across all objectives independently. Triggered when all objectives have not improved by the given number of individuals.

        early_stop_mins : float, default=None
            Number of minutes without improvement before early stopping. All objectives must have failed to improve for the given number of minutes for this to be triggered.

        scorers_early_stop_tol :
            -list of floats
                List of tolerances for each scorer. If the difference between the best score and the current score is less than the tolerance, the individual is considered to have converged.
                If an index of the list is None, that item will not be used for early stopping.
            -int
                If an int is given, it will be used as the tolerance for all objectives.

        other_objectives_early_stop_tol :
            -list of floats
                List of tolerances for each of the other objective functions. If the difference between the best score and the current score is less than the tolerance, the individual is considered to have converged.
                If an index of the list is None, that item will not be used for early stopping.
            -int
                If an int is given, it will be used as the tolerance for all objectives.

        max_time_mins : float, default=None
            Maximum time in minutes to run the optimization. If None or inf, will run until the end of the generations.

        max_eval_time_mins : float, default=10
            Maximum time in minutes to evaluate a single individual. If None or inf, there will be no time limit per evaluation.

        n_jobs : int, default=1
            Number of processes to run in parallel.

        memory_limit : str, default=None
            Memory limit for each job. See the Dask [LocalCluster documentation](https://distributed.dask.org/en/stable/api.html#distributed.Client) for more information.

        client : dask.distributed.Client, default=None
            A dask client to use for parallelization. If not None, this will override the n_jobs and memory_limit parameters. If None, will create a new client with num_workers=n_jobs and memory_limit=memory_limit.

        crossover_probability : float, default=.2
            Probability of generating a new individual by crossover between two individuals.

        mutate_probability : float, default=.7
            Probability of generating a new individual by mutation of one individual.

        mutate_then_crossover_probability : float, default=.05
            Probability of generating a new individual by mutating two individuals followed by crossover.

        crossover_then_mutate_probability : float, default=.05
            Probability of generating a new individual by crossover between two individuals followed by a mutation of the resulting individual.

        survival_selector : function, default=survival_select_NSGA2
            Function to use to select individuals for survival. Must take a matrix of scores and return selected indexes.
            Used to select population_size individuals at the start of each generation for mutation and crossover.

        parent_selector : function, default=tournament_selection_dominated
            Function to use to select pairs of parents for crossover and individuals for mutation. Must take a matrix of scores and return selected indexes.

        budget_range : list [start, end], default=None
            A starting and ending budget to use for the budget scaling.

        budget_scaling : float [0,1], default=0.5
            A scaling factor to use when determining how fast the budget moves from the start to the end budget.

        individuals_until_end_budget : int, default=1
            The number of individuals to evaluate before reaching the max budget.

        stepwise_steps : int, default=5
            The number of staircase steps to take when scaling the budget and population size.

        threshold_evaluation_pruning : list [start, end], default=None
            Starting and ending percentiles to use as a threshold for the evaluation early stopping.
            Values between 0 and 100.

        threshold_evaluation_scaling : float [0,inf), default=0.5
            A scaling factor to use when determining how fast the threshold moves from the start to the end percentile.
            Must be greater than zero. Higher numbers will move the threshold to the end faster.

        min_history_threshold : int, default=0
            The minimum number of previous scores needed before using threshold early stopping.

        selection_evaluation_pruning : list, default=None
            A lower and upper percent of the population size to select each round of CV.
            Values between 0 and 1.

        selection_evaluation_scaling : float, default=0.5
            A scaling factor to use when determining how fast the threshold moves from the start to the end percentile.
            Must be greater than zero. Higher numbers will move the threshold to the end faster.

        n_initial_optimizations : int, default=0
            Number of individuals to optimize before starting the evolution.

        optimization_cv : int
           Number of folds to use for the optuna optimization's internal cross-validation.

        max_optimize_time_seconds : float, default=60*5
            Maximum time to run an optimization

        optimization_steps : int, default=10
            Number of steps per optimization

        warm_start : bool, default=False
            If True, will continue the evolutionary algorithm from the last generation of the previous run.


        verbose : int, default=0
            How much information to print during the optimization process. Higher values include the information from lower values.
            0. nothing
            1. progress bar

            3. best individual
            4. warnings
            >=5. full warnings trace

        random_state : int, None, default=None
            A seed for reproducibility of experiments. This value will be passed to numpy.random.default_rng() to create an instance of the generator to pass to other classes.
            - int
                Will be used to create and lock in a Generator instance with 'numpy.random.default_rng()'
            - None
                Will be used to create a Generator for 'numpy.random.default_rng()' where fresh, unpredictable entropy will be pulled from the OS


        periodic_checkpoint_folder : str, default=None
            Folder to save the population to periodically. If None, no periodic saving will be done.
            If provided, training will resume from this checkpoint.

        callback : tpot.CallBackInterface, default=None
            Callback object. Not implemented

        processes : bool, default=True
            If True, will use multiprocessing to parallelize the optimization process. If False, will use threading.
            True seems to perform better. However, False is required for interactive debugging.

        Attributes
        ----------

        fitted_pipeline_ : GraphPipeline
            A fitted instance of the GraphPipeline that inherits from sklearn BaseEstimator. This is fitted on the full X, y passed to fit.

        evaluated_individuals : A pandas DataFrame containing data for all evaluated individuals in the run.
            Columns:
            - *objective functions : The first few columns correspond to the passed in scorers and objective functions
            - Parents : A tuple containing the indexes of the pipelines used to generate the pipeline of that row. If NaN, this pipeline was generated randomly in the initial population.
            - Variation_Function : Which variation function was used to mutate or crossover the parents. If NaN, this pipeline was generated randomly in the initial population.
            - Individual : The internal representation of the individual that is used during the evolutionary algorithm. This is not an sklearn BaseEstimator.
            - Generation : The generation the pipeline first appeared.
            - Pareto_Front : The nondominated front that this pipeline belongs to. 0 means that its scores are not strictly dominated by any other individual.
                            To save on computation time, the best frontier is updated iteratively each generation.
                            The pipelines in the 0th Pareto front represent the exact best frontier; the pipelines with Pareto front >= 1 are ranked only relative to the other pipelines in the final population.
                            All other pipelines are set to NaN.
            - Instance	: The unfitted GraphPipeline BaseEstimator.
            - *validation objective functions : Objective function scores evaluated on the validation set.
            - Validation_Pareto_Front : The full pareto front calculated on the validation set. This is calculated for all pipelines with Pareto_Front equal to 0. Unlike the Pareto_Front which only calculates the frontier and the final population, the Validation Pareto Front is calculated for all pipelines tested on the validation set.

        pareto_front : The same pandas dataframe as evaluated_individuals, but containing only the pipelines on the Pareto frontier.
        '''

        # sklearn BaseEstimator must have a corresponding attribute for each parameter.
        # These should not be modified once set.

        self.search_space = search_space
        self.scorers = scorers
        self.scorers_weights = scorers_weights
        self.classification = classification
        self.cv = cv
        self.other_objective_functions = other_objective_functions
        self.other_objective_functions_weights = other_objective_functions_weights
        self.objective_function_names = objective_function_names
        self.bigger_is_better = bigger_is_better

        self.export_graphpipeline = export_graphpipeline
        self.memory = memory

        self.categorical_features = categorical_features
        self.preprocessing = preprocessing
        self.validation_strategy = validation_strategy
        self.validation_fraction = validation_fraction
        self.disable_label_encoder = disable_label_encoder
        self.population_size = population_size
        self.initial_population_size = initial_population_size

        self.early_stop = early_stop
        self.early_stop_mins = early_stop_mins
        self.scorers_early_stop_tol = scorers_early_stop_tol
        self.other_objectives_early_stop_tol = other_objectives_early_stop_tol
        self.max_time_mins = max_time_mins
        self.max_eval_time_mins = max_eval_time_mins
        self.n_jobs= n_jobs
        self.memory_limit = memory_limit
        self.client = client

        self.crossover_probability = crossover_probability
        self.mutate_probability = mutate_probability
        self.mutate_then_crossover_probability= mutate_then_crossover_probability
        self.crossover_then_mutate_probability= crossover_then_mutate_probability
        self.survival_selector=survival_selector
        self.parent_selector=parent_selector
        self.budget_range = budget_range
        self.budget_scaling = budget_scaling
        self.individuals_until_end_budget = individuals_until_end_budget
        self.stepwise_steps = stepwise_steps

        self.warm_start = warm_start

        self.verbose = verbose
        self.periodic_checkpoint_folder = periodic_checkpoint_folder
        self.callback = callback
        self.processes = processes


        self.scatter = scatter

        self.optuna_optimize_pareto_front = optuna_optimize_pareto_front
        self.optuna_optimize_pareto_front_trials = optuna_optimize_pareto_front_trials
        self.optuna_optimize_pareto_front_timeout = optuna_optimize_pareto_front_timeout
        self.optuna_storage = optuna_storage

        # create random number generator based on rngseed
        self.rng = np.random.default_rng(random_state)
        # save random state passed to us for other functions that use random_state
        self.random_state = random_state

        self.max_evaluated_individuals = max_evaluated_individuals

        #Initialize other used params

        if self.initial_population_size is None:
            self._initial_population_size = self.population_size
        else:
            self._initial_population_size = self.initial_population_size

        if isinstance(self.scorers, str):
            self._scorers = [self.scorers]

        elif callable(self.scorers):
            self._scorers = [self.scorers]
        else:
            self._scorers = self.scorers

        self._scorers = [sklearn.metrics.get_scorer(scoring) for scoring in self._scorers]
        self._scorers_early_stop_tol = self.scorers_early_stop_tol

        self._evolver = tpot.evolvers.SteadyStateEvolver



        self.objective_function_weights = [*scorers_weights, *other_objective_functions_weights]


        if self.objective_function_names is None:
            obj_names = [f.__name__ for f in other_objective_functions]
        else:
            obj_names = self.objective_function_names
        self.objective_names = [f._score_func.__name__ if hasattr(f,"_score_func") else f.__name__ for f in self._scorers] + obj_names


        if not isinstance(self.other_objectives_early_stop_tol, list):
            self._other_objectives_early_stop_tol = [self.other_objectives_early_stop_tol for _ in range(len(self.other_objective_functions))]
        else:
            self._other_objectives_early_stop_tol = self.other_objectives_early_stop_tol

        if not isinstance(self._scorers_early_stop_tol, list):
            self._scorers_early_stop_tol = [self._scorers_early_stop_tol for _ in range(len(self._scorers))]
        else:
            self._scorers_early_stop_tol = self._scorers_early_stop_tol

        self.early_stop_tol = [*self._scorers_early_stop_tol, *self._other_objectives_early_stop_tol]

        self._evolver_instance = None
        self.evaluated_individuals = None

        self.label_encoder_ = None

        set_dask_settings()


    def fit(self, X, y):
        if self.client is not None: #If user passed in a client manually
            _client = self.client
        else:

            if self.verbose >= 5:
                silence_logs = 40
            elif self.verbose >= 4:
                silence_logs = 30
            else:
                silence_logs = 50
            cluster = LocalCluster(n_workers=self.n_jobs, #if no client is passed in and no global client exists, create our own
                    threads_per_worker=1,
                    processes=self.processes,
                    silence_logs=silence_logs,
                    memory_limit=self.memory_limit)
            _client = Client(cluster)


        if self.classification and not self.disable_label_encoder and not check_if_y_is_encoded(y):
            warnings.warn("Labels are not encoded as ints from 0 to N. For compatibility with some classifiers such as sklearn, TPOT has encoded y with the sklearn LabelEncoder. When using pipelines outside the main TPOT estimator class, you can encode the labels with est.label_encoder_")
            self.label_encoder_ = LabelEncoder()
            y = self.label_encoder_.fit_transform(y)

        self.evaluated_individuals = None
        #determine validation strategy
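        # 'auto' picks a strategy from the rows-to-columns ratio of X:
        # < 20 -> 'reshuffled', < 100 -> 'split', otherwise 'none'.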
        if self.validation_strategy == 'auto':
            nrows = X.shape[0]
            ncols = X.shape[1]

            if nrows/ncols < 20:
                validation_strategy = 'reshuffled'
            elif nrows/ncols < 100:
                validation_strategy = 'split'
            else:
                validation_strategy = 'none'
        else:
            validation_strategy = self.validation_strategy

        if validation_strategy == 'split':
            if self.classification:
                X, X_val, y, y_val = train_test_split(X, y, test_size=self.validation_fraction, stratify=y, random_state=self.random_state)
            else:
                X, X_val, y, y_val = train_test_split(X, y, test_size=self.validation_fraction, random_state=self.random_state)


        X_original = X
        y_original = y
        if isinstance(self.cv, int) or isinstance(self.cv, float):
            n_folds = self.cv
        else:
            n_folds = self.cv.get_n_splits(X, y)

        if self.classification:
            X, y = remove_underrepresented_classes(X, y, n_folds)

        if self.preprocessing:
            #X = pd.DataFrame(X)

            if not isinstance(self.preprocessing, bool) and isinstance(self.preprocessing, sklearn.base.BaseEstimator):
                self._preprocessing_pipeline = self.preprocessing

            #TODO: check if there are missing values in X before imputation. If not, don't include imputation in pipeline. Check if there are categorical columns. If not, don't include one hot encoding in pipeline
            else: #if self.preprocessing is True or not a sklearn estimator

                pipeline_steps = []

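                # Default preprocessing: mode-impute categorical columns, mean-impute
                # numeric columns, then one-hot encode the categorical columns.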
                if self.categorical_features is not None: #if categorical features are specified, use those
                    pipeline_steps.append(("impute_categorical", tpot.builtin_modules.ColumnSimpleImputer(self.categorical_features, strategy='most_frequent')))
                    pipeline_steps.append(("impute_numeric", tpot.builtin_modules.ColumnSimpleImputer("numeric", strategy='mean')))
                    pipeline_steps.append(("ColumnOneHotEncoder", tpot.builtin_modules.ColumnOneHotEncoder(self.categorical_features, strategy='most_frequent')))

                else:
                    if isinstance(X, pd.DataFrame):
                        categorical_columns = X.select_dtypes(include=['object']).columns
                        if len(categorical_columns) > 0:
                            pipeline_steps.append(("impute_categorical", tpot.builtin_modules.ColumnSimpleImputer("categorical", strategy='most_frequent')))
                            pipeline_steps.append(("impute_numeric", tpot.builtin_modules.ColumnSimpleImputer("numeric", strategy='mean')))
                            pipeline_steps.append(("ColumnOneHotEncoder", tpot.builtin_modules.ColumnOneHotEncoder("categorical", strategy='most_frequent')))
                        else:
                            pipeline_steps.append(("impute_numeric", tpot.builtin_modules.ColumnSimpleImputer("all", strategy='mean')))
                    else:
                        pipeline_steps.append(("impute_numeric", tpot.builtin_modules.ColumnSimpleImputer("all", strategy='mean')))

                self._preprocessing_pipeline = sklearn.pipeline.Pipeline(pipeline_steps)

            X = self._preprocessing_pipeline.fit_transform(X, y)

        else:
            self._preprocessing_pipeline = None

        #_, y = sklearn.utils.check_X_y(X, y, y_numeric=True)

        #Set up the configuation dictionaries and the search spaces

        #check if self.cv is a number
        if isinstance(self.cv, int) or isinstance(self.cv, float):
            if self.classification:
                self.cv_gen = sklearn.model_selection.StratifiedKFold(n_splits=self.cv, shuffle=True, random_state=self.random_state)
            else:
                self.cv_gen = sklearn.model_selection.KFold(n_splits=self.cv, shuffle=True, random_state=self.random_state)

        else:
            self.cv_gen = sklearn.model_selection.check_cv(self.cv, y, classifier=self.classification)


        n_samples= int(math.floor(X.shape[0]/n_folds))
        n_features=X.shape[1]

        if isinstance(X, pd.DataFrame):
            self.feature_names = X.columns
        else:
            self.feature_names = None




        def objective_function(pipeline_individual,
                                            X,
                                            y,
                                            is_classification=self.classification,
                                            scorers= self._scorers,
                                            cv=self.cv_gen,
                                            other_objective_functions=self.other_objective_functions,
                                            export_graphpipeline=self.export_graphpipeline,
                                            memory=self.memory,
                                            **kwargs):
            return objective_function_generator(
                pipeline_individual,
                X,
                y,
                is_classification=is_classification,
                scorers= scorers,
                cv=cv,
                other_objective_functions=other_objective_functions,
                export_graphpipeline=export_graphpipeline,
                memory=memory,
                **kwargs,
            )

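        # Infinite generator of candidate pipelines sampled from the search space;
        # the steady-state evolver draws new individuals from it as workers free up.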
        def ind_generator(rng):
            rng = np.random.default_rng(rng)
            while True:
                yield self.search_space.generate(rng)



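        # Optionally pre-scatter X and y to the dask workers so each evaluation
        # task references the distributed copies instead of re-serializing the data.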
        if self.scatter:
            X_future = _client.scatter(X)
            y_future = _client.scatter(y)
        else:
            X_future = X
            y_future = y

        #If warm start and we have an evolver instance, use the existing one
        if not(self.warm_start and self._evolver_instance is not None):
            self._evolver_instance = self._evolver(   individual_generator=ind_generator(self.rng),
                                            objective_functions= [objective_function],
                                            objective_function_weights = self.objective_function_weights,
                                            objective_names=self.objective_names,
                                            bigger_is_better = self.bigger_is_better,
                                            population_size= self.population_size,

                                            initial_population_size = self._initial_population_size,
                                            n_jobs=self.n_jobs,
                                            verbose = self.verbose,
                                            max_time_mins =      self.max_time_mins ,
                                            max_eval_time_mins = self.max_eval_time_mins,



                                            periodic_checkpoint_folder = self.periodic_checkpoint_folder,


                                            early_stop_tol = self.early_stop_tol,
                                            early_stop= self.early_stop,
                                            early_stop_mins =  self.early_stop_mins,

                                            budget_range = self.budget_range,
                                            budget_scaling = self.budget_scaling,
                                            individuals_until_end_budget = self.individuals_until_end_budget,


                                            stepwise_steps = self.stepwise_steps,
                                            client = _client,
                                            objective_kwargs = {"X": X_future, "y": y_future},
                                            survival_selector=self.survival_selector,
                                            parent_selector=self.parent_selector,

                                            crossover_probability = self.crossover_probability,
                                            mutate_probability = self.mutate_probability,
                                            mutate_then_crossover_probability= self.mutate_then_crossover_probability,
                                            crossover_then_mutate_probability= self.crossover_then_mutate_probability,


                                            max_evaluated_individuals = self.max_evaluated_individuals,

                                            rng=self.rng,
                                            )


        self._evolver_instance.optimize()
        #self._evolver_instance.population.update_pareto_fronts(self.objective_names, self.objective_function_weights)
        self.make_evaluated_individuals()


        if self.optuna_optimize_pareto_front:
            pareto_front_inds = self.pareto_front['Individual'].values
            all_graphs, all_scores = tpot.individual_representations.graph_pipeline_individual.simple_parallel_optuna(pareto_front_inds,  objective_function, self.objective_function_weights, _client, storage=self.optuna_storage, steps=self.optuna_optimize_pareto_front_trials, verbose=self.verbose, max_eval_time_mins=self.max_eval_time_mins, max_time_mins=self.optuna_optimize_pareto_front_timeout, **{"X": X, "y": y})
            all_scores = tpot.utils.eval_utils.process_scores(all_scores, len(self.objective_function_weights))

            if len(all_graphs) > 0:
                df = pd.DataFrame(np.column_stack((all_graphs, all_scores,np.repeat("Optuna",len(all_graphs)))), columns=["Individual"] + self.objective_names +["Parents"])
                for obj in self.objective_names:
                    df[obj] = df[obj].apply(convert_to_float)

                self.evaluated_individuals = pd.concat([self.evaluated_individuals, df], ignore_index=True)
            else:
                print("WARNING NO OPTUNA TRIALS COMPLETED")

        tpot.utils.get_pareto_frontier(self.evaluated_individuals, column_names=self.objective_names, weights=self.objective_function_weights)

        if validation_strategy == 'reshuffled':
            best_pareto_front_idx = list(self.pareto_front.index)
            best_pareto_front = list(self.pareto_front.loc[best_pareto_front_idx]['Individual'])

            #reshuffle rows
            X, y = sklearn.utils.shuffle(X, y, random_state=self.random_state)

            if self.scatter:
                X_future = _client.scatter(X)
                y_future = _client.scatter(y)
            else:
                X_future = X
                y_future = y

            val_objective_function_list = [lambda   ind,
                                                    X,
                                                    y,
                                                    is_classification=self.classification,
                                                    scorers= self._scorers,
                                                    cv=self.cv_gen,
                                                    other_objective_functions=self.other_objective_functions,
                                                    export_graphpipeline=self.export_graphpipeline,
                                                    memory=self.memory,

                                                    **kwargs: objective_function_generator(
                                                                                                ind,
                                                                                                X,
                                                                                                y,
                                                                                                is_classification=is_classification,
                                                                                                scorers= scorers,
                                                                                                cv=cv,
                                                                                                other_objective_functions=other_objective_functions,
                                                                                                export_graphpipeline=export_graphpipeline,
                                                                                                memory=memory,
                                                                                                **kwargs,
                                                                                                )]

            objective_kwargs = {"X": X_future, "y": y_future}
            val_scores, start_times, end_times, eval_errors = tpot.utils.eval_utils.parallel_eval_objective_list(best_pareto_front, val_objective_function_list, verbose=self.verbose, max_eval_time_mins=self.max_eval_time_mins, n_expected_columns=len(self.objective_names), client=_client, **objective_kwargs)

            val_objective_names = ['validation_'+name for name in self.objective_names]
            self.objective_names_for_selection = val_objective_names
            self.evaluated_individuals.loc[best_pareto_front_idx,val_objective_names] = val_scores
            self.evaluated_individuals.loc[best_pareto_front_idx,'validation_start_times'] = start_times
            self.evaluated_individuals.loc[best_pareto_front_idx,'validation_end_times'] = end_times
            self.evaluated_individuals.loc[best_pareto_front_idx,'validation_eval_errors'] = eval_errors

            self.evaluated_individuals["Validation_Pareto_Front"] = tpot.utils.get_pareto_frontier(self.evaluated_individuals, column_names=val_objective_names, weights=self.objective_function_weights)
        elif validation_strategy == 'split':


            if self.scatter:
                X_future = _client.scatter(X)
                y_future = _client.scatter(y)
                X_val_future = _client.scatter(X_val)
                y_val_future = _client.scatter(y_val)
            else:
                X_future = X
                y_future = y
                X_val_future = X_val
                y_val_future = y_val

            objective_kwargs = {"X": X_future, "y": y_future, "X_val" : X_val_future, "y_val":y_val_future }

            best_pareto_front_idx = list(self.pareto_front.index)
            best_pareto_front = list(self.pareto_front.loc[best_pareto_front_idx]['Individual'])
            val_objective_function_list = [lambda   ind,
                                                    X,
                                                    y,
                                                    X_val,
                                                    y_val,
                                                    scorers= self._scorers,
                                                    other_objective_functions=self.other_objective_functions,
                                                    export_graphpipeline=self.export_graphpipeline,
                                                    memory=self.memory,
                                                    **kwargs: val_objective_function_generator(
                                                        ind,
                                                        X,
                                                        y,
                                                        X_val,
                                                        y_val,
                                                        scorers= scorers,
                                                        other_objective_functions=other_objective_functions,
                                                        export_graphpipeline=export_graphpipeline,
                                                        memory=memory,
                                                        **kwargs,
                                                        )]

            val_scores, start_times, end_times, eval_errors = tpot.utils.eval_utils.parallel_eval_objective_list(best_pareto_front, val_objective_function_list, verbose=self.verbose, max_eval_time_mins=self.max_eval_time_mins, n_expected_columns=len(self.objective_names), client=_client, **objective_kwargs)



            val_objective_names = ['validation_'+name for name in self.objective_names]
            self.objective_names_for_selection = val_objective_names
            self.evaluated_individuals.loc[best_pareto_front_idx,val_objective_names] = val_scores
            self.evaluated_individuals.loc[best_pareto_front_idx,'validation_start_times'] = start_times
            self.evaluated_individuals.loc[best_pareto_front_idx,'validation_end_times'] = end_times
            self.evaluated_individuals.loc[best_pareto_front_idx,'validation_eval_errors'] = eval_errors

            self.evaluated_individuals["Validation_Pareto_Front"] = tpot.utils.get_pareto_frontier(self.evaluated_individuals, column_names=val_objective_names, weights=self.objective_function_weights)
        else:
            self.objective_names_for_selection = self.objective_names

        val_scores = self.evaluated_individuals[self.evaluated_individuals[self.objective_names_for_selection].isin(["TIMEOUT","INVALID"]).any(axis=1).ne(True)][self.objective_names_for_selection].astype(float)
        weighted_scores = val_scores*self.objective_function_weights

        if self.bigger_is_better:
            best_indices = list(weighted_scores.sort_values(by=self.objective_names_for_selection, ascending=False).index)
        else:
            best_indices = list(weighted_scores.sort_values(by=self.objective_names_for_selection, ascending=True).index)

        for best_idx in best_indices:

            best_individual = self.evaluated_individuals.loc[best_idx]['Individual']
            self.selected_best_score =  self.evaluated_individuals.loc[best_idx]


            #TODO
            #best_individual_pipeline = best_individual.export_pipeline(memory=self.memory, cross_val_predict_cv=self.cross_val_predict_cv)
            if self.export_graphpipeline:
                best_individual_pipeline = best_individual.export_flattened_graphpipeline(memory=self.memory)
            else:
                best_individual_pipeline = best_individual.export_pipeline(memory=self.memory)

            if self.preprocessing:
                self.fitted_pipeline_ = sklearn.pipeline.make_pipeline(sklearn.base.clone(self._preprocessing_pipeline), best_individual_pipeline )
            else:
                self.fitted_pipeline_ = best_individual_pipeline

            try:
                self.fitted_pipeline_.fit(X_original, y_original)
                break
            except Exception as e:
                if self.verbose >= 4:
                    warnings.warn("Final pipeline failed to fit. Rarely, the pipeline might work on the objective function but fail on the full dataset. Generally due to interactions with different features being selected or transformations having different properties. Trying next pipeline")
                    print(e)
                continue


        if self.client is None: #no client was passed in
            #close cluster and client
            # _client.close()
            # cluster.close()
            try:
                _client.shutdown()
                cluster.close()
            except Exception as e:
                print("Error shutting down client and cluster")
                warnings.warn(str(e))

        return self

    def _estimator_has(attr):
        '''Check if we can delegate a method to the underlying estimator.
        Returns a check usable by available_if: the fitted pipeline must exist
        and expose the requested attribute.
        '''
        return  lambda self: (self.fitted_pipeline_ is not None and
            hasattr(self.fitted_pipeline_, attr)
        )






    @available_if(_estimator_has('predict'))
    def predict(self, X, **predict_params):
        check_is_fitted(self)
        #X = check_array(X)
        preds = self.fitted_pipeline_.predict(X,**predict_params)
        if self.classification and self.label_encoder_:
            preds = self.label_encoder_.inverse_transform(preds)

        return preds

    @available_if(_estimator_has('predict_proba'))
    def predict_proba(self, X, **predict_params):
        check_is_fitted(self)
        #X = check_array(X)
        return self.fitted_pipeline_.predict_proba(X,**predict_params)

    @available_if(_estimator_has('decision_function'))
    def decision_function(self, X, **predict_params):
        check_is_fitted(self)
        #X = check_array(X)
        return self.fitted_pipeline_.decision_function(X,**predict_params)

    @available_if(_estimator_has('transform'))
    def transform(self, X, **predict_params):
        check_is_fitted(self)
        #X = check_array(X)
        return self.fitted_pipeline_.transform(X,**predict_params)

    @property
    def classes_(self):
        """The classes labels. Only exist if the last step is a classifier."""

        if self.label_encoder_:
            return self.label_encoder_.classes_
        else:
            return self.fitted_pipeline_.classes_

    @property
    def _estimator_type(self):
        return self.fitted_pipeline_._estimator_type

    def make_evaluated_individuals(self):
        # build the evaluated-individuals dataframe only once per run
        if self.evaluated_individuals is None:
            self.evaluated_individuals  =  self._evolver_instance.population.evaluated_individuals.copy()
            objects = list(self.evaluated_individuals.index)
            object_to_int = dict(zip(objects, range(len(objects))))
            self.evaluated_individuals = self.evaluated_individuals.set_index(self.evaluated_individuals.index.map(object_to_int))
            self.evaluated_individuals['Parents'] = self.evaluated_individuals['Parents'].apply(lambda row: convert_parents_tuples_to_integers(row, object_to_int))

            self.evaluated_individuals["Instance"] = self.evaluated_individuals["Individual"].apply(lambda ind: apply_make_pipeline(ind, preprocessing_pipeline=self._preprocessing_pipeline, export_graphpipeline=self.export_graphpipeline, memory=self.memory))

        return self.evaluated_individuals

    @property
    def pareto_front(self):
        #check if _evolver_instance exists
        if self.evaluated_individuals is None:
            return None
        else:
            if "Pareto_Front" not in self.evaluated_individuals:
                return self.evaluated_individuals
            else:
                return self.evaluated_individuals[self.evaluated_individuals["Pareto_Front"]==1]

classes_ property

The class labels. Only exists if the last step is a classifier.

__init__(search_space, scorers=[], scorers_weights=[], classification=False, cv=10, other_objective_functions=[], other_objective_functions_weights=[], objective_function_names=None, bigger_is_better=True, export_graphpipeline=False, memory=None, categorical_features=None, subsets=None, preprocessing=False, validation_strategy='none', validation_fraction=0.2, disable_label_encoder=False, initial_population_size=50, population_size=50, max_evaluated_individuals=None, early_stop=None, early_stop_mins=None, scorers_early_stop_tol=0.001, other_objectives_early_stop_tol=None, max_time_mins=None, max_eval_time_mins=10, n_jobs=1, memory_limit=None, client=None, crossover_probability=0.2, mutate_probability=0.7, mutate_then_crossover_probability=0.05, crossover_then_mutate_probability=0.05, survival_selector=survival_select_NSGA2, parent_selector=tournament_selection_dominated, budget_range=None, budget_scaling=0.5, individuals_until_end_budget=1, stepwise_steps=5, warm_start=False, verbose=0, periodic_checkpoint_folder=None, callback=None, processes=True, scatter=True, random_state=None, optuna_optimize_pareto_front=False, optuna_optimize_pareto_front_trials=100, optuna_optimize_pareto_front_timeout=60 * 10, optuna_storage='sqlite:///optuna.db')

An sklearn BaseEstimator that uses genetic programming to optimize a pipeline.

Parameters

scorers : (list, scorer), default=[]
    A scorer or list of scorers to be used in the cross-validation process. See https://scikit-learn.cn/stable/modules/model_evaluation.html

scorers_weights : list, default=[]
    A list of weights to be applied to the scorers during the optimization process.

classification : bool, default=False
    If True, the problem is treated as a classification problem; if False, as a regression problem. Used to determine the cross-validation strategy.

cv : (int, cross-validator), default=10
    - (int): Number of folds to use in the cross-validation process. Uses the sklearn.model_selection.KFold cross-validator for regression and StratifiedKFold for classification. In both cases, shuffle is set to True.
    - (sklearn.model_selection.BaseCrossValidator): A cross-validator to use in the cross-validation process.

other_objective_functions : list, default=[]
    A list of other objective functions to apply to the pipeline. Each function takes the graphpipeline estimator as its single argument and returns either a single score or a list of scores.

other_objective_functions_weights : list, default=[]
    A list of weights to be applied to the other objective functions.

objective_function_names : list, default=None
    A list of names to be applied to the objective functions. If None, the names of the objective functions are used.

bigger_is_better : bool, default=True
    If True, the objective functions are maximized; if False, minimized. Use negative weights to reverse the direction.

max_size : int, default=np.inf
    The maximum number of nodes in the generated pipelines.

linear_pipeline : bool, default=False
    If True, the generated pipelines are linear; if False, they are directed acyclic graphs.

root_config_dict : dict, default='auto'
    The configuration dictionary to use for the root node of the model. If 'auto', uses "classifiers" when classification=True, otherwise "regressors".
    - 'selectors' : A selection of sklearn Selector methods.
    - 'classifiers' : A selection of sklearn Classifier methods.
    - 'regressors' : A selection of sklearn Regressor methods.
    - 'transformers' : A selection of sklearn Transformer methods.
    - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
    - 'passthrough' : A node that simply passes its input through. Useful for passing raw inputs to inner nodes.
    - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf. Subsets are set with the subsets parameter.
    - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
    - 'MDR' : Includes MDR.
    - 'ContinuousMDR' : Includes ContinuousMDR.
    - 'genetic encoders' : Includes the Genetic Encoder methods used in AutoQTL.
    - 'FeatureEncodingFrequencySelector' : Includes the FeatureEncodingFrequencySelector method used in AutoQTL.
    - list : A list of strings from the above options; the corresponding methods are included in the configuration dictionary.

inner_config_dict : dict, default=["selectors", "transformers"]
    The configuration dictionary to use for the inner nodes of the generated model. Accepts the same options as root_config_dict, plus:
    - None : If None and max_depth>1, root_config_dict is used for the inner nodes as well.

leaf_config_dict : dict, default=None
    The configuration dictionary to use for the leaf nodes of the model. If set, leaf nodes must come from this dictionary; otherwise leaf nodes are generated from root_config_dict. Accepts the same options as root_config_dict, plus:
    - None : If None, a leaf is not required (i.e. the pipeline can be a single root node); leaf nodes are generated from inner_config_dict.

categorical_features : list or None, default=None
    Categorical columns to impute and/or one-hot encode during the preprocessing step. Used only if preprocessing is not False.
    - None : TPOT automatically treats object columns of a pandas dataframe as categorical for one-hot encoding in preprocessing.
    - List of categorical features : If X is a dataframe, a list of column names; if X is a numpy array, a list of column indices.

memory : Memory object or str, default=None
    If supplied, the pipeline caches each transformer after calling fit with joblib.Memory. This avoids recomputing fitted transformers within a pipeline when the parameters and input data are identical to another fitted pipeline during the optimization process.
    - String 'auto' : TPOT uses memory caching with a temporary directory and cleans it up upon shutdown.
    - String path to a caching directory : TPOT uses the provided directory for caching and does NOT clean it up upon shutdown. If the directory does not exist, TPOT creates it.
    - Memory object : TPOT uses the joblib.Memory instance for caching and does NOT clean the caching directory up upon shutdown.
    - None : TPOT does not use memory caching.

preprocessing : bool or BaseEstimator/Pipeline, default=False
    EXPERIMENTAL. A pipeline used to preprocess the data before cross-validation.
    - bool : If True, a default preprocessing pipeline is used.
    - Pipeline : If a pipeline instance is given, it is used as the preprocessing pipeline.

validation_strategy : str, default='none'
    EXPERIMENTAL. The validation strategy used to select the final pipeline from the population. TPOT may overfit the cross-validation score; a second validation set can be used to select the final pipeline.
    - 'auto' : Automatically determine the validation strategy based on the dataset shape.
    - 'reshuffled' : Use the same data for cross-validation and final validation, but with different fold splits. This is the default for small datasets.
    - 'split' : Use a separate validation set for final validation; the data is split according to validation_fraction. This is the default for medium datasets.
    - 'none' : Do not use a separate validation set for final validation; select based on the original cross-validation score. This is the default for large datasets.

validation_fraction : float, default=0.2
    EXPERIMENTAL. The fraction of the dataset to use for the validation set when validation_strategy is 'split'. Must be between 0 and 1.

disable_label_encoder : bool, default=False
    If True, TPOT checks whether the target needs to be relabeled as sequential ints from 0 to N, which is required for XGBoost compatibility. If the labels need encoding, TPOT encodes them with sklearn.preprocessing.LabelEncoder; the encoder is accessible via the self.label_encoder_ attribute. If False, no additional label encoder is used.

population_size : int, default=50
    Size of the population.

initial_population_size : int, default=None
    Size of the initial population. If None, population_size is used.

population_scaling : int, default=0.5
    Scaling factor used when determining how fast the threshold moves from the start to the end percentile.

generations_until_end_population : int, default=1
    Number of generations until the population size reaches population_size.

generations : int, default=50
    Number of generations to run.

early_stop : int, default=None
    Number of evaluated individuals without improvement before early stopping. Counted across all objectives independently; triggered only when all objectives have failed to improve within the given number of individuals.

early_stop_mins : float, default=None
    Number of seconds without improvement before early stopping. All objectives must have gone without improvement for the given number of seconds for this to trigger.

scorers_early_stop_tol : float or list, default=0.001
    - list of floats : A tolerance for each scorer. If the difference between the best score and the current score is less than the tolerance, the individual is considered to have converged. If an index of the list is None, that item is not used for early stopping.
    - int : If an int is given, it is used as the tolerance for all objectives.

other_objectives_early_stop_tol : float or list, default=None
    - list of floats : A tolerance for each of the other objective functions. If the difference between the best score and the current score is less than the tolerance, the individual is considered to have converged. If an index of the list is None, that item is not used for early stopping.
    - int : If an int is given, it is used as the tolerance for all objectives.

max_time_mins : float, default=None
    Maximum time, in minutes, to run the optimization. If None or inf, runs until the generations are finished.

max_eval_time_mins : float, default=10
    Maximum time, in minutes, to evaluate a single individual. If None or inf, there is no time limit per evaluation.

n_jobs : int, default=1
    Number of processes to run in parallel.

memory_limit : str, default=None
    Memory limit for each job. See the Dask LocalCluster documentation for more information.

client : dask.distributed.Client, default=None
    A dask client to use for parallelization. If not None, this overrides the n_jobs and memory_limit parameters. If None, a new client is created with num_workers=n_jobs and memory_limit=memory_limit.

crossover_probability : float, default=.2
    Probability of generating a new individual by crossover between two individuals.

mutate_probability : float, default=.7
    Probability of generating a new individual by mutating one individual.

mutate_then_crossover_probability : float, default=.05
    Probability of generating a new individual by mutating two individuals and then crossing them over.

crossover_then_mutate_probability : float, default=.05
    Probability of generating a new individual by crossover between two individuals followed by a mutation of the resulting individual.

survival_selector : function, default=survival_select_NSGA2
    Function used to select individuals for survival. Must take a matrix of scores and return selected indexes. Used to select population_size individuals at the start of each generation for mutation and crossover.

parent_selector : function, default=tournament_selection_dominated
    Function used to select pairs of parents for crossover and individuals for mutation. Must take a matrix of scores and return selected indexes.

budget_range : list [start, end], default=None
    Starting and ending budget to use for budget scaling.

budget_scaling : float [0,1], default=0.5
    Scaling factor used when determining how fast the budget moves from the start to the end budget.

individuals_until_end_budget : int, default=1
    Number of generations to run before reaching the max budget.

stepwise_steps : int, default=5
    Number of staircase steps to take when scaling the budget and population size.

threshold_evaluation_pruning : list [start, end], default=None
    Starting and ending percentiles to use as a threshold for evaluation early stopping. Values between 0 and 100.

threshold_evaluation_scaling : float [0,inf), default=0.5
    Scaling factor used when determining how fast the threshold moves from the start to the end percentile. Must be greater than zero; higher values move the threshold to the end faster.

min_history_threshold : int, default=0
    Minimum number of previous scores required before threshold early stopping is used.

selection_evaluation_pruning : list, default=None
    Lower and upper percent of the population size to select in each round of cross-validation. Values between 0 and 1.

selection_evaluation_scaling : float, default=0.5
    Scaling factor used when determining how fast the threshold moves from the start to the end percentile. Must be greater than zero; higher values move the threshold to the end faster.

n_initial_optimizations : int, default=0
    Number of individuals to optimize before starting the evolution.

optimization_cv : int, required
    Number of folds to use for the optuna optimization's internal cross-validation.

max_optimize_time_seconds : float, default=60*5
    Maximum time, in seconds, to run an optimization.

optimization_steps : int, default=10
    Number of steps per optimization.

warm_start : bool, default=False
    If True, the evolutionary algorithm continues from the last generation of the previous run.

verbose : int, default=0
    How much information to print during the optimization process. Higher values include the information from lower values.
    0. nothing
    1. progress bar
    3. best individual
    4. warnings
    >=5. full warnings trace

random_state : (int, None), default=None
    Seed for reproducibility of experiments. This value is passed to numpy.random.default_rng() to create a generator instance that is passed to other classes.
    - int : Used to create and lock in a Generator instance with numpy.random.default_rng().
    - None : A Generator is created with numpy.random.default_rng(), where fresh, unpredictable entropy is pulled from the OS.

periodic_checkpoint_folder : str, default=None
    Folder to periodically save the population to. If None, no periodic saving is done. If provided, training resumes from this checkpoint.

callback : tpot.CallBackInterface, default=None
    Callback object. Not implemented.

processes : bool, default=True
    If True, multiprocessing is used to parallelize the optimization process; if False, threading is used. True seems to perform better, but False is required for interactive debugging.

Attributes

fitted_pipeline_ : GraphPipeline
    A fitted GraphPipeline instance that inherits from sklearn BaseEstimator. This is fitted on the full X, y passed to fit.

evaluated_individuals : pandas dataframe
    A pandas dataframe containing data for all evaluated individuals in the run.
    Columns:
    - *objective functions : The first few columns correspond to the passed-in scorers and other objective functions.
    - Parents : A tuple containing the indexes of the pipelines used to generate the pipeline of that row. If NaN, the pipeline was generated randomly in the initial population.
    - Variation_Function : The variation function used to mutate or crossover the parents. If NaN, the pipeline was generated randomly in the initial population.
    - Individual : The internal representation of the individual used during the evolutionary algorithm. This is not an sklearn BaseEstimator.
    - Generation : The generation the pipeline first appeared in.
    - Pareto_Front : The nondominated front this pipeline belongs to. 0 means its scores are not strictly dominated by any other individual. To save computation time, the best front is updated iteratively each generation. Pipelines with Pareto_Front 0 do represent the exact best front, but pipelines with Pareto_Front >= 1 are only relative to the other pipelines in the final population. All other pipelines are set to NaN.
    - Instance : The unfitted GraphPipeline BaseEstimator.
    - *validation objective functions : Objective function scores evaluated on the validation set.
    - Validation_Pareto_Front : The full pareto front calculated on the validation set. This is calculated for all pipelines with Pareto_Front equal to 0. Unlike Pareto_Front, which is only calculated for the frontier and the final population, the Validation Pareto Front is calculated for all pipelines tested on the validation set.

pareto_front : pandas dataframe
    The same dataframe as evaluated_individuals, but containing only the frontier pareto-front pipelines.
Source code in tpot/tpot_estimator/steady_state_estimator.py
def __init__(self,  
                    search_space,
                    scorers= [],
                    scorers_weights = [],
                    classification = False,
                    cv = 10,
                    other_objective_functions=[], #tpot.objectives.estimator_objective_functions.number_of_nodes_objective],
                    other_objective_functions_weights = [],
                    objective_function_names = None,
                    bigger_is_better = True,


                    export_graphpipeline = False,
                    memory = None,

                    categorical_features = None,
                    subsets = None,
                    preprocessing = False,
                    validation_strategy = "none",
                    validation_fraction = .2,
                    disable_label_encoder = False,

                    initial_population_size = 50,
                    population_size = 50,
                    max_evaluated_individuals = None,



                    early_stop = None,
                    early_stop_mins = None,
                    scorers_early_stop_tol = 0.001,
                    other_objectives_early_stop_tol = None,
                    max_time_mins=None,
                    max_eval_time_mins=10,
                    n_jobs=1,
                    memory_limit = None,
                    client = None,

                    crossover_probability=.2,
                    mutate_probability=.7,
                    mutate_then_crossover_probability=.05,
                    crossover_then_mutate_probability=.05,
                    survival_selector = survival_select_NSGA2,
                    parent_selector = tournament_selection_dominated,
                    budget_range = None,
                    budget_scaling = .5,
                    individuals_until_end_budget = 1,
                    stepwise_steps = 5,

                    warm_start = False,

                    verbose = 0,
                    periodic_checkpoint_folder = None,
                    callback = None,
                    processes = True,

                    scatter = True,

                    # random seed for random number generator (rng)
                    random_state = None,

                    optuna_optimize_pareto_front = False,
                    optuna_optimize_pareto_front_trials = 100,
                    optuna_optimize_pareto_front_timeout = 60*10,
                    optuna_storage = "sqlite:///optuna.db",
                    ):

    '''
    An sklearn BaseEstimator that uses genetic programming to optimize a pipeline.

    Parameters
    ----------

    scorers : (list, scorer)
        A scorer or list of scorers to be used in the cross-validation process.
        see https://scikit-learn.cn/stable/modules/model_evaluation.html

    scorers_weights : list
        A list of weights to be applied to the scorers during the optimization process.

    classification : bool
        If True, the problem is treated as a classification problem. If False, the problem is treated as a regression problem.
        Used to determine the CV strategy.

    cv : int, cross-validator
        - (int): Number of folds to use in the cross-validation process. By default, uses the sklearn.model_selection.KFold cross-validator for regression and StratifiedKFold for classification. In both cases, shuffle is set to True.
        - (sklearn.model_selection.BaseCrossValidator): A cross-validator to use in the cross-validation process.

    other_objective_functions : list, default=[]
        A list of other objective functions to apply to the pipeline. The function takes a single parameter for the graphpipeline estimator and returns either a single score or a list of scores.

    other_objective_functions_weights : list, default=[]
        A list of weights to be applied to the other objective functions.

    objective_function_names : list, default=None
        A list of names to be applied to the objective functions. If None, will use the names of the objective functions.

    bigger_is_better : bool, default=True
        If True, the objective function is maximized. If False, the objective function is minimized. Use negative weights to reverse the direction.


    max_size : int, default=np.inf
        The maximum number of nodes of the pipelines to be generated.

    linear_pipeline : bool, default=False
        If True, the pipelines generated will be linear. If False, the pipelines generated will be directed acyclic graphs.

    root_config_dict : dict, default='auto'
        The configuration dictionary to use for the root node of the model.
        If 'auto', will use "classifiers" if classification=True, else "regressors".
        - 'selectors' : A selection of sklearn Selector methods.
        - 'classifiers' : A selection of sklearn Classifier methods.
        - 'regressors' : A selection of sklearn Regressor methods.
        - 'transformers' : A selection of sklearn Transformer methods.
        - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
        - 'passthrough' : A node that just passes through the input. Useful for passing through raw inputs into inner nodes.
        - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf.
                                    Subsets are set with the subsets parameter.
        - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
        - 'MDR' : Includes MDR.
        - 'ContinuousMDR' : Includes ContinuousMDR.
        - 'genetic encoders' : Includes Genetic Encoder methods as used in AutoQTL.
        - 'FeatureEncodingFrequencySelector': Includes FeatureEncodingFrequencySelector method as used in AutoQTL.
        - list : a list of strings out of the above options to include the corresponding methods in the configuration dictionary.

    inner_config_dict : dict, default=["selectors", "transformers"]
        The configuration dictionary to use for the inner nodes of the model generation.
        Default ["selectors", "transformers"]
        - 'selectors' : A selection of sklearn Selector methods.
        - 'classifiers' : A selection of sklearn Classifier methods.
        - 'regressors' : A selection of sklearn Regressor methods.
        - 'transformers' : A selection of sklearn Transformer methods.
        - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
        - 'passthrough' : A node that just passes through the input. Useful for passing through raw inputs into inner nodes.
        - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf.
                                    Subsets are set with the subsets parameter.
        - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
        - 'MDR' : Includes MDR.
        - 'ContinuousMDR' : Includes ContinuousMDR.
        - 'genetic encoders' : Includes Genetic Encoder methods as used in AutoQTL.
        - 'FeatureEncodingFrequencySelector': Includes FeatureEncodingFrequencySelector method as used in AutoQTL.
        - list : a list of strings out of the above options to include the corresponding methods in the configuration dictionary.
        - None : If None and max_depth>1, the root_config_dict will be used for the inner nodes as well.

    leaf_config_dict : dict, default=None
        The configuration dictionary to use for the leaf node of the model. If set, leaf nodes must be from this dictionary.
        Otherwise leaf nodes will be generated from the root_config_dict.
        Default None
        - 'selectors' : A selection of sklearn Selector methods.
        - 'classifiers' : A selection of sklearn Classifier methods.
        - 'regressors' : A selection of sklearn Regressor methods.
        - 'transformers' : A selection of sklearn Transformer methods.
        - 'arithmetic_transformer' : A selection of sklearn Arithmetic Transformer methods that replicate symbolic classification/regression operators.
        - 'passthrough' : A node that just passes through the input. Useful for passing through raw inputs into inner nodes.
        - 'feature_set_selector' : A selector that pulls out specific subsets of columns from the data. Only well defined as a leaf.
                                    Subsets are set with the subsets parameter.
        - 'skrebate' : Includes ReliefF, SURF, SURFstar, MultiSURF.
        - 'MDR' : Includes MDR.
        - 'ContinuousMDR' : Includes ContinuousMDR.
        - 'genetic encoders' : Includes Genetic Encoder methods as used in AutoQTL.
        - 'FeatureEncodingFrequencySelector': Includes FeatureEncodingFrequencySelector method as used in AutoQTL.
        - list : a list of strings out of the above options to include the corresponding methods in the configuration dictionary.
        - None : If None, a leaf will not be required (i.e. the pipeline can be a single root node). Leaf nodes will be generated from the inner_config_dict.

    categorical_features: list or None
        Categorical columns to impute and/or one hot encode during the preprocessing step. Used only if preprocessing is not False.
        - None : If None, TPOT will automatically use object columns in pandas dataframes as objects for one hot encoding in preprocessing.
        - List of categorical features. If X is a dataframe, this should be a list of column names. If X is a numpy array, this should be a list of column indices


    memory: Memory object or string, default=None
        If supplied, pipeline will cache each transformer after calling fit with joblib.Memory. This feature
        is used to avoid computing the fit transformers within a pipeline if the parameters
        and input data are identical with another fitted pipeline during optimization process.
        - String 'auto':
            TPOT uses memory caching with a temporary directory and cleans it up upon shutdown.
        - String path of a caching directory
            TPOT uses memory caching with the provided directory and TPOT does NOT clean
            the caching directory up upon shutdown. If the directory does not exist, TPOT will
            create it.
        - Memory object:
            TPOT uses the instance of joblib.Memory for memory caching,
            and TPOT does NOT clean the caching directory up upon shutdown.
        - None:
            TPOT does not use memory caching.

    preprocessing : bool or BaseEstimator/Pipeline,
        EXPERIMENTAL
        A pipeline that will be used to preprocess the data before CV.
        - bool : If True, will use a default preprocessing pipeline.
        - Pipeline : If an instance of a pipeline is given, will use that pipeline as the preprocessing pipeline.

    validation_strategy : str, default='none'
        EXPERIMENTAL The validation strategy to use for selecting the final pipeline from the population. TPOT may overfit the cross validation score. A second validation set can be used to select the final pipeline.
        - 'auto' : Automatically determine the validation strategy based on the dataset shape.
        - 'reshuffled' : Use the same data for cross validation and final validation, but with different splits for the folds. This is the default for small datasets.
        - 'split' : Use a separate validation set for final validation. Data will be split according to validation_fraction. This is the default for medium datasets.
        - 'none' : Do not use a separate validation set for final validation. Select based on the original cross-validation score. This is the default for large datasets.

    validation_fraction : float, default=0.2
      EXPERIMENTAL The fraction of the dataset to use for the validation set when validation_strategy is 'split'. Must be between 0 and 1.

    disable_label_encoder : bool, default=False
        If True, TPOT will check if the target needs to be relabeled to be sequential ints from 0 to N. This is necessary for XGBoost compatibility. If the labels need to be encoded, TPOT will use sklearn.preprocessing.LabelEncoder to encode the labels. The encoder can be accessed via the self.label_encoder_ attribute.
        If False, no additional label encoders will be used.

    population_size : int, default=50
        Size of the population

    initial_population_size : int, default=None
        Size of the initial population. If None, population_size will be used.

    population_scaling : int, default=0.5
        Scaling factor to use when determining how fast the threshold moves from the start to the end percentile.

    generations_until_end_population : int, default=1
        Number of generations until the population size reaches population_size

    generations : int, default=50
        Number of generations to run

    early_stop : int, default=None
        Number of evaluated individuals without improvement before early stopping. Counted across all objectives independently. Triggered when all objectives have not improved by the given number of individuals.

    early_stop_mins : float, default=None
        Number of seconds without improvement before early stopping. All objectives must not have improved for the given number of seconds for this to be triggered.

    scorers_early_stop_tol :
        -list of floats
            list of tolerances for each scorer. If the difference between the best score and the current score is less than the tolerance, the individual is considered to have converged
            If an index of the list is None, that item will not be used for early stopping
        -int
            If an int is given, it will be used as the tolerance for all objectives

    other_objectives_early_stop_tol :
        -list of floats
            list of tolerances for each of the other objective function. If the difference between the best score and the current score is less than the tolerance, the individual is considered to have converged
            If an index of the list is None, that item will not be used for early stopping
        -int
            If an int is given, it will be used as the tolerance for all objectives

    max_time_mins : float, default=None
        Maximum time to run the optimization. If None or inf, will run until the end of the generations.

    max_eval_time_mins : float, default=10
        Maximum time to evaluate a single individual. If None or inf, there will be no time limit per evaluation.

    n_jobs : int, default=1
        Number of processes to run in parallel.

    memory_limit : str, default=None
        Memory limit for each job. See Dask [LocalCluster documentation](https://distributed.dask.org.cn/en/stable/api.html#distributed.Client) for more information.

    client : dask.distributed.Client, default=None
        A dask client to use for parallelization. If not None, this will override the n_jobs and memory_limit parameters. If None, will create a new client with num_workers=n_jobs and memory_limit=memory_limit.

    crossover_probability : float, default=.2
        Probability of generating a new individual by crossover between two individuals.

    mutate_probability : float, default=.7
        Probability of generating a new individual by mutating one individual.

    mutate_then_crossover_probability : float, default=.05
        Probability of generating a new individual by mutating two individuals followed by crossover.

    crossover_then_mutate_probability : float, default=.05
        Probability of generating a new individual by crossover between two individuals followed by a mutation of the resulting individual.

    survival_selector : function, default=survival_select_NSGA2
        Function to use to select individuals for survival. Must take a matrix of scores and return selected indexes.
        Used to selected population_size individuals at the start of each generation to use for mutation and crossover.

    parent_selector : function, default=tournament_selection_dominated
        Function to use to select pairs parents for crossover and individuals for mutation. Must take a matrix of scores and return selected indexes.

    budget_range : list [start, end], default=None
        A starting and ending budget to use for the budget scaling.

    budget_scaling float : [0,1], default=0.5
        A scaling factor to use when determining how fast we move the budget from the start to end budget.

    individuals_until_end_budget : int, default=1
        The number of generations to run before reaching the max budget.

    stepwise_steps : int, default=5
        The number of staircase steps to take when scaling the budget and population size.

    threshold_evaluation_pruning : list [start, end], default=None
        starting and ending percentile to use as a threshold for the evaluation early stopping.
        Values between 0 and 100.

    threshold_evaluation_scaling : float [0,inf), default=0.5
        A scaling factor to use when determining how fast the threshold moves from the start to the end percentile.
        Must be greater than zero. Higher numbers will move the threshold to the end faster.

    min_history_threshold : int, default=0
        The minimum number of previous scores needed before using threshold early stopping.

    selection_evaluation_pruning : list, default=None
        A lower and upper percent of the population size to select each round of CV.
        Values between 0 and 1.

    selection_evaluation_scaling : float, default=0.5
        A scaling factor to use when determining how fast the threshold moves from the start to the end percentile.
        Must be greater than zero. Higher numbers will move the threshold to the end faster.

    n_initial_optimizations : int, default=0
        Number of individuals to optimize before starting the evolution.

    optimization_cv : int
       Number of folds to use for the optuna optimization's internal cross-validation.

    max_optimize_time_seconds : float, default=60*5
        Maximum time to run an optimization

    optimization_steps : int, default=10
        Number of steps per optimization

    warm_start : bool, default=False
        If True, will continue the evolutionary algorithm from the last generation of the previous run.


    verbose : int, default=0
        How much information to print during the optimization process. Higher values include the information from lower values.
        0. nothing
        1. progress bar

        3. best individual
        4. warnings
        >=5. full warnings trace

    random_state : int, None, default=None
        A seed for reproducibility of experiments. This value will be passed to numpy.random.default_rng() to create an instance of the generator to pass to other classes.
        - int
            Will be used to create and lock in Generator instance with 'numpy.random.default_rng()'
        - None
            Will be used to create Generator for 'numpy.random.default_rng()' where a fresh, unpredictable entropy will be pulled from the OS


    periodic_checkpoint_folder : str, default=None
        Folder to save the population to periodically. If None, no periodic saving will be done.
        If provided, training will resume from this checkpoint.

    callback : tpot.CallBackInterface, default=None
        Callback object. Not implemented

    processes : bool, default=True
        If True, will use multiprocessing to parallelize the optimization process. If False, will use threading.
        True seems to perform better. However, False is required for interactive debugging.

    Attributes
    ----------

    fitted_pipeline_ : GraphPipeline
        A fitted instance of the GraphPipeline that inherits from sklearn BaseEstimator. This is fitted on the full X, y passed to fit.

    evaluated_individuals : A pandas data frame containing data for all evaluated individuals in the run.
        Columns:
        - *objective functions : The first few columns correspond to the passed in scorers and objective functions
        - Parents : A tuple containing the indexes of the pipelines used to generate the pipeline of that row. If NaN, this pipeline was generated randomly in the initial population.
        - Variation_Function : Which variation function was used to mutate or crossover the parents. If NaN, this pipeline was generated randomly in the initial population.
        - Individual : The internal representation of the individual that is used during the evolutionary algorithm. This is not an sklearn BaseEstimator.
        - Generation : The generation the pipeline first appeared.
        - Pareto_Front	: The nondominated front that this pipeline belongs to. 0 means that its scores are not strictly dominated by any other individual.
                        To save on computational time, the best frontier is updated iteratively each generation.
                        The pipelines with the 0th pareto front do represent the exact best frontier. However, the pipelines with pareto front >= 1 are only in reference to the other pipelines in the final population.
                        All other pipelines are set to NaN.
        - Instance	: The unfitted GraphPipeline BaseEstimator.
        - *validation objective functions : Objective function scores evaluated on the validation set.
        - Validation_Pareto_Front : The full pareto front calculated on the validation set. This is calculated for all pipelines with Pareto_Front equal to 0. Unlike the Pareto_Front which only calculates the frontier and the final population, the Validation Pareto Front is calculated for all pipelines tested on the validation set.

    pareto_front : The same pandas dataframe as evaluated individuals, but containing only the frontier pareto front pipelines.
    '''

    # sklearn BaseEstimator must have a corresponding attribute for each parameter.
    # These should not be modified once set.

    self.search_space = search_space
    self.scorers = scorers
    self.scorers_weights = scorers_weights
    self.classification = classification
    self.cv = cv
    self.other_objective_functions = other_objective_functions
    self.other_objective_functions_weights = other_objective_functions_weights
    self.objective_function_names = objective_function_names
    self.bigger_is_better = bigger_is_better

    self.export_graphpipeline = export_graphpipeline
    self.memory = memory

    self.categorical_features = categorical_features
    self.preprocessing = preprocessing
    self.validation_strategy = validation_strategy
    self.validation_fraction = validation_fraction
    self.disable_label_encoder = disable_label_encoder
    self.population_size = population_size
    self.initial_population_size = initial_population_size

    self.early_stop = early_stop
    self.early_stop_mins = early_stop_mins
    self.scorers_early_stop_tol = scorers_early_stop_tol
    self.other_objectives_early_stop_tol = other_objectives_early_stop_tol
    self.max_time_mins = max_time_mins
    self.max_eval_time_mins = max_eval_time_mins
    self.n_jobs= n_jobs
    self.memory_limit = memory_limit
    self.client = client

    self.crossover_probability = crossover_probability
    self.mutate_probability = mutate_probability
    self.mutate_then_crossover_probability= mutate_then_crossover_probability
    self.crossover_then_mutate_probability= crossover_then_mutate_probability
    self.survival_selector=survival_selector
    self.parent_selector=parent_selector
    self.budget_range = budget_range
    self.budget_scaling = budget_scaling
    self.individuals_until_end_budget = individuals_until_end_budget
    self.stepwise_steps = stepwise_steps

    self.warm_start = warm_start

    self.verbose = verbose
    self.periodic_checkpoint_folder = periodic_checkpoint_folder
    self.callback = callback
    self.processes = processes


    self.scatter = scatter

    self.optuna_optimize_pareto_front = optuna_optimize_pareto_front
    self.optuna_optimize_pareto_front_trials = optuna_optimize_pareto_front_trials
    self.optuna_optimize_pareto_front_timeout = optuna_optimize_pareto_front_timeout
    self.optuna_storage = optuna_storage

    # create random number generator based on rngseed
    self.rng = np.random.default_rng(random_state)
    # save random state passed to us for other functions that use random_state
    self.random_state = random_state

    self.max_evaluated_individuals = max_evaluated_individuals

    #Initialize other used params

    if self.initial_population_size is None:
        self._initial_population_size = self.population_size
    else:
        self._initial_population_size = self.initial_population_size

    if isinstance(self.scorers, str):
        self._scorers = [self.scorers]

    elif callable(self.scorers):
        self._scorers = [self.scorers]
    else:
        self._scorers = self.scorers

    self._scorers = [sklearn.metrics.get_scorer(scoring) for scoring in self._scorers]
    self._scorers_early_stop_tol = self.scorers_early_stop_tol

    self._evolver = tpot.evolvers.SteadyStateEvolver



    self.objective_function_weights = [*scorers_weights, *other_objective_functions_weights]


    if self.objective_function_names is None:
        obj_names = [f.__name__ for f in other_objective_functions]
    else:
        obj_names = self.objective_function_names
    self.objective_names = [f._score_func.__name__ if hasattr(f,"_score_func") else f.__name__ for f in self._scorers] + obj_names


    if not isinstance(self.other_objectives_early_stop_tol, list):
        self._other_objectives_early_stop_tol = [self.other_objectives_early_stop_tol for _ in range(len(self.other_objective_functions))]
    else:
        self._other_objectives_early_stop_tol = self.other_objectives_early_stop_tol

    if not isinstance(self._scorers_early_stop_tol, list):
        self._scorers_early_stop_tol = [self._scorers_early_stop_tol for _ in range(len(self._scorers))]
    else:
        self._scorers_early_stop_tol = self._scorers_early_stop_tol

    self.early_stop_tol = [*self._scorers_early_stop_tol, *self._other_objectives_early_stop_tol]

    self._evolver_instance = None
    self.evaluated_individuals = None

    self.label_encoder_ = None

    set_dask_settings()
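
A minimal usage sketch. The parameter names and the fit/predict interface come from the documentation above; importing the class as tpot.TPOTEstimatorSteadyState and building the search space with tpot.config.get_search_space("classifiers") are assumptions about the installed package layout, so substitute your version's search-space constructor if it differs.

import tpot  # assumes the estimator and config helpers are exposed at the top level
import sklearn.datasets
from sklearn.model_selection import train_test_split

X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

est = tpot.TPOTEstimatorSteadyState(
    search_space=tpot.config.get_search_space("classifiers"),  # assumed config helper
    scorers=["roc_auc"],            # sklearn scorer name
    scorers_weights=[1],            # maximize ROC AUC
    classification=True,
    max_eval_time_mins=2,           # cap the time spent on any single pipeline
    max_evaluated_individuals=100,  # steady-state budget, in evaluations
    n_jobs=4,
    verbose=1,
)
est.fit(X_train, y_train)
print(est.fitted_pipeline_)     # best pipeline, refit on the full training data
print(est.predict(X_test)[:5])  # delegated to the fitted pipeline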

apply_make_pipeline(ind, preprocessing_pipeline=None, export_graphpipeline=False, **pipeline_kwargs)

A helper function to create a column of sklearn pipelines from the tpot Individual class.

Parameters

ind : tpot.SklearnIndividual, required
    The individual to convert to a pipeline.
preprocessing_pipeline : sklearn.pipeline.Pipeline, default=None
    The preprocessing pipeline to include before the individual's pipeline.
export_graphpipeline : bool, default=False
    Force the pipeline to be exported as a graph pipeline. Flattens all nested pipelines, FeatureUnions, and GraphPipelines into a single GraphPipeline.
pipeline_kwargs : dict, default={}
    Keyword arguments to pass to the export_pipeline or export_flattened_graphpipeline method.

Returns

sklearn estimator

Source code in tpot/tpot_estimator/estimator_utils.py
def apply_make_pipeline(ind, preprocessing_pipeline=None, export_graphpipeline=False, **pipeline_kwargs):
    """
    Helper function to create a column of sklearn pipelines from the tpot individual class.

    Parameters
    ----------
    ind: tpot.SklearnIndividual
        The individual to convert to a pipeline.
    preprocessing_pipeline: sklearn.pipeline.Pipeline, optional
        The preprocessing pipeline to include before the individual's pipeline.
    export_graphpipeline: bool, default=False
        Force the pipeline to be exported as a graph pipeline. Flattens all nested pipelines, FeatureUnions, and GraphPipelines into a single GraphPipeline.
    pipeline_kwargs: dict
        Keyword arguments to pass to the export_pipeline or export_flattened_graphpipeline method.

    Returns
    -------
    sklearn estimator
    """

    try:

        if export_graphpipeline:
            est = ind.export_flattened_graphpipeline(**pipeline_kwargs)
        else:
            est = ind.export_pipeline(**pipeline_kwargs)


        if preprocessing_pipeline is None:
            return est
        else:
            return sklearn.pipeline.make_pipeline(sklearn.base.clone(preprocessing_pipeline), est)
    except Exception:
        return None
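
A small sketch of how this helper behaves. MockIndividual is a hypothetical stand-in for the tpot individual class, whose export_pipeline() simply returns a plain scikit-learn estimator:

from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

class MockIndividual:  # hypothetical stand-in, not a TPOT class
    def export_pipeline(self):
        return LogisticRegression(max_iter=1000)

est = apply_make_pipeline(MockIndividual())
# est is the bare LogisticRegression

est_pre = apply_make_pipeline(MockIndividual(), preprocessing_pipeline=StandardScaler())
# est_pre is make_pipeline(clone(StandardScaler()), LogisticRegression(...))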

check_if_y_is_encoded(y)

Checks whether the target y is composed of sequential ints from 0 to N. XGBoost requires the target to be encoded in this way.

Parameters

y : np.ndarray, required
    The target vector.

Returns

bool
    True if the target is encoded as sequential ints from 0 to N, False otherwise.

Source code in tpot/tpot_estimator/estimator_utils.py
def check_if_y_is_encoded(y):
    '''
    Checks if the target y is composed of sequential ints from 0 to N.
    XGBoost requires the target to be encoded in this way.

    Parameters
    ----------
    y: np.ndarray
        The target vector.

    Returns
    -------
    bool
        True if the target is encoded as sequential ints from 0 to N, False otherwise
    '''
    y = sorted(set(y))
    return all(i == j for i, j in enumerate(y))
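
A few illustrative calls, based directly on the source above:

check_if_y_is_encoded([0, 1, 2, 2])  # True: unique labels are 0, 1, 2
check_if_y_is_encoded([1, 2, 3])     # False: does not start at 0
check_if_y_is_encoded([0, 2])        # False: gap in the sequence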

convert_parents_tuples_to_integers(row, object_to_int)

A helper function to convert the parent rows into integers representing the index of the parent in the population.

The original pandas dataframe uses a custom index for the parents. This function converts the custom index to an integer index for easier manipulation by end users.

Parameters

row : list, np.ndarray, or tuple, required
    The row to convert.
object_to_int : dict, required
    A dictionary mapping each object to an integer index.

Returns

tuple
    The row with the custom index converted to an integer index.

Source code in tpot/tpot_estimator/estimator_utils.py
def convert_parents_tuples_to_integers(row, object_to_int):
    """
    Helper function to convert the parent rows into integers representing the index of the parent in the population.

    The original pandas dataframe uses a custom index for the parents. This function converts the custom index to an integer index for easier manipulation by end users.

    Parameters
    ----------
    row: list, np.ndarray, tuple
        The row to convert.
    object_to_int: dict
        A dictionary mapping the object to an integer index.

    Returns 
    -------
    tuple
        The row with the custom index converted to an integer index.
    """
    if isinstance(row, (list, np.ndarray, tuple)):
        return tuple(object_to_int[obj] for obj in row)
    else:
        return np.nan
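
For example (the keys here are hypothetical stand-ins for the population's custom index objects):

object_to_int = {"ind_a": 0, "ind_b": 1}
convert_parents_tuples_to_integers(("ind_a", "ind_b"), object_to_int)  # (0, 1)
convert_parents_tuples_to_integers(float("nan"), object_to_int)        # nan (randomly generated individual)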

cross_val_score_objective(estimator, X, y, scorers, cv, fold=None)

Computes the cross-validated scores for an estimator. Fits the estimator only once per fold, then loops over the scorers to evaluate it.

Parameters

estimator : sklearn.base.BaseEstimator, required
    The estimator to fit and score.
X : np.ndarray or pd.DataFrame, required
    The feature matrix.
y : np.ndarray or pd.Series, required
    The target vector.
scorers : list or scorer, required
    The scorers to use. If a list, loops over the scorers and returns a list of scores. If a single scorer, returns a single score.
cv : sklearn cross-validator, required
    The cross-validator to use, for example sklearn.model_selection.KFold or sklearn.model_selection.StratifiedKFold.
fold : int, default=None
    The fold to return the scores for. If None, returns the mean of all the scores (per scorer). Default is None.

Returns

scores : np.ndarray or float
    The scores for the estimator, per scorer. If fold is None, returns the mean of all the scores (per scorer). Returns a list if multiple scorers are used, otherwise a float for the single scorer.

Source code in tpot/tpot_estimator/cross_val_utils.py
def cross_val_score_objective(estimator, X, y, scorers, cv, fold=None):
    """
    Compute the cross validated scores for an estimator. Only fits the estimator once per fold, and loops over the scorers to evaluate the estimator.

    Parameters
    ----------
    estimator: sklearn.base.BaseEstimator
        The estimator to fit and score.
    X: np.ndarray or pd.DataFrame
        The feature matrix.
    y: np.ndarray or pd.Series
        The target vector.
    scorers: list or scorer
        The scorers to use. 
        If a list, will loop over the scorers and return a list of scores.
        If a single scorer, will return a single score.
    cv: sklearn cross-validator
        The cross-validator to use. For example, sklearn.model_selection.KFold or sklearn.model_selection.StratifiedKFold.
    fold: int, optional
        The fold to return the scores for. If None, will return the mean of all the scores (per scorer). Default is None.

    Returns
    -------
    scores: np.ndarray or float
        The scores for the estimator per scorer. If fold is None, will return the mean of all the scores (per scorer).
        Returns a list if multiple scorers are used, otherwise returns a float for the single scorer.

    """

    #check if scores is not iterable
    if not isinstance(scorers, Iterable): 
        scorers = [scorers]
    scores = []
    if fold is None:
        for train_index, test_index in cv.split(X, y):
            this_fold_estimator = sklearn.base.clone(estimator)
            if isinstance(X, pd.DataFrame) or isinstance(X, pd.Series):
                X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            else:
                X_train, X_test = X[train_index], X[test_index]

            if isinstance(y, pd.DataFrame) or isinstance(y, pd.Series):
                y_train, y_test = y.iloc[train_index], y.iloc[test_index]
            else:
                y_train, y_test = y[train_index], y[test_index]


            start = time.time()
            this_fold_estimator.fit(X_train,y_train)
            duration = time.time() - start

            this_fold_scores = [sklearn.metrics.get_scorer(scorer)(this_fold_estimator, X_test, y_test) for scorer in scorers] 
            scores.append(this_fold_scores)
            del this_fold_estimator
            del X_train
            del X_test
            del y_train
            del y_test


        return np.mean(scores,0)
    else:
        this_fold_estimator = sklearn.base.clone(estimator)
        train_index, test_index = list(cv.split(X, y))[fold]
        if isinstance(X, pd.DataFrame) or isinstance(X, pd.Series):
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
        else:
            X_train, X_test = X[train_index], X[test_index]

        if isinstance(y, pd.DataFrame) or isinstance(y, pd.Series):
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]
        else:
            y_train, y_test = y[train_index], y[test_index]

        start = time.time()
        this_fold_estimator.fit(X_train,y_train)
        duration = time.time() - start
        this_fold_scores = [sklearn.metrics.get_scorer(scorer)(this_fold_estimator, X_test, y_test) for scorer in scorers] 
        return this_fold_scores
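
A usage sketch, assuming only scikit-learn; the dataset and estimator are arbitrary choices for illustration:

import sklearn.datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

mean_scores = cross_val_score_objective(
    LogisticRegression(max_iter=1000), X, y,
    scorers=["accuracy", "roc_auc"],  # each fold is fit once, scored twice
    cv=cv,
)
# mean_scores is a length-2 array: [mean accuracy, mean roc_auc]

fold0_scores = cross_val_score_objective(
    LogisticRegression(max_iter=1000), X, y,
    scorers=["accuracy", "roc_auc"], cv=cv, fold=0,  # scores for fold 0 only
)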

objective_function_generator(pipeline, x, y, scorers, cv, other_objective_functions, step=None, budget=None, is_classification=True, export_graphpipeline=False, **pipeline_kwargs)

Evaluates the pipeline with cross-validation using the scorers, and concatenates the result with the scores of the standalone other objective functions.

Parameters

pipeline : tpot.SklearnIndividual, required
    The individual to evaluate.
x : np.ndarray, required
    The feature matrix.
y : np.ndarray, required
    The target vector.
scorers : list, required
    The scorers to use for cross-validation.
cv : int, float, or sklearn cross-validator, required
    The cross-validator to use, for example sklearn.model_selection.KFold or sklearn.model_selection.StratifiedKFold. If an int, sklearn.model_selection.KFold is used with n_splits=cv.
other_objective_functions : list, required
    A list of standalone objective functions to evaluate the pipeline, with signature obj(pipeline) -> float or obj(pipeline) -> np.ndarray. These functions take the unfitted estimator.
step : int, default=None
    The fold to return the scores for. If None, returns the mean of all the scores (per scorer). Default is None.
budget : float, default=None
    The budget for subsampling the data. If None, the full dataset is used. Default is None. Subsamples budget*len(x) samples.
is_classification : bool, default=True
    If True, the subsampling is stratified. Default is True.
export_graphpipeline : bool, default=False
    Force the pipeline to be exported as a graph pipeline. Flattens all nested sklearn pipelines, FeatureUnions, and GraphPipelines into a single GraphPipeline.
pipeline_kwargs : dict, default={}
    Keyword arguments to pass to the export_pipeline or export_flattened_graphpipeline method.

Returns

np.ndarray
    The concatenated scores for the pipeline. The first len(scorers) elements are the cross-validation scores, and the remaining elements are the standalone objective function scores.

Source code in tpot/tpot_estimator/estimator_utils.py
def objective_function_generator(pipeline, x,y, scorers, cv, other_objective_functions, step=None, budget=None, is_classification=True, export_graphpipeline=False, **pipeline_kwargs):
    """
    Uses cross validation to evaluate the pipeline using the scorers, and concatenates results with scores from standalone other objective functions.

    Parameters
    ----------
    pipeline: tpot.SklearnIndividual
        The individual to evaluate.
    x: np.ndarray
        The feature matrix.
    y: np.ndarray
        The target vector.
    scorers: list
        The scorers to use for cross validation. 
    cv: int, float, or sklearn cross-validator
        The cross-validator to use. For example, sklearn.model_selection.KFold or sklearn.model_selection.StratifiedKFold.
        If an int, will use sklearn.model_selection.KFold with n_splits=cv.
    other_objective_functions: list
        A list of standalone objective functions to evaluate the pipeline. With signature obj(pipeline) -> float. or obj(pipeline) -> np.ndarray
        These functions take in the unfitted estimator.
    step: int, optional
        The fold to return the scores for. If None, will return the mean of all the scores (per scorer). Default is None.
    budget: float, optional
        The budget to subsample the data. If None, will use the full dataset. Default is None.
        Will subsample budget*len(x) samples.
    is_classification: bool, default=True
        If True, will stratify the subsampling. Default is True.
    export_graphpipeline: bool, default=False
        Force the pipeline to be exported as a graph pipeline. Flattens all nested sklearn pipelines, FeatureUnions, and GraphPipelines into a single GraphPipeline.
    pipeline_kwargs: dict
        Keyword arguments to pass to the export_pipeline or export_flattened_graphpipeline method.

    Returns
    -------
    np.ndarray
        The concatenated scores for the pipeline. The first len(scorers) elements are the cross validation scores, and the remaining elements are the standalone objective function scores.

    """

    if export_graphpipeline:
        pipeline = pipeline.export_flattened_graphpipeline(**pipeline_kwargs)
    else:
        pipeline = pipeline.export_pipeline(**pipeline_kwargs)

    if budget is not None and budget < 1:
        if is_classification:
            x,y = sklearn.utils.resample(x,y, stratify=y, n_samples=int(budget*len(x)), replace=False, random_state=1)
        else:
            x,y = sklearn.utils.resample(x,y, n_samples=int(budget*len(x)), replace=False, random_state=1)

        if isinstance(cv, int) or isinstance(cv, float):
            n_splits = cv
        else:
            n_splits = cv.n_splits

    if len(scorers) > 0:
        cv_obj_scores = cross_val_score_objective(sklearn.base.clone(pipeline),x,y,scorers=scorers, cv=cv , fold=step)
    else:
        cv_obj_scores = []

    if other_objective_functions is not None and len(other_objective_functions) >0:
        other_scores = [obj(sklearn.base.clone(pipeline)) for obj in other_objective_functions]
        #flatten
        other_scores = np.array(other_scores).flatten().tolist()
    else:
        other_scores = []

    return np.concatenate([cv_obj_scores,other_scores])
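
A runnable sketch. MockIndividual below is a hypothetical stand-in for the tpot.SklearnIndividual that real callers pass in; its export_pipeline() simply returns a scikit-learn estimator, and the complexity objective is a toy example:

import sklearn.datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

class MockIndividual:  # hypothetical stand-in, not a TPOT class
    def export_pipeline(self):
        return LogisticRegression(max_iter=1000)

X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
scores = objective_function_generator(
    MockIndividual(), X, y,
    scorers=["accuracy"],
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    other_objective_functions=[lambda est: float(len(est.get_params()))],  # toy complexity objective
)
# scores == [mean CV accuracy, number of estimator parameters]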

remove_underrepresented_classes(x, y, min_count)

A helper function that removes classes with fewer than min_count samples from the dataset.

Parameters

x : np.ndarray or pd.DataFrame, required
    The feature matrix.
y : np.ndarray or pd.Series, required
    The target vector.
min_count : int, required
    The minimum number of samples required to keep a class.

Returns

(np.ndarray, np.ndarray)
    The feature matrix and target vector with rows from classes with fewer than min_count samples removed.

Source code in tpot/tpot_estimator/estimator_utils.py
def remove_underrepresented_classes(x, y, min_count):
    """
    Helper function to remove classes with less than min_count samples from the dataset.

    Parameters
    ----------
    x: np.ndarray or pd.DataFrame
        The feature matrix.
    y: np.ndarray or pd.Series
        The target vector.
    min_count: int
        The minimum number of samples to keep a class.

    Returns
    -------
    np.ndarray, np.ndarray
        The feature matrix and target vector with rows from classes with less than min_count samples removed.
    """
    if isinstance(y, (np.ndarray, pd.Series)):
        unique, counts = np.unique(y, return_counts=True)
        if min(counts) >= min_count:
            return x, y
        keep_classes = unique[counts >= min_count]
        mask = np.isin(y, keep_classes)
        x = x[mask]
        y = y[mask]
    elif isinstance(y, pd.DataFrame):
        counts = y.apply(pd.Series.value_counts)
        if min(counts) >= min_count:
            return x, y
        keep_classes = counts.index[counts >= min_count].tolist()
        mask = y.isin(keep_classes).all(axis=1)
        x = x[mask]
        y = y[mask]
    else:
        raise TypeError("y must be a numpy array or a pandas Series/DataFrame")
    return x, y
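
For example:

import numpy as np

X = np.arange(10).reshape(5, 2)
y = np.array([0, 0, 0, 1, 2])
X2, y2 = remove_underrepresented_classes(X, y, min_count=2)
# Classes 1 and 2 each have a single sample, so only the three class-0 rows remain:
# X2.shape == (3, 2), y2.tolist() == [0, 0, 0]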

val_objective_function_generator(pipeline, X_train, y_train, X_test, y_test, scorers, other_objective_functions, export_graphpipeline=False, **pipeline_kwargs)

Trains a pipeline on a training set and evaluates it on a test set using the scorers and the other objective functions.

Parameters

pipeline : tpot.SklearnIndividual, required
    The individual to evaluate.
X_train : np.ndarray, required
    The feature matrix of the training set.
y_train : np.ndarray, required
    The target vector of the training set.
X_test : np.ndarray, required
    The feature matrix of the test set.
y_test : np.ndarray, required
    The target vector of the test set.
scorers : list, required
    The scorers to use for evaluation.
other_objective_functions : list, required
    A list of standalone objective functions to evaluate the pipeline, with signature obj(pipeline) -> float or obj(pipeline) -> np.ndarray. These functions take the unfitted estimator.
export_graphpipeline : bool, default=False
    Force the pipeline to be exported as a graph pipeline. Flattens all nested sklearn pipelines, FeatureUnions, and GraphPipelines into a single GraphPipeline.
pipeline_kwargs : dict, default={}
    Keyword arguments to pass to the export_pipeline or export_flattened_graphpipeline method.

Returns

np.ndarray
    The concatenated scores for the pipeline. The first len(scorers) elements are the test-set scores, and the remaining elements are the standalone objective function scores.

Source code in tpot/tpot_estimator/estimator_utils.py
def val_objective_function_generator(pipeline, X_train, y_train, X_test, y_test, scorers, other_objective_functions, export_graphpipeline=False, **pipeline_kwargs):
    """
    Trains a pipeline on a training set and evaluates it on a test set using the scorers and other objective functions.

    Parameters
    ----------

    pipeline: tpot.SklearnIndividual
        The individual to evaluate.
    X_train: np.ndarray
        The feature matrix of the training set.
    y_train: np.ndarray
        The target vector of the training set.
    X_test: np.ndarray
        The feature matrix of the test set.
    y_test: np.ndarray
        The target vector of the test set.
    scorers: list
        The scorers to use for cross validation.
    other_objective_functions: list
        A list of standalone objective functions to evaluate the pipeline. With signature obj(pipeline) -> float. or obj(pipeline) -> np.ndarray
        These functions take in the unfitted estimator.
    export_graphpipeline: bool, default=False
        Force the pipeline to be exported as a graph pipeline. Flattens all nested sklearn pipelines, FeatureUnions, and GraphPipelines into a single GraphPipeline.
    pipeline_kwargs: dict
        Keyword arguments to pass to the export_pipeline or export_flattened_graphpipeline method.

    Returns
    -------
    np.ndarray
        The concatenated scores for the pipeline. The first len(scorers) elements are the test set scores, and the remaining elements are the standalone objective function scores.


    """

    #subsample the data
    if export_graphpipeline:
        pipeline = pipeline.export_flattened_graphpipeline(**pipeline_kwargs)
    else:
        pipeline = pipeline.export_pipeline(**pipeline_kwargs)

    fitted_pipeline = sklearn.base.clone(pipeline)
    fitted_pipeline.fit(X_train, y_train)

    scores = []  #default to empty so the concatenation below is well defined when no scorers are given
    if len(scorers) > 0:
        scores = [sklearn.metrics.get_scorer(scorer)(fitted_pipeline, X_test, y_test) for scorer in scorers]

    other_scores = []
    if other_objective_functions is not None and len(other_objective_functions) >0:
        other_scores = [obj(sklearn.base.clone(pipeline)) for obj in other_objective_functions]

    return np.concatenate([scores,other_scores])
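
Continuing the MockIndividual sketch from objective_function_generator above (the class and the X, y dataset are the same illustrative assumptions):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scores = val_objective_function_generator(
    MockIndividual(), X_train, y_train, X_test, y_test,
    scorers=["accuracy"], other_objective_functions=[],
)
# scores == [held-out accuracy]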