distributed-ml-spark-connect(Python)


Distributed ML on Spark Connect

This notebook demonstrates how to perform distributed ML using the pyspark.ml.connect module to train Spark ML models and run model inference on Databricks Connect.

Requirements

  • Set up Databricks Connect on your cluster. See Databricks Connect for Python.
  • Databricks Runtime 14.0 ML or above.
  • Cluster access mode of Assigned.
  • Required Python packages: torch and torcheval.
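With those requirements in place, the notebook assumes a Spark session backed by Databricks Connect. As a minimal sketch (assuming the databricks-connect package is installed and your connection details are already configured via a Databricks config profile or environment variables):

```python
# Sketch: create a Spark session over Databricks Connect.
# Assumes databricks-connect is installed and connection details
# (workspace host, token, cluster id) come from your Databricks config
# profile or DATABRICKS_* environment variables.
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()
```

The `spark` object this creates is what the cells below use for `spark.read.load(...)` and for distributed training.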

1. Logistic regression

1.1 Train model and run model prediction

from pyspark.ml.connect.classification import LogisticRegression, LogisticRegressionModel 
lor = LogisticRegression(maxIter=20, learningRate=0.01)                                   
dataset = spark.read.load("dbfs:/weichen/spark_datasets/breast_cancer")
lor_model = lor.fit(dataset)
transformed_dataset = lor_model.transform(dataset)
transformed_dataset.show()
Started distributed training with 1 executor processes
/databricks/python/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:171: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
Progress: train epoch 1 completes, train loss = 5674.142998907301
Progress: train epoch 2 completes, train loss = 1270.0397550794814
Progress: train epoch 3 completes, train loss = 2316.0906439887153
Progress: train epoch 4 completes, train loss = 1535.3013042873806
Progress: train epoch 5 completes, train loss = 410.84776454501684
Progress: train epoch 6 completes, train loss = 1405.2841084798176
Progress: train epoch 7 completes, train loss = 1173.6955676608616
Progress: train epoch 8 completes, train loss = 574.499418258667
Progress: train epoch 9 completes, train loss = 278.6784723069933
Progress: train epoch 10 completes, train loss = 905.0532099405924
Progress: train epoch 11 completes, train loss = 299.3518350389269
Progress: train epoch 12 completes, train loss = 1627.8923000759548
Progress: train epoch 13 completes, train loss = 1040.84999622239
Progress: train epoch 14 completes, train loss = 1101.4966803656685
Progress: train epoch 15 completes, train loss = 1348.2048685285781
Progress: train epoch 16 completes, train loss = 1020.9244626363119
Progress: train epoch 17 completes, train loss = 579.6666518317329
Progress: train epoch 18 completes, train loss = 441.9967024061415
Progress: train epoch 19 completes, train loss = 516.2259618971083
Progress: train epoch 20 completes, train loss = 614.2414143880209
Finished distributed training with 1 executor processes
+--------------------+-----+----------+--------------------+
|            features|label|prediction|         probability|
+--------------------+-----+----------+--------------------+
|[17.99, 10.38, 12...|    0|         0|          [1.0, 0.0]|
|[20.57, 17.77, 13...|    0|         0|          [1.0, 0.0]|
|[19.69, 21.25, 13...|    0|         0|          [1.0, 0.0]|
|[11.42, 20.38, 77...|    0|         1|          [0.0, 1.0]|
|[20.29, 14.34, 13...|    0|         0|          [1.0, 0.0]|
|[12.45, 15.7, 82....|    0|         0|          [1.0, 0.0]|
|[18.25, 19.98, 11...|    0|         0|          [1.0, 0.0]|
|[13.71, 20.83, 90...|    0|         0|          [1.0, 0.0]|
|[13.0, 21.82, 87....|    0|         1|[5.60519385729926...|
|[12.46, 24.04, 83...|    0|         0|          [1.0, 0.0]|
|[16.02, 23.24, 10...|    0|         0|          [1.0, 0.0]|
|[15.78, 17.89, 10...|    0|         0|          [1.0, 0.0]|
|[19.17, 24.8, 132...|    0|         0|          [1.0, 0.0]|
|[15.85, 23.95, 10...|    0|         1|          [0.0, 1.0]|
|[13.73, 22.61, 93...|    0|         1|          [0.0, 1.0]|
|[14.54, 27.54, 96...|    0|         0|          [1.0, 0.0]|
|[14.68, 20.13, 94...|    0|         0|          [1.0, 0.0]|
|[16.13, 20.68, 10...|    0|         0|          [1.0, 0.0]|
|[19.81, 22.15, 13...|    0|         0|          [1.0, 0.0]|
|[13.54, 14.36, 87...|    1|         1|          [0.0, 1.0]|
+--------------------+-----+----------+--------------------+
only showing top 20 rows

1.2 Run local inference

local_dataset = dataset.toPandas()  
lor_model.transform(local_dataset) # transform on local pandas dataset, no spark job spawned.

1.3 Save and load model

You can save and load your model on your local file system or on a cloud storage file system.

# save to local file system
lor_model.saveToLocal("/tmp/weichen/lor/lor_model", overwrite=True)
loaded_model = LogisticRegressionModel.loadFromLocal("/tmp/weichen/lor/lor_model")
# save to cloud storage file system
# a path without a scheme resolves against the default configured
# cloud storage; on an assigned access mode cluster it points to
# `dbfs:/tmp/weichen/lor/lor_model`
lor_model.save("/tmp/weichen/lor/lor_model", overwrite=True)
loaded_model = LogisticRegressionModel.load("/tmp/weichen/lor/lor_model")
%sh

ls /tmp/weichen/lor/lor_model  # check saved model files
LogisticRegressionModel.torch metadata.json
%sh

cat /tmp/weichen/lor/lor_model/metadata.json  # metadata is readable
{"class": "pyspark.ml.connect.classification.LogisticRegressionModel", "timestamp": 1692751793506, "sparkVersion": "3.5.0.dev0", "uid": "LogisticRegression_c2edaa0c245f", "paramMap": {"learningRate": 0.01, "maxIter": 20}, "defaultParamMap": {"batchSize": 32, "featuresCol": "features", "fitIntercept": true, "labelCol": "label", "learningRate": 0.001, "maxIter": 100, "momentum": 0.9, "numTrainWorkers": 1, "predictionCol": "prediction", "probabilityCol": "probability", "seed": 0, "tol": 1e-06}, "type": "spark_connect", "extra": {"num_features": 30, "num_classes": 2}, "core_model_path": "LogisticRegressionModel.torch"}
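Because the metadata is plain JSON, you can also inspect it programmatically without Spark. A small sketch using the standard json module on an abbreviated copy of the metadata printed above:

```python
import json

# Abbreviated copy of the metadata.json content shown above; parsing it
# lets you inspect training params without loading the model itself.
metadata_text = """{
  "class": "pyspark.ml.connect.classification.LogisticRegressionModel",
  "paramMap": {"learningRate": 0.01, "maxIter": 20},
  "extra": {"num_features": 30, "num_classes": 2},
  "core_model_path": "LogisticRegressionModel.torch"
}"""

meta = json.loads(metadata_text)
print(meta["paramMap"]["maxIter"])      # 20
print(meta["extra"]["num_features"])    # 30
```

In practice you would `json.load` the `metadata.json` file from the saved model directory instead of an inline string.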

1.4 Model format

The model format is decoupled from Spark, so you can load the model without Spark. The following shows how to load this model directly using PyTorch.

import torch

torch_model = torch.load("/tmp/weichen/lor/lor_model/LogisticRegressionModel.torch")

2. Evaluator example

from pyspark.ml.connect.evaluation import BinaryClassificationEvaluator 
eva = BinaryClassificationEvaluator(metricName='areaUnderPR')           

aucPR = eva.evaluate(transformed_dataset)
print(f"auc PR: {aucPR}")
auc PR: 0.896164059638977

Evaluate on local dataset

local_transformed_dataset = transformed_dataset.toPandas()
eva.evaluate(local_transformed_dataset)  # evaluate on local dataset, no spark job spawned
0.896164059638977
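The areaUnderPR metric is the area under the precision-recall curve, commonly estimated as average precision. As an illustration of the metric itself (a minimal pure-Python sketch, not pyspark's implementation, which may differ in interpolation details):

```python
def average_precision(labels, scores):
    """Step-wise area under the precision-recall curve:
    the mean of precision@k taken at the rank of each positive."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits, ap = 0, 0.0
    for k, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            ap += hits / k          # precision at this positive's rank
    return ap / sum(labels)

# Positive ranked 1st contributes 1/1; positive ranked 3rd contributes 2/3.
print(average_precision([1, 0, 1], [0.9, 0.8, 0.3]))  # (1 + 2/3) / 2 ≈ 0.8333
```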

3. Pipeline

from pyspark.ml.connect.feature import StandardScaler
from pyspark.ml.connect.pipeline import Pipeline

scaler = StandardScaler(inputCol="features", outputCol="scaled_features")
lorv2 = LogisticRegression(
    maxIter=200, numTrainWorkers=2, learningRate=0.001, featuresCol="scaled_features"
)

pipeline = Pipeline(stages=[scaler, lorv2])
pipeline_model = pipeline.fit(dataset)
Started distributed training with 2 executor processes
(same torch "NumPy array is not writable" UserWarning as above, printed once per worker, elided)
Progress: train epoch 1 completes, train loss = 0.6319397952821519
Progress: train epoch 2 completes, train loss = 0.5052119394143423
Progress: train epoch 3 completes, train loss = 0.39726417263348895
... (epochs 4-197 elided) ...
Progress: train epoch 198 completes, train loss = 0.08108246988720363
Progress: train epoch 199 completes, train loss = 0.08100914624002245
Progress: train epoch 200 completes, train loss = 0.08093640001283751
Finished distributed training with 2 executor processes

Pipeline inference

pipeline_model.transform(dataset).show()
+--------------------+-----+--------------------+----------+--------------------+
|            features|label|     scaled_features|prediction|         probability|
+--------------------+-----+--------------------+----------+--------------------+
|[17.99, 10.38, 12...|    0|[1.09609952943171...|         0|[0.99999916553497...|
|[20.57, 17.77, 13...|    0|[1.82821197373437...|         0|[0.99882179498672...|
|[19.69, 21.25, 13...|    0|[1.57849920203424...|         0|[0.99998509883880...|
|[11.42, 20.38, 77...|    0|[-0.7682333229203...|         0|[0.99682056903839...|
|[20.29, 14.34, 13...|    0|[1.74875791001160...|         0|[0.99892044067382...|
|[12.45, 15.7, 82....|    0|[-0.4759558742259...|         0|[0.85606336593627...|
|[18.25, 19.98, 11...|    0|[1.16987830288857...|         0|[0.99891209602355...|
|[13.71, 20.83, 90...|    0|[-0.1184125874734...|         0|[0.92196530103683...|
|[13.0, 21.82, 87....|    0|[-0.3198853919133...|         0|[0.98955965042114...|
|[12.46, 24.04, 83...|    0|[-0.4731182290929...|         0|[0.99899595975875...|
|[16.02, 23.24, 10...|    0|[0.53708343823938...|         0|[0.79597622156143...|
|[15.78, 17.89, 10...|    0|[0.46897995504843...|         0|[0.99210041761398...|
|[19.17, 24.8, 132...|    0|[1.43094165512052...|         0|[0.99957484006881...|
|[15.85, 23.95, 10...|    0|[0.48884347097913...|         0|[0.64049845933914...|
|[13.73, 22.61, 93...|    0|[-0.1127372972075...|         0|[0.97818976640701...|
|[14.54, 27.54, 96...|    0|[0.11711195856189...|         0|[0.99900609254837...|
|[14.68, 20.13, 94...|    0|[0.15683899042327...|         0|[0.94674462080001...|
|[16.13, 20.68, 10...|    0|[0.56829753470189...|         0|[0.99975126981735...|
|[19.81, 22.15, 13...|    0|[1.61255094362971...|         0|[0.99999868869781...|
|[13.54, 14.36, 87...|    1|[-0.1666525547337...|         1|[0.07583434879779...|
+--------------------+-----+--------------------+----------+--------------------+
only showing top 20 rows
local_test_dataset = dataset.select("features").toPandas()
pipeline_model.transform(local_test_dataset)  # local inference without spark job spawned

Pipeline model format

pipeline_model.saveToLocal("/tmp/weichen/pipeline/model", overwrite=True)
%sh

ls /tmp/weichen/pipeline/model
metadata.json pipeline_stage_0.StandardScalerModel.sklearn.pkl pipeline_stage_1.LogisticRegressionModel.torch
%sh

cat /tmp/weichen/pipeline/model/metadata.json
{"class": "pyspark.ml.connect.pipeline.PipelineModel", "timestamp": 1692751827273, "sparkVersion": "3.5.0.dev0", "uid": "Pipeline_318b62ac900c", "paramMap": {}, "defaultParamMap": {}, "type": "spark_connect", "stages": [{"class": "pyspark.ml.connect.feature.StandardScalerModel", "timestamp": 1692751827273, "sparkVersion": "3.5.0.dev0", "uid": "StandardScaler_c69602abd980", "paramMap": {"inputCol": "features", "outputCol": "scaled_features"}, "defaultParamMap": {"outputCol": "StandardScaler_c69602abd980__output"}, "type": "spark_connect", "core_model_path": "pipeline_stage_0.StandardScalerModel.sklearn.pkl"}, {"class": "pyspark.ml.connect.classification.LogisticRegressionModel", "timestamp": 1692751828905, "sparkVersion": "3.5.0.dev0", "uid": "LogisticRegression_15cbdd4cf993", "paramMap": {"featuresCol": "scaled_features", "learningRate": 0.001, "maxIter": 200, "numTrainWorkers": 2}, "defaultParamMap": {"batchSize": 32, "featuresCol": "features", "fitIntercept": true, "labelCol": "label", "learningRate": 0.001, "maxIter": 100, "momentum": 0.9, "numTrainWorkers": 1, "predictionCol": "prediction", "probabilityCol": "probability", "seed": 0, "tol": 1e-06}, "type": "spark_connect", "extra": {"num_features": 30, "num_classes": 2}, "core_model_path": "pipeline_stage_1.LogisticRegressionModel.torch"}]}

You can load each stage of the saved pipeline model without a Spark dependency.

import pickle

with open("/tmp/weichen/pipeline/model/pipeline_stage_0.StandardScalerModel.sklearn.pkl", "rb") as f:
  # Feature transformers in the newly designed SparkML are saved as
  # sklearn feature transformers, so loading one via pickle yields an
  # sklearn feature transformer. This design lets you use the saved
  # Spark pipeline model in an environment without Spark.
  sk_standard_scaler = pickle.load(f)
import torch

lor_model_in_pipeline = torch.load("/tmp/weichen/pipeline/model/pipeline_stage_1.LogisticRegressionModel.torch")
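The pattern above relies only on pickle and torch being importable. To illustrate the pickle half with no external dependencies, here is a stand-in transformer (TinyScaler is hypothetical, not part of any library) showing that a fitted object round-trips with its learned state intact:

```python
import pickle

# Stand-in for the pickled sklearn StandardScaler stage: any fitted
# transformer that pickles cleanly can be reloaded in a Spark-free
# environment with its fitted state preserved.
class TinyScaler:
    def fit(self, xs):
        self.mean = sum(xs) / len(xs)   # learned state
        return self

    def transform(self, xs):
        return [x - self.mean for x in xs]

blob = pickle.dumps(TinyScaler().fit([1.0, 2.0, 3.0]))
restored = pickle.loads(blob)
print(restored.transform([2.0, 4.0]))   # [0.0, 2.0] -- fitted mean survived
```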

4. Cross validation tuning

from pyspark.ml.connect.feature import StandardScaler
from pyspark.ml.connect.pipeline import Pipeline
from pyspark.ml.tuning import ParamGridBuilder
from pyspark.ml.connect.tuning import CrossValidator, CrossValidatorModel

scaler = StandardScaler(inputCol="features", outputCol="scaled_features")
lorv2 = LogisticRegression(numTrainWorkers=2, featuresCol="scaled_features")
pipeline = Pipeline(stages=[scaler, lorv2])

grid2 = ParamGridBuilder().addGrid(lorv2.maxIter, [2, 200]).build()
cv = CrossValidator(
    estimator=pipeline,
    estimatorParamMaps=grid2,
    parallelism=2,
    evaluator=BinaryClassificationEvaluator(),
)
cv_model = cv.fit(dataset)
transformed_dataset = cv_model.transform(dataset)

transformed_dataset.show()
Started distributed training with 2 executor processes
Started distributed training with 2 executor processes
(same torch "NumPy array is not writable" UserWarning as above, printed once per worker, elided)
Progress: train epoch 1 completes, train loss = 0.6323876976966858
Progress: train epoch 2 completes, train loss = 0.5302272992474693
Progress: train epoch 3 completes, train loss = 0.4248217599732535
... (intermediate epochs elided) ...
Progress: train epoch 76 completes, train loss = 0.10963334781782967
Progress: train epoch 77 completes, train loss = 0.10932819225958415
Progress: train epoch 78 completes, train loss
= 0.10902954850878034 Progress: train epoch 79 completes, train loss = 0.10873718985489436 Progress: train epoch 80 completes, train loss = 0.1084509130035128 Progress: train epoch 81 completes, train loss = 0.108170499759061 Progress: train epoch 82 completes, train loss = 0.1078957776938166 Progress: train epoch 83 completes, train loss = 0.10762655841452735 Progress: train epoch 84 completes, train loss = 0.10736265884978431 Progress: train epoch 85 completes, train loss = 0.10710392679486956 Progress: train epoch 86 completes, train loss = 0.1068501781140055 Progress: train epoch 87 completes, train loss = 0.10660127124616078 Progress: train epoch 88 completes, train loss = 0.10635706782341003 Progress: train epoch 89 completes, train loss = 0.10611741031919207 Progress: train epoch 90 completes, train loss = 0.10588216621960912 Progress: train epoch 91 completes, train loss = 0.10565121471881866 Progress: train epoch 92 completes, train loss = 0.10542441638452667 Progress: train epoch 93 completes, train loss = 0.10520166105457715 Progress: train epoch 94 completes, train loss = 0.10498283218060221 Progress: train epoch 95 completes, train loss = 0.10476781002112798 Progress: train epoch 96 completes, train loss = 0.10455649346113205 Progress: train epoch 97 completes, train loss = 0.1043487840465137 Progress: train epoch 98 completes, train loss = 0.10414457906569753 Progress: train epoch 99 completes, train loss = 0.10394377048526492 Progress: train epoch 100 completes, train loss = 0.10374628858906883 Progress: train epoch 101 completes, train loss = 0.1035520216184003 Progress: train epoch 102 completes, train loss = 0.10336091103298324 Progress: train epoch 103 completes, train loss = 0.10317284826721464 Progress: train epoch 104 completes, train loss = 0.10298777158771243 Progress: train epoch 105 completes, train loss = 0.10280559584498405 Progress: train epoch 106 completes, train loss = 0.1026262417435646 Progress: train epoch 107 completes, train 
loss = 0.10244964222822871 Progress: train epoch 108 completes, train loss = 0.10227573556559426 Progress: train epoch 109 completes, train loss = 0.10210445842572621 Progress: train epoch 110 completes, train loss = 0.10193572086947304 Progress: train epoch 111 completes, train loss = 0.10176948138645717 Progress: train epoch 112 completes, train loss = 0.10160568090421813 Progress: train epoch 113 completes, train loss = 0.10144424704568726 Progress: train epoch 114 completes, train loss = 0.10128513936485563 Progress: train epoch 115 completes, train loss = 0.1011282993214471 Progress: train epoch 116 completes, train loss = 0.10097365613494601 Progress: train epoch 117 completes, train loss = 0.10082117787429265 Progress: train epoch 118 completes, train loss = 0.10067080706357956 Progress: train epoch 119 completes, train loss = 0.10052249633840152 Progress: train epoch 120 completes, train loss = 0.10037620738148689 Progress: train epoch 121 completes, train loss = 0.10023187579853195 Progress: train epoch 122 completes, train loss = 0.10008948296308517 Progress: train epoch 123 completes, train loss = 0.0999489634164742 Progress: train epoch 124 completes, train loss = 0.09981029161385127 Progress: train epoch 125 completes, train loss = 0.0996734191264425 Progress: train epoch 126 completes, train loss = 0.09953830018639565 Progress: train epoch 127 completes, train loss = 0.0994049119097846 Progress: train epoch 128 completes, train loss = 0.09927321385060038 Progress: train epoch 129 completes, train loss = 0.09914315917662211 Progress: train epoch 130 completes, train loss = 0.09901472021426473 Progress: train epoch 131 completes, train loss = 0.09888788099799838 Progress: train epoch 132 completes, train loss = 0.0987625861806529 Progress: train epoch 133 completes, train loss = 0.09863880329898425 Progress: train epoch 134 completes, train loss = 0.09851651319435664 Progress: train epoch 135 completes, train loss = 0.09839567967823573 Progress: train 
epoch 136 completes, train loss = 0.09827627560922078 Progress: train epoch 137 completes, train loss = 0.09815827384591103 Progress: train epoch 138 completes, train loss = 0.09804164724690574 Progress: train epoch 139 completes, train loss = 0.09792635696274894 Progress: train epoch 140 completes, train loss = 0.09781239447849137 Progress: train epoch 141 completes, train loss = 0.09769972094467708 Progress: train epoch 142 completes, train loss = 0.09758831986359187 Progress: train epoch 143 completes, train loss = 0.09747816622257233 Progress: train epoch 144 completes, train loss = 0.09736922809055873 Progress: train epoch 145 completes, train loss = 0.09726148471236229 Progress: train epoch 146 completes, train loss = 0.09715492810521807 Progress: train epoch 147 completes, train loss = 0.0970495172909328 Progress: train epoch 148 completes, train loss = 0.09694523949708257 Progress: train epoch 149 completes, train loss = 0.09684207450066294 Progress: train epoch 150 completes, train loss = 0.09673999835337911 Progress: train epoch 151 completes, train loss = 0.0966390036046505 Progress: train epoch 152 completes, train loss = 0.09653905406594276 Progress: train epoch 153 completes, train loss = 0.09644013164298874 Progress: train epoch 154 completes, train loss = 0.09634222994957652 Progress: train epoch 155 completes, train loss = 0.09624533195580755 Progress: train epoch 156 completes, train loss = 0.09614940147314753 Progress: train epoch 157 completes, train loss = 0.09605443903378078 Progress: train epoch 158 completes, train loss = 0.09596042547907148 Progress: train epoch 159 completes, train loss = 0.09586733632854053 Progress: train epoch 160 completes, train loss = 0.095775163599423 Progress: train epoch 161 completes, train loss = 0.0956838923905577 Progress: train epoch 162 completes, train loss = 0.0955934998180185 Progress: train epoch 163 completes, train loss = 0.09550397151282855 Progress: train epoch 164 completes, train loss = 
0.0954153064106192 Progress: train epoch 165 completes, train loss = 0.09532747630562101 Progress: train epoch 166 completes, train loss = 0.09524047374725342 Progress: train epoch 167 completes, train loss = 0.09515428117343358 Progress: train epoch 168 completes, train loss = 0.09506887782897268 Progress: train epoch 169 completes, train loss = 0.09498427010008267 Progress: train epoch 170 completes, train loss = 0.09490044361778668 Progress: train epoch 171 completes, train loss = 0.09481736857976232 Progress: train epoch 172 completes, train loss = 0.09473503913198199 Progress: train epoch 173 completes, train loss = 0.09465345367789268 Progress: train epoch 174 completes, train loss = 0.09457258667264666 Progress: train epoch 175 completes, train loss = 0.09449243492313794 Progress: train epoch 176 completes, train loss = 0.09441299044660159 Progress: train epoch 177 completes, train loss = 0.0943342282303742 Progress: train epoch 178 completes, train loss = 0.09425616211124829 Progress: train epoch 179 completes, train loss = 0.09417877080185073 Progress: train epoch 180 completes, train loss = 0.09410202556422778 Progress: train epoch 181 completes, train loss = 0.09402592480182648 Progress: train epoch 182 completes, train loss = 0.09395048394799232 Progress: train epoch 183 completes, train loss = 0.09387566149234772 Progress: train epoch 184 completes, train loss = 0.09380146328892026 Progress: train epoch 185 completes, train loss = 0.09372787762965475 Progress: train epoch 186 completes, train loss = 0.09365489919270788 Progress: train epoch 187 completes, train loss = 0.09358251414128713 Progress: train epoch 188 completes, train loss = 0.09351071023515292 Progress: train epoch 189 completes, train loss = 0.09343948534556798 Progress: train epoch 190 completes, train loss = 0.09336883308632034 Progress: train epoch 191 completes, train loss = 0.09329874813556671 Progress: train epoch 192 completes, train loss = 0.09322920707719666 Progress: train epoch 
193 completes, train loss = 0.09316021416868482 Progress: train epoch 194 completes, train loss = 0.09309176302381925 Progress: train epoch 195 completes, train loss = 0.09302382809775216 Progress: train epoch 196 completes, train loss = 0.09295642588819776 Progress: train epoch 197 completes, train loss = 0.0928895425583635 Progress: train epoch 198 completes, train loss = 0.09282315682087626 Progress: train epoch 199 completes, train loss = 0.09275728198034423 Progress: train epoch 200 completes, train loss = 0.09269189887813159 Finished distributed training with 2 executor processes Finished distributed training with 2 executor processes Started distributed training with 2 executor processes Started distributed training with 2 executor processes /databricks/python/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:171: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.) /databricks/python/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:171: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.) 
return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map) return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map) Progress: train epoch 1 completes, train loss = 0.6590341428915659 Progress: train epoch 2 completes, train loss = 0.5829577644666036 Progress: train epoch 3 completes, train loss = 0.4947920888662338 Progress: train epoch 4 completes, train loss = 0.42490673065185547 Progress: train epoch 5 completes, train loss = 0.375466451048851 Progress: train epoch 6 completes, train loss = 0.3409076581398646 Progress: train epoch 7 completes, train loss = 0.31609518826007843 Progress: train epoch 8 completes, train loss = 0.29757701853911084 Progress: train epoch 9 completes, train loss = 0.2832019527753194 Progress: train epoch 10 completes, train loss = 0.2716417983174324 Progress: train epoch 11 completes, train loss = 0.26206406205892563 Progress: train epoch 12 completes, train loss = 0.25393343220154446 Progress: train epoch 13 completes, train loss = 0.24689523875713348 Progress: train epoch 14 completes, train loss = 0.24070714910825095 Progress: train epoch 15 completes, train loss = 0.23519818733135858 Progress: train epoch 16 completes, train loss = 0.23024402062098184 Progress: train epoch 17 completes, train loss = 0.2257517253359159 Progress: train epoch 18 completes, train loss = 0.2216500366727511 Progress: train epoch 19 completes, train loss = 0.2178830479582151 Progress: train epoch 20 completes, train loss = 0.21440602838993073 Progress: train epoch 21 completes, train loss = 0.21118257194757462 Progress: train epoch 22 completes, train loss = 0.20818268011013666 Progress: train epoch 23 completes, train loss = 0.20538119971752167 Progress: train epoch 24 completes, train loss = 0.2027569462855657 Progress: train epoch 25 completes, train loss = 0.2002918248375257 Progress: train epoch 26 completes, train loss = 0.19797028104464212 Progress: train epoch 27 completes, train loss = 
0.195778859158357 Progress: train epoch 28 completes, train loss = 0.19370579222838083 Progress: train epoch 29 completes, train loss = 0.19174082577228546 Progress: train epoch 30 completes, train loss = 0.18987486759821573 Progress: train epoch 31 completes, train loss = 0.1880999058485031 Progress: train epoch 32 completes, train loss = 0.18640878051519394 Progress: train epoch 33 completes, train loss = 0.18479513625303903 Progress: train epoch 34 completes, train loss = 0.18325325598319372 Progress: train epoch 35 completes, train loss = 0.18177799135446548 Progress: train epoch 36 completes, train loss = 0.18036471803983053 Progress: train epoch 37 completes, train loss = 0.17900923391183218 Progress: train epoch 38 completes, train loss = 0.17770773420731226 Progress: train epoch 39 completes, train loss = 0.17645674447218576 Progress: train epoch 40 completes, train loss = 0.17525310317675272 Progress: train epoch 41 completes, train loss = 0.17409391701221466 Progress: train epoch 42 completes, train loss = 0.17297654102245966 Progress: train epoch 43 completes, train loss = 0.17189853390057883 Progress: train epoch 44 completes, train loss = 0.17085765053828558 Progress: train epoch 45 completes, train loss = 0.16985182215770087 Progress: train epoch 46 completes, train loss = 0.16887915382782617 Progress: train epoch 47 completes, train loss = 0.1679378549257914 Progress: train epoch 48 completes, train loss = 0.16702628384033838 Progress: train epoch 49 completes, train loss = 0.1661429355541865 Progress: train epoch 50 completes, train loss = 0.16528637210528055 Progress: train epoch 51 completes, train loss = 0.16445529585083327 Progress: train epoch 52 completes, train loss = 0.1636484501262506 Progress: train epoch 53 completes, train loss = 0.1628646937509378 Progress: train epoch 54 completes, train loss = 0.1621029687424501 Progress: train epoch 55 completes, train loss = 0.1613622506459554 Progress: train epoch 56 completes, train loss = 
0.16064157957832018 Progress: train epoch 57 completes, train loss = 0.1599400949974855 Progress: train epoch 58 completes, train loss = 0.15925694753726324 Progress: train epoch 59 completes, train loss = 0.15859137227137884 Progress: train epoch 60 completes, train loss = 0.1579426055153211 Progress: train epoch 61 completes, train loss = 0.1573099580903848 Progress: train epoch 62 completes, train loss = 0.156692773103714 Progress: train epoch 63 completes, train loss = 0.15609044830004373 Progress: train epoch 64 completes, train loss = 0.15550237769881883 Progress: train epoch 65 completes, train loss = 0.15492800499002138 Progress: train epoch 66 completes, train loss = 0.15436680739124617 Progress: train epoch 67 completes, train loss = 0.15381830061475435 Progress: train epoch 68 completes, train loss = 0.15328198422988257 Progress: train epoch 69 completes, train loss = 0.1527574323117733 Progress: train epoch 70 completes, train loss = 0.15224421521027884 Progress: train epoch 71 completes, train loss = 0.15174192935228348 Progress: train epoch 72 completes, train loss = 0.15125018234054247 Progress: train epoch 73 completes, train loss = 0.1507686140636603 Progress: train epoch 74 completes, train loss = 0.15029686441024145 Progress: train epoch 75 completes, train loss = 0.14983461300532022 Progress: train epoch 76 completes, train loss = 0.14938153450687727 Progress: train epoch 77 completes, train loss = 0.14893733834226927 Progress: train epoch 78 completes, train loss = 0.1485017016530037 Progress: train epoch 79 completes, train loss = 0.14807438353697458 Progress: train epoch 80 completes, train loss = 0.14765511453151703 Progress: train epoch 81 completes, train loss = 0.14724363386631012 Progress: train epoch 82 completes, train loss = 0.14683969815572104 /databricks/python/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:171: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. 
This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.) return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map) /databricks/python/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:171: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.) return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map) Progress: train epoch 83 completes, train loss = 0.1464430664976438 Progress: train epoch 84 completes, train loss = 0.14605354890227318 Progress: train epoch 85 completes, train loss = 0.14567089453339577 Progress: train epoch 86 completes, train loss = 0.14529493699471155 Progress: train epoch 87 completes, train loss = 0.1449254465599855 Progress: train epoch 1 comp *** WARNING: max output size exceeded, skipping output. 
*** gress: train epoch 61 completes, train loss = 0.12282260258992513 Progress: train epoch 62 completes, train loss = 0.12218108276526134 Progress: train epoch 63 completes, train loss = 0.12155614544947942 Progress: train epoch 64 completes, train loss = 0.12094709152976672 Progress: train epoch 65 completes, train loss = 0.12035329515735309 Progress: train epoch 66 completes, train loss = 0.11977413669228554 Progress: train epoch 67 completes, train loss = 0.11920907596747081 Progress: train epoch 68 completes, train loss = 0.11865751941998799 Progress: train epoch 69 completes, train loss = 0.11811899269620578 Progress: train epoch 70 completes, train loss = 0.11759298791488011 Progress: train epoch 71 completes, train loss = 0.11707904810706775 Progress: train epoch 72 completes, train loss = 0.11657672623793285 Progress: train epoch 73 completes, train loss = 0.11608561128377914 Progress: train epoch 74 completes, train loss = 0.11560530339678128 Progress: train epoch 75 completes, train loss = 0.11513541887203853 Progress: train epoch 76 completes, train loss = 0.1146756000816822 Progress: train epoch 77 completes, train loss = 0.11422549436489741 Progress: train epoch 78 completes, train loss = 0.11378478507200877 Progress: train epoch 79 completes, train loss = 0.11335314686099689 Progress: train epoch 80 completes, train loss = 0.11293029164274533 Progress: train epoch 81 completes, train loss = 0.11251593132813771 Progress: train epoch 82 completes, train loss = 0.11210978652040164 Progress: train epoch 83 completes, train loss = 0.11171159520745277 Progress: train epoch 84 completes, train loss = 0.11132111524542172 Progress: train epoch 85 completes, train loss = 0.11093809828162193 Progress: train epoch 86 completes, train loss = 0.11056232576568921 Progress: train epoch 87 completes, train loss = 0.11019356176257133 Progress: train epoch 88 completes, train loss = 0.10983161504069965 Progress: train epoch 89 completes, train loss = 
0.10947626829147339 Progress: train epoch 90 completes, train loss = 0.10912733773390453 Progress: train epoch 91 completes, train loss = 0.10878460233410199 Progress: train epoch 92 completes, train loss = 0.10844794909159343 Progress: train epoch 93 completes, train loss = 0.10811714828014374 Progress: train epoch 94 completes, train loss = 0.10779206206401189 Progress: train epoch 95 completes, train loss = 0.10747251287102699 Progress: train epoch 96 completes, train loss = 0.1071583591401577 Progress: train epoch 97 completes, train loss = 0.10684945558508237 Progress: train epoch 98 completes, train loss = 0.10654566437005997 Progress: train epoch 99 completes, train loss = 0.10624682903289795 Progress: train epoch 100 completes, train loss = 0.10595283408959706 Progress: train epoch 101 completes, train loss = 0.10566354667147 Progress: train epoch 102 completes, train loss = 0.10537883018453915 Progress: train epoch 103 completes, train loss = 0.10509860267241795 Progress: train epoch 104 completes, train loss = 0.10482272505760193 Progress: train epoch 105 completes, train loss = 0.10455108309785525 Progress: train epoch 106 completes, train loss = 0.10428359111150105 Progress: train epoch 107 completes, train loss = 0.10402011250456174 Progress: train epoch 108 completes, train loss = 0.10376057898004849 Progress: train epoch 109 completes, train loss = 0.10350488871335983 Progress: train epoch 110 completes, train loss = 0.10325293987989426 Progress: train epoch 111 completes, train loss = 0.1030046430726846 Progress: train epoch 112 completes, train loss = 0.10275990640123685 Progress: train epoch 113 completes, train loss = 0.10251867274443309 Progress: train epoch 114 completes, train loss = 0.10228082786003749 Progress: train epoch 115 completes, train loss = 0.10204631090164185 Progress: train epoch 116 completes, train loss = 0.10181505605578423 Progress: train epoch 117 completes, train loss = 0.1015869602560997 Progress: train epoch 118 
completes, train loss = 0.1013619676232338 Progress: train epoch 119 completes, train loss = 0.10114000489314397 Progress: train epoch 120 completes, train loss = 0.10092101867000262 Progress: train epoch 121 completes, train loss = 0.10070492948095004 Progress: train epoch 122 completes, train loss = 0.10049167027076085 Progress: train epoch 123 completes, train loss = 0.10028119509418805 Progress: train epoch 124 completes, train loss = 0.10007342447837193 Progress: train epoch 125 completes, train loss = 0.09986831992864609 Progress: train epoch 126 completes, train loss = 0.09966581066449483 Progress: train epoch 127 completes, train loss = 0.0994658563286066 Progress: train epoch 128 completes, train loss = 0.09926838986575603 Progress: train epoch 129 completes, train loss = 0.09907337464392185 Progress: train epoch 130 completes, train loss = 0.09888073677817981 Progress: train epoch 131 completes, train loss = 0.09869045205414295 Progress: train epoch 132 completes, train loss = 0.09850245403746764 Progress: train epoch 133 completes, train loss = 0.09831671416759491 Progress: train epoch 134 completes, train loss = 0.09813317346076171 Progress: train epoch 135 completes, train loss = 0.09795180770258109 Progress: train epoch 136 completes, train loss = 0.09777254362901051 Progress: train epoch 137 completes, train loss = 0.09759535640478134 Progress: train epoch 138 completes, train loss = 0.09742021933197975 Progress: train epoch 139 completes, train loss = 0.09724707777301471 Progress: train epoch 140 completes, train loss = 0.09707589199145635 Progress: train epoch 141 completes, train loss = 0.09690663653115432 Progress: train epoch 142 completes, train loss = 0.0967392586171627 Progress: train epoch 143 completes, train loss = 0.09657374024391174 Progress: train epoch 144 completes, train loss = 0.0964100460211436 Progress: train epoch 145 completes, train loss = 0.09624812876184781 Progress: train epoch 146 completes, train loss = 0.09608796176811059 
Progress: train epoch 147 completes, train loss = 0.09592951151231925 Progress: train epoch 148 completes, train loss = 0.09577275129655997 Progress: train epoch 149 completes, train loss = 0.09561764945586522 Progress: train epoch 150 completes, train loss = 0.09546418177584808 Progress: train epoch 151 completes, train loss = 0.09531231721242268 Progress: train epoch 152 completes, train loss = 0.09516201975444953 Progress: train epoch 153 completes, train loss = 0.09501326208313306 Progress: train epoch 154 completes, train loss = 0.09486602867643039 Progress: train epoch 155 completes, train loss = 0.09472028538584709 Progress: train epoch 156 completes, train loss = 0.09457600737611453 Progress: train epoch 157 completes, train loss = 0.09443316981196404 Progress: train epoch 158 completes, train loss = 0.09429175034165382 Progress: train epoch 159 completes, train loss = 0.09415172847608726 Progress: train epoch 160 completes, train loss = 0.0940130731711785 Progress: train epoch 161 completes, train loss = 0.09387576890488465 Progress: train epoch 162 completes, train loss = 0.09373978711664677 Progress: train epoch 163 completes, train loss = 0.09360510980089505 Progress: train epoch 164 completes, train loss = 0.09347172019382317 Progress: train epoch 165 completes, train loss = 0.0933395754545927 Progress: train epoch 166 completes, train loss = 0.09320868986348312 Progress: train epoch 167 completes, train loss = 0.09307901995877425 Progress: train epoch 168 completes, train loss = 0.0929505533228318 Progress: train epoch 169 completes, train loss = 0.0928232700874408 Progress: train epoch 170 completes, train loss = 0.09269715410967667 Progress: train epoch 171 completes, train loss = 0.0925721786916256 Progress: train epoch 172 completes, train loss = 0.09244834072887897 Progress: train epoch 173 completes, train loss = 0.09232560979823272 Progress: train epoch 174 completes, train loss = 0.09220398403704166 Progress: train epoch 175 completes, train 
loss = 0.09208341750005881 Progress: train epoch 176 completes, train loss = 0.09196392446756363 Progress: train epoch 177 completes, train loss = 0.0918454738954703 Progress: train epoch 178 completes, train loss = 0.09172805647055308 Progress: train epoch 179 completes, train loss = 0.09161164859930675 Progress: train epoch 180 completes, train loss = 0.09149625276525815 Progress: train epoch 181 completes, train loss = 0.09138182364404202 Progress: train epoch 182 completes, train loss = 0.09126837799946468 Progress: train epoch 183 completes, train loss = 0.09115588602920373 Progress: train epoch 184 completes, train loss = 0.09104434214532375 Progress: train epoch 185 completes, train loss = 0.0909337184081475 Progress: train epoch 186 completes, train loss = 0.09082401792208354 Progress: train epoch 187 completes, train loss = 0.09071521647274494 Progress: train epoch 188 completes, train loss = 0.09060732151071231 Progress: train epoch 189 completes, train loss = 0.09050027839839458 Progress: train epoch 190 completes, train loss = 0.09039412004252274 Progress: train epoch 191 completes, train loss = 0.09028880298137665 Progress: train epoch 192 completes, train loss = 0.0901843352864186 Progress: train epoch 193 completes, train loss = 0.09008069212237994 Progress: train epoch 194 completes, train loss = 0.08997787597278754 Progress: train epoch 195 completes, train loss = 0.08987586324413617 Progress: train epoch 196 completes, train loss = 0.08977464710672696 Progress: train epoch 197 completes, train loss = 0.08967422197262447 Progress: train epoch 198 completes, train loss = 0.08957456735273202 Progress: train epoch 199 completes, train loss = 0.08947567641735077 Progress: train epoch 200 completes, train loss = 0.0893775454411904 Finished distributed training with 2 executor processes Finished distributed training with 2 executor processes Started distributed training with 2 executor processes 
/databricks/python/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py:171: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
Progress: train epoch 1 completes, train loss = 0.6319397952821519
Progress: train epoch 2 completes, train loss = 0.5052119394143423
Progress: train epoch 3 completes, train loss = 0.39726417263348895
...
Progress: train epoch 198 completes, train loss = 0.08108246988720363
Progress: train epoch 199 completes, train loss = 0.08100914624002245
Progress: train epoch 200 completes, train loss = 0.08093640001283751
Finished distributed training with 2 executor processes
+--------------------+-----+--------------------+----------+--------------------+
|            features|label|     scaled_features|prediction|         probability|
+--------------------+-----+--------------------+----------+--------------------+
|[17.99, 10.38, 12...|    0|[1.09609952943171...|         0|[0.99999916553497...|
|[20.57, 17.77, 13...|    0|[1.82821197373437...|         0|[0.99882179498672...|
|[19.69, 21.25, 13...|    0|[1.57849920203424...|         0|[0.99998509883880...|
|[11.42, 20.38, 77...|    0|[-0.7682333229203...|         0|[0.99682056903839...|
|[20.29, 14.34, 13...|    0|[1.74875791001160...|         0|[0.99892044067382...|
|[12.45, 15.7, 82....|    0|[-0.4759558742259...|         0|[0.85606336593627...|
|[18.25, 19.98, 11...|    0|[1.16987830288857...|         0|[0.99891209602355...|
|[13.71, 20.83, 90...|    0|[-0.1184125874734...|         0|[0.92196530103683...|
|[13.0, 21.82, 87....|    0|[-0.3198853919133...|         0|[0.98955965042114...|
|[12.46, 24.04, 83...|    0|[-0.4731182290929...|         0|[0.99899595975875...|
|[16.02, 23.24, 10...|    0|[0.53708343823938...|         0|[0.79597622156143...|
|[15.78, 17.89, 10...|    0|[0.46897995504843...|         0|[0.99210041761398...|
|[19.17, 24.8, 132...|    0|[1.43094165512052...|         0|[0.99957484006881...|
|[15.85, 23.95, 10...|    0|[0.48884347097913...|         0|[0.64049845933914...|
|[13.73, 22.61, 93...|    0|[-0.1127372972075...|         0|[0.97818976640701...|
|[14.54, 27.54, 96...|    0|[0.11711195856189...|         0|[0.99900609254837...|
|[14.68, 20.13, 94...|    0|[0.15683899042327...|         0|[0.94674462080001...|
|[16.13, 20.68, 10...|    0|[0.56829753470189...|         0|[0.99975126981735...|
|[19.81, 22.15, 13...|    0|[1.61255094362971...|         0|[0.99999868869781...|
|[13.54, 14.36, 87...|    1|[-0.1666525547337...|         1|[0.07583434879779...|
+--------------------+-----+--------------------+----------+--------------------+
only showing top 20 rows
cv_model.avgMetrics
[0.9231449526537556, 0.9948025498722067]
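`avgMetrics` holds one cross-validated score per entry in the parameter grid the `CrossValidator` was built with, in the same order, and the model it retains is the one with the best average metric. A minimal sketch of that pairing, using the two metric values printed above (the parameter grid here is hypothetical, for illustration only, not the grid used in this notebook):

```python
# cv_model.avgMetrics values from the output above
avg_metrics = [0.9231449526537556, 0.9948025498722067]

# Hypothetical parameter grid: one dict per avgMetrics entry, in the same order
param_grid = [{"maxIter": 20}, {"maxIter": 100}]

# CrossValidator keeps the model whose param map scored the highest average metric
best_index = max(range(len(avg_metrics)), key=avg_metrics.__getitem__)
best_params = param_grid[best_index]
print(best_index, best_params)  # → 1 {'maxIter': 100}
```

So in this run the second parameter combination would be selected, with an average metric of about 0.995.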