XGBoost
Trains is now ClearML
This documentation applies to the legacy Trains versions. For the latest documentation, see ClearML.
The xgboost_sample.py example demonstrates integrating Trains into code that trains a model on the scikit-learn iris classification dataset, using XGBoost to do the following:
- load a model (xgboost.Booster.load_model)
- save a model (xgboost.Booster.save_model)
- dump a model to JSON or text file (xgboost.Booster.dump_model)
- plot feature importance (xgboost.plot_importance)
- plot a tree (xgboost.plot_tree)
and scikit-learn to score accuracy (sklearn.metrics.accuracy_score).
Trains automatically logs the input model, output model, model checkpoints (snapshots), the feature importance plot, the tree plot, and console output.
When the script runs, it creates an experiment named XGBoost simple example, which is associated with the examples project.
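The experiment name and project are typically declared at the top of a Trains script, in a call resembling the following sketch. The import guard is an assumption for environments where the legacy `trains` package is not installed; ClearML scripts use the `clearml` package instead.

```python
project_name = "examples"
task_name = "XGBoost simple example"

try:
    from trains import Task  # legacy package; newer code imports from clearml
    # Task.init registers the experiment and enables automatic logging.
    task = Task.init(project_name=project_name, task_name=task_name)
except ImportError:
    task = None  # trains not available; the rest of the script still runs
```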
Plots
The feature importance plot and the tree plot appear in RESULTS > PLOTS.
Log
All console output appears in RESULTS > LOG.
Artifacts
Model artifacts associated with the experiment appear in the experiment info panel (in the EXPERIMENTS tab), and in the model info panel (in the MODELS tab).
The experiment info panel shows model tracking, including the model name and design (in this case, no design was stored).
The model info panel contains the model details, including the model design (which is also in the experiment info panel), the class label enumeration, model URL, and framework.