xgb.importance {xgboost}    R Documentation

Show importance of features in a model

Description

Create a data.table of the most important features of a model.

Usage

xgb.importance(feature_names = NULL, model = NULL, data = NULL,
  label = NULL, target = function(x) ((x + label) == 2))

Arguments

feature_names

names of each feature as a character vector. Can be extracted from a sparse matrix (see example). If model dump already contains feature names, this argument should be NULL.
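As a minimal sketch of the extraction mentioned above (the feature names below are invented for illustration), the names can be read off a sparse matrix with colnames(), using the Matrix package, which produces the dgCMatrix format xgboost consumes:

```r
# A toy sparse matrix with made-up column names, for illustration only.
library(Matrix)

m <- sparseMatrix(i = c(1, 2), j = c(1, 2), x = c(1, 1),
                  dims = c(2, 2),
                  dimnames = list(NULL, c("odor_none", "cap_red")))

feature_names <- colnames(m)
# feature_names is c("odor_none", "cap_red")
```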

model

generated by the xgb.train function.

data

the dataset used for the training step. Will be used with the label parameter for co-occurrence computation. See the Details section for more information. This parameter is optional.

label

the label vector used for the training step. Will be used with the data parameter for co-occurrence computation. See the Details section for more information. This parameter is optional.

target

a function which returns TRUE or 1 when an observation should be counted as a co-occurrence, and FALSE or 0 otherwise. A default function is provided for computing co-occurrences in a binary classification. The target function must take a single parameter: the vector of each important feature after the split condition has been applied, so it contains only 0s and 1s, whatever the original values were. See the Details section for more information. This parameter is optional.
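A minimal base-R sketch of what the default target does, assuming a binary 0/1 label (the vectors below are toy data, not the mushroom dataset): an observation counts as a co-occurrence only when the binarized feature vector and the label both equal 1, i.e. their sum equals 2.

```r
# Toy data for illustration only.
label <- c(1, 0, 1, 1, 0)                 # binary training labels
feature_after_split <- c(1, 1, 0, 1, 0)   # feature after the split condition: 0/1 only

# The default target: TRUE exactly when feature and label are both 1.
target <- function(x) ((x + label) == 2)

co_occurrences <- sum(target(feature_after_split))
# co_occurrences is 2 (observations 1 and 4)
```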

Details

This function is for both linear and tree models.

The function returns a data.table. If you don't provide feature_names, the indices of the features are used instead.

Because the indices are extracted from the model dump (made on the C++ side), they start at 0 (as usual in C++) instead of 1 (as usual in R).
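A hedged sketch of the off-by-one this implies, with invented feature names: add 1 to a dumped index before using it as an R index.

```r
# Hypothetical feature names, for illustration only.
feature_names <- c("odor_none", "cap_red", "gill_size")

# 0-based indices as they appear in the model dump.
dumped_index <- c("2", "0")

# Shift by one to turn them into 1-based R indices.
recovered <- feature_names[as.integer(dumped_index) + 1]
# recovered is c("gill_size", "odor_none")
```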

Co-occurrence count

The gain gives you an indication of how important a feature is in making a branch of a decision tree purer. However, with this information alone, you can't know whether this feature has to be present or absent to get a specific classification. In the example code, you may wonder whether odor=none should be TRUE to avoid eating a mushroom.

Co-occurrence computation is here to help you understand this relation between a predictor and a specific class. It counts how many observations are returned as TRUE by the target function (see the Arguments section). When you execute the example below, there are only 92 of the 3140 observations in the train dataset where a mushroom has no odor and can be eaten safely.
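Reproducing the arithmetic from the paragraph above (the 92 and 3140 are the figures it quotes), the co-occurrence count as a share of the training observations:

```r
# 92 co-occurrences out of 3140 training observations, as stated above.
share <- 92 / 3140
round(share * 100, 2)  # about 2.93 percent
```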

If you need to remember one thing only: unless you want to leave us early, don't eat a mushroom which has no odor :-)

Value

A data.table of the features used in the model, with their average gain (and their weight for boosted tree models).

Examples

data(agaricus.train, package='xgboost')

bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label, max_depth = 2,
               eta = 1, nthread = 2, nrounds = 2, objective = "binary:logistic")

xgb.importance(colnames(agaricus.train$data), model = bst)

# Same thing with co-occurence computation this time
xgb.importance(colnames(agaricus.train$data), model = bst,
               data = agaricus.train$data, label = agaricus.train$label)


[Package xgboost version 0.6-0 Index]