📝 Exercise M4.03
Now, we tackle a (relatively) realistic classification problem instead of making a synthetic dataset. We start by loading the Adult Census dataset with the following snippet. For the moment we retain only the numerical features.
```python
import pandas as pd

adult_census = pd.read_csv("../datasets/adult-census.csv")
target = adult_census["class"]
data = adult_census.select_dtypes(["integer", "floating"])
data = data.drop(columns=["education-num"])
data
```
|       | age | capital-gain | capital-loss | hours-per-week |
|-------|-----|--------------|--------------|----------------|
| 0     | 25  | 0            | 0            | 40             |
| 1     | 38  | 0            | 0            | 50             |
| 2     | 28  | 0            | 0            | 40             |
| 3     | 44  | 7688         | 0            | 40             |
| 4     | 18  | 0            | 0            | 30             |
| ...   | ... | ...          | ...          | ...            |
| 48837 | 27  | 0            | 0            | 38             |
| 48838 | 40  | 0            | 0            | 40             |
| 48839 | 58  | 0            | 0            | 40             |
| 48840 | 22  | 0            | 0            | 20             |
| 48841 | 52  | 15024        | 0            | 40             |

48842 rows × 4 columns
We confirm that all the selected features are numerical.
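For instance, a quick check of the column dtypes (a minimal sketch) could look like this:

```python
# Every remaining column should have a numeric dtype
data.dtypes
```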
Define a linear model composed of a `StandardScaler` followed by a `LogisticRegression` with default parameters. Then use a 10-fold cross-validation to estimate its generalization performance in terms of accuracy. Also set `return_estimator=True` to be able to inspect the trained estimators.
```python
# Write your code here.
```
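One possible sketch for this step, using `make_pipeline` and `cross_validate`; the variable name `cv_results_num` is only a convention reused in the later sketches:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Scale the numerical features, then fit a logistic regression
model = make_pipeline(StandardScaler(), LogisticRegression())

# 10-fold cross-validation; keep the fitted estimators for later inspection
cv_results_num = cross_validate(
    model, data, target, cv=10, return_estimator=True
)
cv_results_num["test_score"].mean()
```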
What is the most important feature seen by the logistic regression?
You can use a boxplot to compare the absolute values of the coefficients while also visualizing the variability induced by the cross-validation resampling.
```python
# Write your code here.
```
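A minimal sketch of such a plot, assuming the `cv_results_num` dictionary from the previous sketch is available:

```python
import matplotlib.pyplot as plt

# One row of coefficients per cross-validation fold, one column per feature
coefs = pd.DataFrame(
    [est[-1].coef_[0] for est in cv_results_num["estimator"]],
    columns=data.columns,
)

# Compare the absolute values of the coefficients across folds
coefs.abs().plot.box(vert=False)
plt.xlabel("Absolute value of the coefficient")
plt.show()
```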
Let's now work with both numerical and categorical features. You can reload the Adult Census dataset with the following snippet:
```python
adult_census = pd.read_csv("../datasets/adult-census.csv")
target = adult_census["class"]
data = adult_census.drop(columns=["class", "education-num"])
```
Create a predictive model where:

- The numerical data must be scaled.
- The categorical data must be one-hot encoded; set `min_frequency=0.01` to group categories representing less than 1% of the total samples.
- The predictor is a `LogisticRegression` with default parameters, except that you may need to increase `max_iter`, which is 100 by default.
Use the same 10-fold cross-validation strategy with `return_estimator=True` as above to evaluate the full pipeline, including the feature scaling and encoding preprocessing.
```python
# Write your code here.
```
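A possible sketch, assuming a scikit-learn version recent enough to support `OneHotEncoder(min_frequency=...)`; `max_iter=5_000` is just an arbitrary "large enough" value, and `cv_results_all` is an illustrative variable name:

```python
from sklearn.compose import make_column_transformer, make_column_selector as selector
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

numerical_columns = selector(dtype_include="number")(data)
categorical_columns = selector(dtype_exclude="number")(data)

# One-hot encode the categorical columns (grouping rare categories)
# and scale the numerical columns
preprocessor = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore", min_frequency=0.01), categorical_columns),
    (StandardScaler(), numerical_columns),
)
model = make_pipeline(preprocessor, LogisticRegression(max_iter=5_000))

cv_results_all = cross_validate(
    model, data, target, cv=10, return_estimator=True
)
cv_results_all["test_score"].mean()
```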
By comparing the cross-validation test scores of both models fold-to-fold, count the number of times the model using both numerical and categorical features has a better test score than the model using only numerical features.
```python
# Write your code here.
```
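A sketch of the fold-to-fold comparison, reusing the illustrative `cv_results_num` and `cv_results_all` results from above (with `cv=10` and no shuffling, both models are evaluated on the same folds):

```python
# Element-wise comparison of the per-fold test scores
indicator = cv_results_all["test_score"] > cv_results_num["test_score"]
indicator.sum()
```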
For the following questions, you can copy and paste the following snippet to get the feature names from the column transformer, here named `preprocessor`.
```python
preprocessor.fit(data)
feature_names = (
    preprocessor.named_transformers_["onehotencoder"].get_feature_names_out(
        categorical_columns
    )
).tolist()
feature_names += numerical_columns
feature_names
```
```python
# Write your code here.
```
Notice that there are as many feature names as coefficients in the last step of your predictive pipeline.
Which of the following pairs of features has the greatest impact on the predictions of the logistic regression classifier, based on the absolute magnitude of its coefficients?
```python
# Write your code here.
```
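A possible sketch, reusing `feature_names` from the snippet above and the fitted estimators stored in the illustrative `cv_results_all`; it assumes the encoder produces the same set of columns on every fold:

```python
# One row of coefficients per fold, labeled with the feature names
coefs = pd.DataFrame(
    [est[-1].coef_[0] for est in cv_results_all["estimator"]],
    columns=feature_names,
)

# Rank the features by the mean absolute value of their coefficient across folds
coefs.abs().mean().sort_values(ascending=False).head(10)
```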
Now create a similar pipeline consisting of the same preprocessor as above, followed by a `PolynomialFeatures` step and a logistic regression with `C=0.01` and a large enough `max_iter`. Set `degree=2` and `interaction_only=True` in the feature engineering step. Remember not to include a "bias" feature to avoid introducing a redundancy with the intercept of the subsequent logistic regression.
```python
# Write your code here.
```
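A minimal sketch, reusing the `preprocessor` defined above; `model_interactions` and `max_iter=5_000` are illustrative choices:

```python
from sklearn.preprocessing import PolynomialFeatures

model_interactions = make_pipeline(
    preprocessor,
    # Only multiplicative interactions between pairs of features, no bias column
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(C=0.01, max_iter=5_000),
)
```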
Use the same 10-fold cross-validation strategy as above to evaluate this pipeline with interactions. In this case there is no need to return the estimator, as the number of features generated by the `PolynomialFeatures` step is much too large to visually explore the learned coefficients of the final classifier.
By comparing the cross-validation test scores of both models fold-to-fold, count the number of times the model using multiplicative interactions and both numerical and categorical features has a better test score than the model without interactions.
```python
# Write your code here.
```
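A sketch of this last evaluation and of the fold-to-fold comparison, reusing the illustrative names introduced above:

```python
# No need for return_estimator here: we only compare the per-fold test scores
cv_results_interactions = cross_validate(
    model_interactions, data, target, cv=10, n_jobs=2
)

indicator = (
    cv_results_interactions["test_score"] > cv_results_all["test_score"]
)
indicator.sum()
```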