Applied AI Internal Demo Project

Car Emissions Study

This subset (9x series) of notebooks was created for publication in a series of blogposts for Applied AI. They are part of a wider project primarily serving as a coherent collection of my preferred techniques for data preparation & analysis and Bayesian inference in Python.

A set of notebooks designed to investigate car emissions data from the point of view of the Volkswagen Emissions Scandal, which seems to have meaningfully damaged their sales. The motivation is to explore the data and see whether we can spot any unusual behaviour from Volkswagen.

Using data from the UK VCA (Vehicle Certification Agency) Car Fuel and Emissions Information for August 2015. The dataset is available here and is also included in the repo since it's small.

92_FeatureSelectionAndModelEvaluation

Demonstrate linear regression, Lasso regularization for feature selection, and posterior predictive checks

Notes:

  • Python 3.5 project using the latest available PyMC3
  • Developed using the ContinuumIO Anaconda distribution on a MacBook Pro 3GHz i7, 16GB RAM, OSX 10.10.5.
  • If execution becomes unstable or Theano throws weird errors, try clearing the cache (`$> theano-cache clear`) and rerunning the notebook.

Package Requirements (shown as a conda-env YAML):

$> less conda_env_pymc3_examples.yml

name: pymc3_examples
channels:
  - defaults
dependencies:
  - python=3.5
  - jupyter
  - ipywidgets
  - numpy
  - scipy
  - matplotlib=1.4.3
  - pandas
  - scikit-learn
  - seaborn
  - patsy  
  - pip

$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3

Setup

Nothing especially interesting in this Setup section, just loading the packages and data we need

In [66]:
## Interactive magics
%matplotlib inline
%qtconsole --colors=linux
In [67]:
# general packages
import warnings
warnings.filterwarnings('ignore')
from io import StringIO
from collections import OrderedDict
from itertools import combinations

# scientific packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from scipy import optimize
from scipy.stats import norm, laplace, ks_2samp
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV
from sklearn.manifold import TSNE
import statsmodels.api as sm

# pymc3 libraries
import pymc3 as pm
import theano as thno
import theano.tensor as T 

from ipywidgets import interactive, fixed

sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 12, 4
np.random.seed(0)
storename = 'data/store_01.h5'

Local Functions

In [68]:
def custom_describe(df, nrows=3, nfeats=20):
    ''' Conv fn: concat transposed topN rows, numerical desc & dtypes '''

    print(df.shape)
    rndidx = np.random.randint(0,len(df),nrows)
    dfdesc = df.describe().T

    for col in ['mean','std']:
        dfdesc[col] = dfdesc[col].apply(lambda x: np.round(x,2))
 
    dfout = pd.concat((df.iloc[rndidx].T, dfdesc, df.dtypes),axis=1, join='outer')
    dfout = dfout.loc[df.columns.values]
    dfout.rename(columns={0:'dtype'}, inplace=True)
    
    # add count nonNAN, min, max for string cols
    dfout['count'] = df.shape[0] - df.isnull().sum()
    dfout['min'] = df.min().apply(lambda x: x[:6] if type(x) == str else x)
    dfout['max'] = df.max().apply(lambda x: x[:6] if type(x) == str else x)
    
    return dfout.iloc[:nfeats,:]


def plot_tsne(dftsne, ft_num, ft_endog='is_vw'):
    ''' Convenience fn: scatterplot t-sne rep with cat or cont color'''
   
    pal = 'cubehelix' 
    leg = True

    if ft_endog in ft_num:
        pal = 'BuPu'
        leg = False
    
    g = sns.lmplot('x', 'y', dftsne.sort(ft_endog), hue=ft_endog
           ,palette=pal, fit_reg=False, size=7, legend=leg
           ,scatter_kws={'alpha':0.7,'s':100, 'edgecolor':'w', 'lw':0.4})
    _ = g.axes.flat[0].set_title('t-SNE rep colored by {}'.format(ft_endog))


def trace_median(x):
    return pd.Series(np.median(x,0), name='median')


def plot_traces(trcs, retain=1000, varnames=None):
    ''' Convenience fn: plot traces with overlaid means and values '''
    nrows = len(trcs.varnames)
    if varnames is not None:
        nrows = len(varnames)
    ax = pm.traceplot(trcs[-retain:], varnames=varnames, figsize=(12,nrows*1.4)
        ,lines={k: v['mean'] for k, v in 
            pm.df_summary(trcs[-retain:],varnames=varnames).iterrows()})

    for i, mn in enumerate(pm.df_summary(trcs[-retain:], varnames=varnames)['mean']):
        ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'
                    ,xytext=(5,10), textcoords='offset points', rotation=90
                    ,va='bottom', fontsize='large', color='#AA0022')    

Load Data

In [69]:
store = pd.HDFStore(storename)
df = store['/clean']
store.close()
In [70]:
# custom_describe(df)

Prepare Dataset

This dataset allows us to ask whether there are any interesting patterns in NOx emissions for cars made by the Volkswagen group:

  • emissions_nox_mgkm

... according to a handful of exogenous variables:

  • mfr_owner_is_vw, trans, fuel_type, is_tdi
  • engine_capacity, metric_combined, metric_extra_urban, metric_urban_cold, emissions_co_mgkm
Declare feats for use
In [71]:
fts_cat = ['mfr_owner_is_vw','trans','fuel_type','is_tdi']
fts_num = ['metric_combined','metric_extra_urban','metric_urban_cold'
           ,'engine_capacity','emissions_co_mgkm']
ft_endog = 'emissions_nox_mgkm'
Exclude outliers
In [72]:
dfi = df.loc[df['mcd_outlier']==0]
dfi.shape
Out[72]:
(2644, 21)
In [73]:
## also exclude 3 high-value outliers in the endogenous feat

f, ax1d = plt.subplots(1,1,figsize=(12,2))
_ = sns.violinplot(x=ft_endog, data=dfi, ax=ax1d, inner='point')
In [74]:
dfi.loc[dfi[ft_endog]>100,fts_cat+fts_num]
Out[74]:
mfr_owner_is_vw trans fuel_type is_tdi metric_combined metric_extra_urban metric_urban_cold engine_capacity emissions_co_mgkm
1761 False auto diesel False 5.5 4.8 6.5 2143 94
1762 False auto diesel False 4.4 4.4 4.3 2987 248
2072 False manual diesel False 4.8 4.4 5.5 1560 216
In [75]:
dfi = dfi.loc[dfi[ft_endog]<=100].copy()
dfi.shape
Out[75]:
(2641, 21)
Standardize the dataset according to Gelman's 1 / (2 * sd)

Divide by 2 standard deviations in order to put the variance of a normally distributed variable nearer to the variance range of a binary variable. See http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf for more info.
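
In other words, each numeric feature $x$ is mean-centered and divided by twice its sample standard deviation:

$$x' = \frac{x - \bar{x}}{2\,s_{x}}$$

which is exactly what the next cell does with pandas.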

In [76]:
dfs = pd.concat((dfi[ft_endog], dfi[fts_cat]
    ,((dfi[fts_num] - dfi[fts_num].mean(0)) / (2 * dfi[fts_num].std(0)))),1)
# custom_describe(dfs)

Quick Data Exploration

Looking through every part of the dataset is not really the focus here, and indeed I've already pre-processed the data to some degree, adding derived features and assigning outlier flags in a process using the Minimum Covariance Determinant (MinCovDet).

Nevertheless, it's useful for the reader to have an idea of the size and shape of the dataset, so let's take a look.
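
As an aside, here is a minimal sketch of how MinCovDet-based outlier flags might be produced with scikit-learn. The actual mcd_outlier column was created upstream of this notebook, so the feature choice and chi-squared cutoff below are illustrative assumptions, not the original pre-processing.

## Illustrative only: approximate MinCovDet-based outlier flagging
## (the real `mcd_outlier` flags were created upstream of this notebook)
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

X = df[fts_num].dropna().values              # numeric feats only (assumption)
mcd = MinCovDet(random_state=0).fit(X)       # robust location & covariance estimate
d2 = mcd.mahalanobis(X)                      # squared Mahalanobis distances
cutoff = chi2.ppf(0.975, df=X.shape[1])      # 97.5% chi-squared cutoff (assumption)
outlier_flag = (d2 > cutoff).astype(int)     # 1 == flagged as an outlier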

Basic description

In [77]:
custom_describe(dfs, 2)
(2641, 10)
Out[77]:
2631 1662 count mean std min 25% 50% 75% max dtype
emissions_nox_mgkm 30 56 2641 37.27 18.23 1 23 35 52 90 float64
mfr_owner_is_vw False False 2641 0.24 0.43 False 0 0 0 True bool
trans manual auto 2641 NaN NaN auto NaN NaN NaN semiau object
fuel_type petrol diesel 2641 NaN NaN diesel NaN NaN NaN petrol object
is_tdi False False 2641 0.11 0.32 False 0 0 0 True bool
metric_combined 0.0398752 -0.309993 2641 -0.00 0.50 -0.686775 -0.336906 -0.148516 0.17444 2.48895 float64
metric_extra_urban -0.00561065 -0.366042 2641 0.00 0.50 -0.886664 -0.325994 -0.125754 0.194629 2.79774 float64
metric_urban_cold 0.067301 -0.268727 2641 0.00 0.50 -0.67196 -0.319131 -0.151117 0.168109 2.67152 float64
engine_capacity -0.0563211 0.0403511 2641 0.00 0.50 -0.930815 -0.283556 -0.0479873 -0.0396535 2.60105 float64
emissions_co_mgkm -0.150676 -0.307162 2641 0.00 0.50 -0.847749 -0.378292 -0.0902157 0.261877 2.29264 float64

Observe:

  • The dataset is 2641 rows, with 10 features.
  • These are observations of car emissions tests, one row per car.
  • You can read off the basic distributional statistics of the features in the table above. Numeric features have been standardized according to Gelman's 2sd principle.
  • I have selected these particular 10 features to work with. Some are derivatives of original features.

We have the following features:

+ Categoricals:
    + `trans`     - the car transmission, simplified to 'auto', 'semiauto', 'manual'
    + `fuel_type` - the car's power supply, simplified to 'petrol', 'diesel', 'hybrid'

+ Booleans:
    + `mfr_owner_is_vw` - if the parent company of the car manufacturer is Volkswagen
    + `is_tdi`          - (processed feature) if the car engine type is a turbo diesel

+ Numerics:
    + `metric_combined`    - a score for fuel efficiency in combined driving
    + `metric_extra_urban` - a score for fuel efficiency in extra-urban driving
    + `metric_urban_cold`  - a score for fuel efficiency in an urban setting, cold start
    + `emissions_co_mgkm`  - carbon monoxide (CO) emitted, in mg/km

+ Numeric endogenous feature:
    + `emissions_nox_mgkm` - nitrogen oxides (NOx) emitted, in mg/km

For the purposes of this Notebook, the final feature mentioned, emissions_nox_mgkm, will be used as the endogenous / dependent / output feature of the linear models. All other features may be used as exogenous / independent / input features.

1d & 2d Distributions

Count plots of Categorical and Boolean feats
In [78]:
f, ax2d = plt.subplots(2,2, squeeze=False, figsize=(12,6))
for i, ft in enumerate(fts_cat):
    _ = dfs.groupby(ft).size().plot(kind='barh', ax=ax2d[i//2, i%2], title=ft)

Observe:

  • The mfr_owner_is_vw class is surprisingly almost balanced: the Volkswagen group owns a lot of subsidiary brands including Audi, Bentley, Lamborghini, Porsche, Seat, Skoda, and of course, Volkswagen.
  • We might be wary of fuel_type and is_tdi, which are quite unbalanced
Histograms of Numeric feats
In [79]:
ax = dfs[fts_num+[ft_endog]].hist(bins=50, figsize=(12,2*3))

Observe

  • The emissions features look fairly evenly distributed
  • The engine_capacity contains some outliers: some values out beyond +2, likely high-powered cars. Least-squares models may be adversely affected by this.
  • The metric_* features all seem to have the same distribution: perhaps, as their names suggest, they're connected? Let's take a look...
Pairs-plots of metric_* features
In [80]:
g = sns.PairGrid(dfs[['metric_combined','metric_extra_urban','metric_urban_cold']])
_ = g.map_upper(plt.scatter, linewidths=1, edgecolor="w", s=40, alpha=0.5)
_ = g.map_diag(plt.hist)
_ = g.map_lower(sns.kdeplot)

Observe

  • Yes, these three features all appear to be correlated: quite strongly in fact.
  • We will have to account for this in the modelling, which assumes features are independent (orthogonal). A quick correlation check follows below.
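
As a quick sanity check (not part of the original outputs), the pairwise Pearson correlations quantify how strongly the metric_* features move together:

## Illustrative check: pairwise correlations of the metric_* feats
dfs[['metric_combined', 'metric_extra_urban', 'metric_urban_cold']].corr().round(2)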

2d Distributions of features w.r.t Emissions NOx

pandas and seaborn make comparative plotting very easy, so let's take a look at how the original features vary vs the endogenous feature emissions_nox_mgkm

Categorical feats
In [81]:
col_wrap = 2
f, ax2d = plt.subplots(nrows=int(np.ceil(len(fts_cat)/col_wrap)), ncols=col_wrap
                       ,squeeze=False, sharey=False, sharex=True, figsize=(12,8))

for i, ft in enumerate(fts_cat):
    ax = sns.violinplot(x=ft_endog, y=ft, data=dfs, hue=ft, ax=ax2d[i//col_wrap,i%col_wrap]
                    ,saturation=0.8, orient='h')
    ax.legend().set_visible(False) 

plt.tight_layout()

Observe:

  • mfr_owner_is_vw == True appears to have a slightly higher distribution of the endogenous feature emissions_nox_mgkm
  • trans appears fairly mixed: the value auto has a wider variance than manual and semiauto
  • fuel_type is quite interesting: we ought to expect diesel engines to emit more NOx, but there are still some outlying petrol engines that are high emitters. The hybrid class appears bimodal, so we may have to split this out or ignore it in future modelling
  • Similarly, we see that is_tdi == True engines have markedly higher emissions. This parameter is dependent upon, and correlated with, fuel_type == diesel, so again we would have to be careful with any assumptions of linear independence; a quick crosstab below makes the dependence explicit.
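
The following is an illustrative check (not in the original outputs) of how is_tdi co-occurs with fuel_type:

## Illustrative check: co-occurrence of is_tdi and fuel_type
pd.crosstab(dfs['fuel_type'], dfs['is_tdi'])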
Numeric feats
In [82]:
ax = sns.pairplot(dfs, x_vars=fts_num[:3], y_vars=[ft_endog]
            ,size=3, aspect=1, kind="reg"
            ,plot_kws={'scatter_kws':{'alpha':0.3, 's':40, 'lw':1.5, 'edgecolor':'w'}})

ax = sns.pairplot(dfs, x_vars=fts_num[3:], y_vars=[ft_endog]
            ,size=3, aspect=1, kind="reg"
            ,plot_kws={'scatter_kws':{'alpha':0.3, 's':40, 'lw':1.5, 'edgecolor':'w'}})

Observe:

  • I've used seaborn's regression plotting (kind="reg"), which fits and plots a very basic OLS for each facet of the pairplot
  • There appears to be some correlation with emissions_nox_mgkm across all the numeric features, so we ought to at least see something interesting from the linear models

Nd Distribution - tSNE

As I detailed in one of my previous blogposts on t-SNE, this is a technique for non-linear feature reduction: taking datapoints in Nd space and representing them in a 2d manifold.

We then scatterplot this 2d representation and optionally add color to show the distribution of other categorical or numeric features throughout the feature space.

Clusters and density patterns are interesting and let us learn more about the separability of the data, and what type of machine learning models (discriminative, generative etc.) might be suitable for use.

Note: only the exogenous numeric features are used to create the t-SNE manifold: 'metric_combined','metric_extra_urban','metric_urban_cold','engine_capacity','emissions_co_mgkm'

In [83]:
# tSNE using Barnes-Hut method in scikit-learn 0.17
tsne = TSNE(n_components=2, random_state=0)
%time Z = tsne.fit_transform(dfs[fts_num])

## add feats
dftsne = pd.DataFrame(Z, columns=['x','y'], index=dfs.index)
dftsne[fts_cat] = dfs[fts_cat]
dftsne[fts_num] = dfs[fts_num]
dftsne[ft_endog] = dfs[ft_endog]
CPU times: user 21.2 s, sys: 2.7 s, total: 23.9 s
Wall time: 24.2 s
Interactive scatterplot - select feature for overlay

NOTE: The interactive plot below is designed to let us choose features for overlay - though unfortunately it's not possible to provide this interactivity when the Notebook is statically rendered

In [84]:
interactive(plot_tsne, dftsne=fixed(dftsne), ft_num=fixed(fts_num+[ft_endog])
            ,ft_endog=['mfr_owner_is_vw','mfr_is_vw']+fts_cat+fts_num+[ft_endog])

Observe:

  • The general pattern of the t-SNE representation is quite homogeneous, indicating we have a good spread of numeric values
  • The default overlay feature shown above is mfr_owner_is_vw, which appears to be distributed throughout the space in a slightly 'clumpy' manner: heavy in some clusters, light or absent in others.
  • This indicates that if we were seeking to classify cars according to mfr_owner_is_vw, then a simple GLM might not be most appropriate: perhaps a higher dimensional discriminative model like a Decision Tree or SVM would work.

Just for the static rendering here, I'll create one more (non-interactive) t-SNE scatterplot to show the distribution of the endogenous feature emissions_nox_mgkm throughout the numeric feature space.

In [85]:
plot_tsne(dftsne, ft_num=fts_num+[ft_endog], ft_endog=ft_endog)

Observe:

  • The scale above ranges from light blue (low) to bright purple (high)
  • We see emissions_nox_mgkm is distributed slightly heterogeneously throughout the space, with some clusters showing mostly high values and some mostly low values
  • Again this indicates that a linear regression model might have some trouble fitting the data, but it's a good place to start before we try e.g. an SVM regression


Linear Regression [OLS]

First we'll create an intentionally basic OLS (Ordinary Least Squares) Regression model, to warm up to using PyMC3 with real data.

See the first blogpost in this series for theoretical details.

$$\bf{y} \sim \mathcal{N}(\beta^{T} \bf{x},\sigma^{2})$$

... where for datapoint $i \in n$:
$y_{i}$ is a sample from a $\mathcal{N}$ormal distribution defined by mean $\mu = \beta^{T} x_{i}$ and variance $\sigma^{2}$

Declare full modelspec

In [86]:
fml_all = '{} ~ '.format(ft_endog) + ' + '.join(fts_num + fts_cat)
fml_all
Out[86]:
'emissions_nox_mgkm ~ metric_combined + metric_extra_urban + metric_urban_cold + engine_capacity + emissions_co_mgkm + mfr_owner_is_vw + trans + fuel_type + is_tdi'
Create design matrices for statsmodels
In [87]:
(mx_en, mx_ex) = pt.dmatrices(fml_all, dfs, return_type='dataframe', NA_action='raise')
# custom_describe(mx_ex, 2, )

Frequentist OLS Regression

For later comparison, first let's use statsmodels to run a Frequentist OLS

In [88]:
smfit = sm.OLS(mx_en, mx_ex).fit()
smfit.summary()
Out[88]:
OLS Regression Results
Dep. Variable: emissions_nox_mgkm R-squared: 0.467
Model: OLS Adj. R-squared: 0.464
Method: Least Squares F-statistic: 209.0
Date: Mon, 29 Feb 2016 Prob (F-statistic): 0.00
Time: 10:40:26 Log-Likelihood: -10585.
No. Observations: 2641 AIC: 2.119e+04
Df Residuals: 2629 BIC: 2.126e+04
Df Model: 11
Covariance Type: nonrobust
coef std err t P>|t| [95.0% Conf. Int.]
Intercept 46.2939 0.668 69.332 0.000 44.985 47.603
mfr_owner_is_vw[T.True] 4.2480 0.883 4.809 0.000 2.516 5.980
trans[T.manual] -1.1535 0.635 -1.816 0.070 -2.399 0.092
trans[T.semiauto] -2.5677 1.001 -2.564 0.010 -4.532 -0.604
fuel_type[T.hybrid] -15.8664 2.114 -7.505 0.000 -20.012 -11.721
fuel_type[T.petrol] -19.3263 1.009 -19.151 0.000 -21.305 -17.348
is_tdi[T.True] 5.7168 1.198 4.772 0.000 3.368 8.066
metric_combined -52.0603 21.036 -2.475 0.013 -93.310 -10.811
metric_extra_urban 19.2034 9.017 2.130 0.033 1.523 36.884
metric_urban_cold 24.2640 12.603 1.925 0.054 -0.449 48.977
engine_capacity 8.6972 1.252 6.947 0.000 6.242 11.152
emissions_co_mgkm 2.3939 0.621 3.853 0.000 1.176 3.612
Omnibus: 10.327 Durbin-Watson: 1.060
Prob(Omnibus): 0.006 Jarque-Bera (JB): 9.358
Skew: 0.100 Prob(JB): 0.00929
Kurtosis: 2.787 Cond. No. 129.

Observe

  • That was easy! statsmodels is great for basic stuff
  • The R-squared of 0.467 isn't too bad considering the possible range (-inf,1) (NOTE I explain a little more about r-squared theory towards the end of this notebook)
  • The condition number of 129 is far above 20, the recommended threshold at which we should consider the effects of multicollinearity (a quick check of variance inflation factors follows below)
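
One way to quantify that multicollinearity concern (an illustrative check, not part of the original output) is to compute variance inflation factors on the patsy design matrix mx_ex created above:

## Illustrative multicollinearity check on the design matrix mx_ex
from statsmodels.stats.outliers_influence import variance_inflation_factor

vifs = pd.Series([variance_inflation_factor(mx_ex.values, i)
                  for i in range(mx_ex.shape[1])], index=mx_ex.columns)
vifs.round(1)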

I won't get into the actual interpretation of the coefficient values yet.

NOTE

  • Just in case you missed it, I used patsy above to create 'design matrices' for the data, prior to modelling with statsmodels. This converted the main dataframe to the same 'modelspec' as I will use throughout the Frequentist and Bayesian modelling.
  • The categorical features have been binarised (a.k.a. one-hot encoded) and the Intercept coefficient is overloaded with the first value from each categorical feature to allow for proper identifiability, i.e.:
    • if a datapoint had raw feature value trans == manual, that is now indicated by a boolean True in the new column trans[T.manual], and a boolean False in the new column trans[T.semiauto]
    • if a datapoint had raw feature value trans == auto, that is now indicated by a boolean False in the new columns trans[T.manual] and trans[T.semiauto]: the Intercept column always has value 1 aka True, meaning that trans == auto is represented by the Intercept.
    • The overloading means that the Intercept represents cars with categorical values trans == auto, mfr_owner_is_vw == False, fuel_type == diesel, and is_tdi == False.
  • Due to our standardization (mean-centering and dividing by twice the std. dev.), the Intercept also represents a car at the mean of the numeric features. The resulting design-matrix columns are shown below.
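
To see that encoding concretely, we can inspect the design matrix patsy produced earlier (mx_ex); this just looks at objects already created above, nothing new is fitted:

## The treatment-coded (binarised) columns patsy created, plus the Intercept
print(mx_ex.columns.tolist())

## e.g. a manual-transmission car has trans[T.manual] == 1 and trans[T.semiauto] == 0
mx_ex[['Intercept', 'trans[T.manual]', 'trans[T.semiauto]']].head(3)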

Bayesian OLS Regression

Okay, time to use pymc3! Let's create the same OLS model, again using:

  • the glm submodule for convenience
  • the NUTS sampler for 'better' convergence

Define and run model

In [89]:
with pm.Model() as mdl_ols:
    
    ## Use GLM submodule for simplified model specification
    ## Betas are Uniform (for OLS)
    ## Likelihood is Normal (with HalfCauchy for error prior)
    
    pm.glm.glm('{} ~ '.format(ft_endog) + ' + '.join(fts_num + fts_cat)
               ,dfs
               ,intercept_prior=pm.Uniform.dist(lower=-1e6, upper=1e6)
               ,regressor_prior=pm.Uniform.dist(lower=-1e6, upper=1e6)
               ,family=pm.glm.families.Normal())

    ## find MAP using Powell
    start_MAP = pm.find_MAP(fmin=optimize.fmin_powell)
    
    ## Sample using NUTS
    trc_ols = pm.sample(2000, start=start_MAP, step=pm.NUTS())
    
    
# convenience: declare Random Variables (RVs) _not_ created by the PyMC3 backend
rvs = [rv.name for rv in mdl_ols.unobserved_RVs]
_ = [rvs.remove(rv.name) for rv in mdl_ols.free_RVs]
Applied interval-transform to Intercept and added transformed Intercept_interval to model.
Applied interval-transform to mfr_owner_is_vw[T.True] and added transformed mfr_owner_is_vw[T.True]_interval to model.
Applied interval-transform to trans[T.manual] and added transformed trans[T.manual]_interval to model.
Applied interval-transform to trans[T.semiauto] and added transformed trans[T.semiauto]_interval to model.
Applied interval-transform to fuel_type[T.hybrid] and added transformed fuel_type[T.hybrid]_interval to model.
Applied interval-transform to fuel_type[T.petrol] and added transformed fuel_type[T.petrol]_interval to model.
Applied interval-transform to is_tdi[T.True] and added transformed is_tdi[T.True]_interval to model.
Applied interval-transform to metric_combined and added transformed metric_combined_interval to model.
Applied interval-transform to metric_extra_urban and added transformed metric_extra_urban_interval to model.
Applied interval-transform to metric_urban_cold and added transformed metric_urban_cold_interval to model.
Applied interval-transform to engine_capacity and added transformed engine_capacity_interval to model.
Applied interval-transform to emissions_co_mgkm and added transformed emissions_co_mgkm_interval to model.
Applied log-transform to sd and added transformed sd_log to model.
 [-----------------100%-----------------] 2000 of 2000 complete in 41.0 sec

Observe

  • PyMC's default behaviour when creating the theano-based model is to be quite verbose, and you can see several printout lines informing us of various transforms added to the model
  • The NUTS sampler ran 2000 iterations, with a single chain, taking under a minute.

View feature coefficients

In [90]:
pm.df_summary(trc_ols[-1000:], varnames=rvs)
Out[90]:
mean sd mc_error hpd_2.5 hpd_97.5
Intercept 46.244592 0.691005 0.030061 44.765587 47.531293
mfr_owner_is_vw[T.True] 4.259546 0.943761 0.027260 2.308817 6.016226
trans[T.manual] -1.111725 0.631056 0.020960 -2.291213 0.197674
trans[T.semiauto] -2.528842 1.010407 0.027052 -4.494458 -0.545857
fuel_type[T.hybrid] -15.877565 2.238481 0.066838 -20.660791 -11.871254
fuel_type[T.petrol] -19.275976 1.015030 0.045213 -21.193061 -17.204596
is_tdi[T.True] 5.709995 1.282720 0.039323 3.230674 8.177947
metric_combined -53.521139 22.330069 1.836654 -95.289728 -13.675488
metric_extra_urban 19.805878 9.706478 0.791371 1.887635 36.715806
metric_urban_cold 25.112368 13.169643 1.071202 0.929582 49.626974
engine_capacity 8.723662 1.296313 0.048746 6.311740 11.279146
emissions_co_mgkm 2.355926 0.618600 0.018136 1.128695 3.493506
sd 13.345757 0.183700 0.006675 12.985165 13.694030

Observe:

  • The above table summarises the final 1000 steps of the traces, giving us the basic statistics of the distributions of the parameter estimates.
  • You can see the mean values are very similar to the statsmodels OLS model, which is good to see, and for reference they are shown in the following cell:
In [91]:
## recap on the values from statsmodels OLS
pd.DataFrame.from_csv(StringIO(smfit.summary().tables[1].as_csv())).iloc[:,:1]
Out[91]:
coef
Intercept 46.2939
mfr_owner_is_vw[T.True] 4.2480
trans[T.manual] -1.1535
trans[T.semiauto] -2.5677
fuel_type[T.hybrid] -15.8664
fuel_type[T.petrol] -19.3263
is_tdi[T.True] 5.7168
metric_combined -52.0603
metric_extra_urban 19.2034
metric_urban_cold 24.2640
engine_capacity 8.6972
emissions_co_mgkm 2.3939

View traceplots

Rather than just get point-estimate statistics from the traces, let's take a look at the traceplots using PyMC3's built-in pm.traceplot() function.

As I mentioned in the first blogpost:

  • each feature coefficient is shown on a single row
  • the right-hand-side plot is a simple timeseries of each value on the trace over the 1000 samples
  • the left-hand-side plot is a density plot of the traces (mean shown in red); this is the marginal posterior distribution for each coefficient
In [92]:
plot_traces(trc_ols, retain=1000, varnames=rvs)