Applied AI Internal Demo Project

Car Emissions Study

This subset (the 9x series) of notebooks was created for publication as a series of blog posts for Applied AI. It is part of a wider project that primarily serves as a coherent collection of my preferred techniques for data preparation & analysis and Bayesian inference in Python.

A set of notebooks designed to investigate car emissions data in light of the Volkswagen emissions scandal, which seems to have meaningfully damaged Volkswagen's sales. The motivation is to explore the data and see whether we can detect any unusual behaviour by Volkswagen.

We use data from the UK VCA (Vehicle Certification Agency) Car Fuel and Emissions Information for August 2015. The dataset is available here, and is also included in the repo since it's small.

93_HierarchicalLinearRegression

Demonstrate pooling and hierarchical linear regression

I create a set of progressively more complex models to show the effect of manufacturer upon NOx emissions, and evaluate the models using WAIC and posterior predictive checks (PPC).


Notes:

  • Python 3.5 project using the latest available PyMC3
  • Developed using the ContinuumIO Anaconda distribution on a MacBook Pro 3GHz i7, 16GB RAM, OS X 10.10.5.
  • If execution becomes unstable or Theano throws weird errors, try clearing the cache with $> theano-cache clear and rerunning the notebook.

Package Requirements (shown as a conda-env YAML):

$> less conda_env_pymc3_examples.yml

name: pymc3_examples
channels:
  - defaults
dependencies:
  - python=3.5
  - jupyter
  - ipywidgets
  - numpy
  - scipy
  - matplotlib=1.4.3
  - pandas
  - scikit-learn
  - seaborn
  - patsy  
  - pip

$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3

Setup

Nothing especially interesting in this Setup section: just loading the packages and data we need.

In [36]:
## Interactive magics
%matplotlib inline
%qtconsole --colors=linux
In [37]:
import warnings
warnings.filterwarnings('ignore')

# general packages
import sys
import regex as re

# scientific packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from scipy import optimize

# pymc3 libraries
import pymc3 as pm
from pymc3.backends.base import merge_traces
import theano as thno
import theano.tensor as T 

from ipywidgets import interactive, fixed

sys.setrecursionlimit(4000)
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 12, 4
np.random.seed(0)
storename = 'data/store_01.h5'
dfwaic = pd.DataFrame() # setup for WAIC evaluations
Switches for convenience
In [97]:
sample_switches = {'pooled':False, 'unpooled':False, 'unpooled3':False
                   ,'fullyunpooled':True, 'partpooled':True, 'partpooled_mfrowner':True}
Package versions
In [39]:
print('Python: {}'.format(sys.version))
print('Recursion limit {}'.format(sys.getrecursionlimit()))
print('theano: {}'.format(thno.__version__))
print('PyMC3: {}'.format(pm.__version__))
Python: 3.5.1 |Continuum Analytics, Inc.| (default, Dec  7 2015, 11:16:01) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
Recursion limit 4000
theano: 0.8.0rc1
PyMC3: 3.0
In [114]:
# %connect_info

Local Functions

In [106]:
def custom_describe(df, nrows=3, nfeats=20):
    ''' Conv fn: concat transposed topN rows, numerical desc & dtypes '''

    print(df.shape)
    rndidx = np.random.randint(0,len(df),nrows)
    dfdesc = df.describe().T

    for col in ['mean','std']:
        dfdesc[col] = dfdesc[col].apply(lambda x: np.round(x,2))
 
    dfout = pd.concat((df.iloc[rndidx].T, dfdesc, df.dtypes),axis=1, join='outer')
    dfout = dfout.loc[df.columns.values]
    dfout.rename(columns={0:'dtype'}, inplace=True)
    
    # add count nonNAN, min, max for string cols
    dfout['count'] = df.shape[0] - df.isnull().sum()
    dfout['min'] = df.min().apply(lambda x: x[:6] if type(x) == str else x)
    dfout['max'] = df.max().apply(lambda x: x[:6] if type(x) == str else x)
    
    return dfout.iloc[:nfeats,:]


def standardize_2sd(df):
    return (df - df.mean(0)) / (2 * df.std(0))


def strip_derived_rvs(rvs):
    '''Convenience fn: remove PyMC3-generated RVs from a list'''
    ret_rvs = []
    for rv in rvs:
        if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
            ret_rvs.append(rv)     
    return ret_rvs


def trace_median(x):
    return pd.Series(np.median(x,0), name='median')


def plot_traces(trcs, varnames=None):
    ''' Convenience fn: plot traces with overlaid means and values '''

    nrows = len(trcs.varnames)
    if varnames is not None:
        nrows = len(varnames)
    
    ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4)
        ,lines={k: v['mean'] for k, v in 
            pm.df_summary(trcs,varnames=varnames).iterrows()}
        ,combined=True)

    # don't label the nested traces (a bit clumsy this: consider tidying)
    dfmns = pm.df_summary(trcs, varnames=varnames)['mean'].reset_index()
    dfmns.rename(columns={'index':'featval'}, inplace=True)
    dfmns = dfmns.loc[dfmns['featval'].apply(lambda x: re.search('__[1-9]{1,}', x) is None)]
    dfmns['draw'] = dfmns['featval'].apply(lambda x: re.search('__0{1}$', x) is None)
    dfmns['pos'] = np.arange(dfmns.shape[0])
    dfmns.set_index('pos', inplace=True)
    
    for i, r in dfmns.iterrows():
        if r['draw']:
            ax[i,0].annotate('{:.2f}'.format(r['mean']), xy=(r['mean'],0)
                    ,xycoords='data', xytext=(5,10)
                    ,textcoords='offset points', rotation=90
                    ,va='bottom', fontsize='large', color='#AA0022')    


            
def create_smry(trc, dfs, pname='mfr'):
    ''' Conv fn: create trace summary for sorted forestplot '''

    dfsm = pm.df_summary(trc).reset_index()
    dfsm.rename(columns={'index':'featval'}, inplace=True)
    dfsm = dfsm.loc[dfsm['featval'].apply(
        lambda x: re.search('{}__[0-9]+'.format(pname), x) is not None)]

    dfsm.set_index(dfs[pname].unique(), inplace=True)
    dfsm.sort('mean', ascending=True, inplace=True)
    dfsm['ypos'] = np.arange(len(dfsm))
    
    return dfsm

            
                
def custom_forestplot(df, ylabel='mfr', size=8, aspect=0.8, facetby=None):
    ''' Conv fn: plot features from pm.df_summary using seaborn
        Facet on sets of forests for comparison '''
        
    g = sns.FacetGrid(col=facetby, hue='mean', data=df, palette='RdBu_r'
                      ,size=size, aspect=aspect)
    _ = g.map(plt.scatter, 'mean', 'ypos'
                ,marker='o', s=100, edgecolor='#333333', linewidth=0.8, zorder=10)
    _ = g.map(plt.hlines, 'ypos', 'hpd_2.5','hpd_97.5', color='#aaaaaa')

    _ = g.axes.flat[0].set_ylabel(ylabel)
    _ = [ax.set_xlabel('coeff value') for ax in g.axes.flat]
    _ = g.axes.flat[0].set_ylim((-1, df['ypos'].max()+1))
    _ = g.axes.flat[0].set_yticks(np.arange(df['ypos'].max()+1))
    _ = g.axes.flat[0].set_yticklabels(df.index)
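
For orientation, a hypothetical usage sketch of the last two helpers (it assumes a sampled trace trc containing an RV whose name matches both pname and a column in dfs, as in the partially-pooled models later in this series):

dfsm = create_smry(trc, dfs, pname='mfr')   # trace summary, sorted by posterior mean
custom_forestplot(dfsm, ylabel='mfr')       # one forest row per manufacturer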

Load Data

In [41]:
store = pd.HDFStore(storename)
df = store['/clean']
store.close()

Prepare Dataset

This dataset allows us to ask whether there are any interesting patterns, for cars made by the Volkswagen group, in NOx emissions:

  • emissions_nox_mgkm

... according to a handful of exogenous variables:

  • mfr_owner_is_vw, transmission, fuel_type_smpl, is_tdi
  • engine_capacity, metric_combined, metric_extra_urban, metric_urban_cold
Declare feats for use
In [42]:
fts_cat = ['mfr_owner','mfr','model','trans','fuel_type','is_tdi']
fts_num = ['metric_combined','metric_extra_urban','metric_urban_cold'
           ,'engine_capacity','emissions_co_mgkm']
ft_endog = 'emissions_nox_mgkm'
Exclude outliers
In [43]:
dfi = df.loc[df['mcd_outlier']==0]
dfi.shape
Out[43]:
(2644, 21)
In [44]:
## also exclude 3 high-value outliers in the endogenous feat

f, ax1d = plt.subplots(1,1,figsize=(12,2))
_ = sns.violinplot(x=ft_endog, data=dfi, ax=ax1d, inner='point')
In [45]:
dfi = dfi.loc[dfi[ft_endog]<=100].copy()
dfi.shape
Out[45]:
(2641, 21)
Exclude Hybrids

The fuel_type value hybrid covers several fundamentally different types of engine, so for simplicity I'll exclude it from this study.

In [46]:
dfi.groupby('fuel_type').size()
dfi = dfi.loc[dfi['fuel_type']!='hybrid']
dfi.shape
Out[46]:
(2593, 21)
Standardize the dataset according to Gelman's 1 / (2 * sd)

Divide by 2 standard deviations in order to put the variance of a normally distributed variable nearer to the variance of a binary variable: a balanced 0/1 feature has a standard deviation of 0.5, and after this scaling so does each numeric feature. See http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf for more info.

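As a quick sanity check of the scaling (a toy sketch on synthetic data, not the real dataset): after standardize_2sd a continuous feature has sd = 0.5, the same as a balanced binary feature, so coefficient magnitudes become roughly comparable.

toy = pd.DataFrame({'x': np.random.randn(1000) * 7 + 3})
print(standardize_2sd(toy).std(0))                      # ~0.5 after scaling
print(pd.Series(np.random.randint(0, 2, 1000)).std())   # ~0.5 for a 50/50 binary
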
In [47]:
dfs = pd.concat((dfi[ft_endog], dfi[fts_cat], standardize_2sd(dfi[fts_num])),1)

Basic Description

In [48]:
custom_describe(dfs, 2)
(2593, 12)
Out[48]:
| | 1701 | 844 | count | mean | std | min | 25% | 50% | 75% | max | dtype |
|---|---|---|---|---|---|---|---|---|---|---|---|
| emissions_nox_mgkm | 65 | 36 | 2593 | 37.35 | 17.93 | 1 | 23.000000 | 35.000000 | 51.000000 | 90 | float64 |
| mfr_owner | daimler-ag | bmw | 2593 | NaN | NaN | aston | NaN | NaN | NaN | volksw | object |
| mfr | mercedes-benz | bmw | 2593 | NaN | NaN | abarth | NaN | NaN | NaN | volvo | object |
| model | cla-coupé, model year 2015 | m4 series convertible f83, from september 2014 | 2593 | NaN | NaN | 1 seri | NaN | NaN | NaN | zafira | object |
| trans | auto | semiauto | 2593 | NaN | NaN | auto | NaN | NaN | NaN | semiau | object |
| fuel_type | diesel | petrol | 2593 | NaN | NaN | diesel | NaN | NaN | NaN | petrol | object |
| is_tdi | False | False | 2593 | NaN | NaN | False | NaN | NaN | NaN | True | bool |
| metric_combined | -0.393958 | 0.812767 | 2593 | -0.00 | 0.50 | -0.688935 | -0.340326 | -0.152613 | 0.169180 | 2.47537 | float64 |
| metric_extra_urban | -0.446265 | 0.913084 | 2593 | 0.00 | 0.50 | -0.886055 | -0.326323 | -0.126418 | 0.193428 | 2.79218 | float64 |
| metric_urban_cold | -0.342415 | 0.730494 | 2593 | 0.00 | 0.50 | -0.660935 | -0.308887 | -0.158009 | 0.160511 | 2.65838 | float64 |
| engine_capacity | 0.0432038 | 0.505497 | 2593 | 0.00 | 0.50 | -0.923409 | -0.279185 | -0.044720 | -0.036979 | 2.5919 | float64 |
| emissions_co_mgkm | -0.393126 | -0.456967 | 2593 | 0.00 | 0.50 | -0.847108 | -0.382485 | -0.088106 | 0.273661 | 2.28466 | float64 |

Observe:

  • The dataset is 2593 rows, with 12 features.
  • These are observations of car emissions tests, one row per car.
  • You can read off the basic distributional statistics of the features in the table above. Numeric features have been standardized according to Gelman's 2sd principle.
  • I have selected these particular features to work with; some are derived from the original features.

We have the following features:

+ Categoricals:
    + `trans`     - the car transmission, simplified to 'auto', 'semiauto', 'manual'
    + `fuel_type` - the car's power supply, simplified to 'petrol', 'diesel', 'hybrid'
    + `model`     - the car model
    + `mfr`       - the car manufacturer
    + `mfr_owner` - the parent company of the car manufacturer

+ Booleans:
    + `is_tdi`    - (processed feature) if the car engine type is a turbo diesel

+ Numerics:
    + `engine_capacity`    - the engine capacity (displacement)
    + `metric_combined`    - a score for fuel efficiency in combined driving
    + `metric_extra_urban` - a score for fuel efficiency in extra-urban driving
    + `metric_urban_cold`  - a score for fuel efficiency in an urban setting, cold start
    + `emissions_co_mgkm`  - CO (carbon monoxide) emitted, in mg/km

+ Numeric endogenous feature:
    + `emissions_nox_mgkm` - NOx (nitrogen oxides) emitted, in mg/km

For the purposes of this Notebook, the final feature, emissions_nox_mgkm, will be used as the endogenous / dependent / output feature of the linear models. All other features may be used as exogenous / independent / input features.

Label encode mfr and mfr_owner
In [49]:
le = LabelEncoder()
dfs['mfr_enc'] = le.fit_transform(dfs['mfr'])
dfs['mfr_owner_enc'] = le.fit_transform(dfs['mfr_owner'])

n_mfr_owner_enc = dfs['mfr_owner_enc'].max()+1
n_mfr_enc = dfs['mfr_enc'].max()+1
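
Side note: the same LabelEncoder instance is refit above, so afterwards le.classes_ refers only to mfr_owner. That's fine for encoding, but if you later need to map codes back to names, keep one encoder per column; a minimal sketch (hypothetical variable name):

le_mfr = LabelEncoder()
dfs['mfr_enc'] = le_mfr.fit_transform(dfs['mfr'])
print(le_mfr.inverse_transform([0, 1]))   # first two manufacturers, alphabetically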

Choose Features

In the previous Notebook, I used a Lasso model for feature reduction. I'll broadly follow the results of that exercise here and use the following features for modelling; I include emissions_co_mgkm just to demonstrate a continuous feature.

Note: I will use this glm model specification for the pooled model. I will have to manually specify the unpooled, partially-pooled and hierarchical models.

endogenous feature: emissions_nox_mgkm

exogenous features: mfr_owner           : multi-class string
                    mfr                 : multi-class string
                    fuel_type           : multi-class string
                    trans               : multi-class string
                    is_tdi              : boolean
                    engine_capacity     : numeric int
                    metric_combined     : numeric int
                    emissions_co_mgkm   : numeric float
Reminder of the mfr and mfr_owner counts:
In [50]:
print('mfr_owner: {} uniques\nmfr: {} uniques'.format(
        len(dfs['mfr_owner'].unique()), len(dfs['mfr'].unique())))
mfr_owner: 20 uniques
mfr: 38 uniques

Create Modelspecs and Design Matrices

In [52]:
fml_pooled = '{} ~ '.format(ft_endog) + ' + '.join(['fuel_type','trans'
            ,'is_tdi','engine_capacity','metric_combined','emissions_co_mgkm'])
print(fml_pooled)
emissions_nox_mgkm ~ fuel_type + trans + is_tdi + engine_capacity + metric_combined + emissions_co_mgkm
In [53]:
(mx_en_unpooled, mx_ex_unpooled) = pt.dmatrices(fml_pooled, dfs
                        ,return_type='dataframe', NA_action='raise')
mx_ex_unpooled.head()
Out[53]:
| | Intercept | fuel_type[T.petrol] | trans[T.manual] | trans[T.semiauto] | is_tdi[T.True] | engine_capacity | metric_combined | emissions_co_mgkm |
|---|---|---|---|---|---|---|---|---|
| 0 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | -0.385357 | 0.088732 | -0.166134 |
| 1 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | -0.385357 | 0.222813 | -0.166134 |
| 2 | 1.0 | 1.0 | 0.0 | 1.0 | 0.0 | -0.385357 | 0.035100 | 0.624789 |
| 3 | 1.0 | 1.0 | 0.0 | 1.0 | 0.0 | -0.385357 | 0.222813 | 0.624789 |
| 4 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | -0.385357 | 0.088732 | -0.166134 |

Pooled Model

Pool (ignore) the mfr feature.

$$y \sim \mathcal{N}(\beta^{T}\mathbf{x}, \epsilon)$$

where:
$\beta$ are our coeffs in the linear model
$\mathbf{x}$ is the vector of features describing each car in the dataset
$\epsilon \sim \text{HalfCauchy}(0, 10)$

I'll attempt to handle outliers robustly this time by using a Student-T distribution for the likelihood in place of the Normal above, keeping the same linear sub-model; the error term $\epsilon$ is the stochastic noise in that likelihood:

$$y \sim \text{StudentT}(\nu=1,\ \mu=\beta^{T}\mathbf{x},\ \sigma=\epsilon)$$

In [54]:
with pm.Model() as mdl_pooled:
  
    ## Betas are Normal (for Ridge)
    ## Likelihood is StudentT for robust regression (by default nu = 1)
    pm.glm.glm(fml_pooled, dfs, family=pm.glm.families.StudentT())
        
    if sample_switches['pooled']:
        trc_pooled = pm.sample(2000, njobs=1, step=pm.NUTS()
                               ,start=pm.find_MAP(fmin=optimize.fmin_powell)
                               ,trace=pm.backends.Text('traces_txt/trc_pooled'))
    else:
        trc_pooled = pm.backends.text.load('traces_txt/trc_pooled')
Applied log-transform to lam and added transformed lam_log to model.
View traces
In [55]:
dfwaic['pooled'] = [pm.stats.waic(model=mdl_pooled, trace=trc_pooled[-1000:])]
rvs_pooled = [rv.name for rv in strip_derived_rvs(mdl_pooled.unobserved_RVs)]
plot_traces(trc_pooled[-1000:], varnames=rvs_pooled)
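
Once dfwaic has accumulated a column per model, a one-line comparison is possible; a sketch, assuming each cell holds a scalar WAIC value and a pandas version with sort_values (lower WAIC indicates better estimated out-of-sample fit):

dfwaic.T.rename(columns={0: 'waic'}).sort_values('waic')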

Observe:

Bear in mind the linear model assumes purely orthogonal parameters, which isn't realistic: there will be some correlations. Nonetheless, let's look at the outputs:

  • The intercept absorbs the reference levels of the boolean params: fuel_type[T.diesel], trans[T.auto] and is_tdi[T.False]
  • The other boolean params are given their own expanded parameter, and we see:

    • fuel_type[T.petrol] has a strongly negative effect on emissions (by which I mean it reduces the NOx emissions value)
    • trans[T.manual] has a weak but definitely negative effect on emissions
    • trans[T.semiauto] has a weakly negative effect on emissions, but 0 is well within the CR, so the effect is small and probably not consequential
    • is_tdi[T.True] has a strong positive effect on emissions: I find this a little surprising given that turbo-diesel engines ought to make better use of the fuel (more complete combustion); however, perhaps they also run hotter, which apparently increases emissions: http://www.smogtips.com/failed-high-no-nitrous-oxide.cfm
  • The effects of the linearly scaled params are:
    • engine_capacity has a strongly positive effect on emissions (increasing them), which makes perfect sense: larger engine == more emissions
    • metric_combined has a strongly negative effect on emissions: more efficient cars have lower NOx emissions
    • emissions_co_mgkm has a slightly positive effect on NOx emissions, which makes sense: we might expect emissions of different types to loosely correlate.
  • The intercept itself is not smooth and appears to be multi-modal. This is great to see, since it implies there's variation in the dataset that we haven't yet allowed for in the model. Perhaps this variation can be encoded in the different mfr levels...

Unpooled Model

Include the mfr feature values in the dmatrix. Each mfr value gets a separate intercept with shared slopes.

$$y \sim \mathcal{N}(\beta_{mfr} + \beta^{T}\mathbf{x}, \epsilon)$$

where:
$\beta_{mfr}$ is a separate intercept for each manufacturer
$\beta$ are our (shared) coeffs in the linear model
$\mathbf{x}$ is the vector of features describing each car in the dataset
$\epsilon \sim \text{HalfCauchy}(0, 10)$

In [56]:
with pm.Model() as mdl_unpooled:
   
    # define priors, use Normal for Ridge (sd=100, weakly informative)
    b0 = pm.Normal('b0_mfr', mu=0, sd=100, shape=n_mfr_enc)
    b1 = pm.Normal('b1_fuel_type[T.petrol]', mu=0, sd=100)
    b2a = pm.Normal('b2a_trans[T.manual]', mu=0, sd=100)
    b2b = pm.Normal('b2b_trans[T.semiauto]', mu=0, sd=100)
    b3 = pm.Normal('b3_is_tdi[T.True]', mu=0, sd=100)
    b4 = pm.Normal('b4_engine_capacity', mu=0, sd=100)
    b5 = pm.Normal('b5_metric_combined', mu=0, sd=100)
    b6 = pm.Normal('b6_emissions_co_mgkm', mu=0, sd=100)    
    
    # define linear model
    yest = ( b0[dfs['mfr_enc']] +
             b1 * mx_ex_unpooled['fuel_type[T.petrol]'] + 
             b2a * mx_ex_unpooled['trans[T.manual]'] +
             b2b * mx_ex_unpooled['trans[T.semiauto]'] +
             b3 * mx_ex_unpooled['is_tdi[T.True]'] +
             b4 * mx_ex_unpooled['engine_capacity'] +
             b5 * mx_ex_unpooled['metric_combined'] +
             b6 * mx_ex_unpooled['emissions_co_mgkm'])

    ## Student T likelihood with fixed degrees of freedom nu
    epsilon = pm.HalfCauchy('epsilon', beta=10)
    likelihood = pm.StudentT('likelihood', nu=1, mu=yest
                             ,sd=epsilon, observed=dfs[ft_endog])
 
    if sample_switches['unpooled']:
        trc_unpooled = pm.sample(2000, njobs=1, step=pm.NUTS()
                            ,start=pm.find_MAP(fmin=optimize.fmin_powell)     
                            ,trace=pm.backends.Text('traces_txt/trc_unpooled'))
    else:
        trc_unpooled = pm.backends.text.load('traces_txt/trc_unpooled')
Applied log-transform to epsilon and added transformed epsilon_log to model.

Sample the Unpooled Model with multiple chains

When models start to get complicated, it's generally a good idea to sample them using multiple chains: this explores the space better, reduces variance in the mean estimates, and helps to avoid an individual chain getting stuck in a local mode.

Let's sample the Unpooled Model again using 3 chains and compare the results

In [57]:
with mdl_unpooled: 
    
    if sample_switches['unpooled3']:
        trc_unpooled3 = pm.sample(1000, njobs=3, step=pm.NUTS()
                                ,start=pm.find_MAP(fmin=optimize.fmin_powell)    
                                ,trace=pm.backends.Text('traces_txt/trc_unpooled3'))
    else:
        trc_unpooled3 = pm.backends.text.load('traces_txt/trc_unpooled3')
    
View traces
In [58]:
rvs_unpooled = [rv.name for rv in mdl_unpooled.unobserved_RVs if not re.search('_log',rv.name)]
ax = pm.traceplot(trc_unpooled3[-333:], varnames=rvs_unpooled
                  ,figsize=(12,len(rvs_unpooled)*1.4), combined=False)
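
To put a number on the comparison between chains, a minimal convergence check (this assumes the PyMC3 build exposes the Gelman-Rubin diagnostic as pm.gelman_rubin): R-hat values near 1.0 for every RV suggest the 3 chains have converged on the same posterior.

rhat = pm.gelman_rubin(trc_unpooled3)   # dict: RV name -> potential scale reduction factor
for name in rvs_unpooled:
    print('{:<30} {}'.format(name, np.round(rhat[name], 2)))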