Poverty prediction using Random Forest

This is a Kaggle competition hosted by the Inter-American Development Bank (IDB) to identify families in need of financial aid. The IDB currently uses the Proxy Means Test (PMT) to verify income qualification, and it is hosting this competition to find machine learning models that improve on the PMT's performance.

This is my first Kaggle competition, and it coincided with my intention to work on a charity-related project first. Let us start by cleaning the data.

Import & Clean Data

Let us import the data into pandas dataframe.

In [126]:
import pandas as pd
df = pd.read_csv('train.csv')
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 9557 entries, 0 to 9556
Columns: 143 entries, Id to Target
dtypes: float64(8), int64(130), object(5)
memory usage: 10.4+ MB

Empirical Cumulative Distribution Function (ECDF)

I am a DataCamp student, and my exploratory analysis always starts with the ECDF. Plotting the ECDF is a simple, reliable way to examine the distribution of the data.

In [127]:
import numpy as np

# Calculate ECDF for a series
def ecdf(data):
    n = len(data)
    x = np.sort(data)
    y = np.arange(1, n + 1) / n
    return x, y
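
As a quick illustration of what the function returns, here is a toy example of my own (not part of the original notebook run):

# ECDF of a tiny sample: sorted values paired with cumulative proportions
x, y = ecdf(np.array([3, 1, 2]))
print(x)  # [1 2 3]
print(y)  # [0.33333333 0.66666667 1.        ]
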
In [128]:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

Before plotting the ECDF for rent, I would like to check for missing values.

In [129]:
df.v2a1.isnull().sum()
Out[129]:
6860

Let us check how many of the rows with a missing rent belong to rented houses.

In [130]:
df[(df.v2a1.isnull()) & (df['tipovivi3'] == 1)]['v2a1'].sum()
Out[130]:
0.0

There are no rented houses where the rental value is null, so let us fill the missing values with zero.

In [131]:
df.v2a1.fillna(0, inplace=True)

Plotting the rent by household category gives a clear picture of the distribution. The graph below shows that one value is far more extreme than the rest of the dataset.

In [132]:
x_ep, y_ep = ecdf(df[df['Target']==1].v2a1)
x_mp, y_mp = ecdf(df[df['Target']==2].v2a1)
x_vh, y_vh = ecdf(df[df['Target']==3].v2a1)
x_nh, y_nh = ecdf(df[df['Target']==4].v2a1)

plt.figure(figsize=(15,8))
plt.plot(x_ep, y_ep, marker = '.', linestyle='none')
plt.plot(x_mp, y_mp, marker = '.', linestyle='none')
plt.plot(x_vh, y_vh, marker = '.', linestyle='none')
plt.plot(x_nh, y_nh, marker = '.', linestyle='none', color='y')

plt.legend(('Extreme Poverty', 'Moderate Poverty', 'Vulnerable Household', 'Non-vulnerable Household'))


plt.margins(0.02)
plt.xlabel('Rent')
plt.ylabel('ECDF')
plt.show()

Let us check how many outliers there are in the rent.

In [133]:
df[df['v2a1'] > 1000000].head()
Out[133]:
Id v2a1 hacdor rooms hacapo v14a refrig v18q v18q1 r4h1 ... SQBescolari SQBage SQBhogar_total SQBedjefe SQBhogar_nin SQBovercrowding SQBdependency SQBmeaned agesq Target
4441 ID_cb5f684a6 2353477.0 0 9 0 1 1 0 NaN 0 ... 361 2601 4 0 0 0.694444 0.0 272.25 2601 4
4442 ID_15c481789 2353477.0 0 9 0 1 1 0 NaN 0 ... 196 3249 4 0 0 0.694444 0.0 272.25 3249 4

2 rows × 143 columns

There are only two such rows, so let us remove them.

In [134]:
df = df[df['v2a1'] < 1000000]

Let us clean the remaining feature variables.

In [135]:
df.v18q1.fillna(0, inplace=True)
df.meaneduc.fillna(df.SQBmeaned, inplace=True)
In [185]:
df.meaneduc.fillna(0, inplace=True)
In [186]:
df['meaneduc'] =  pd.to_numeric(df['meaneduc'])
df.rez_esc.fillna(0, inplace=True)
In [137]:
df.dependency.fillna(df.SQBdependency, inplace=True)
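
As a quick check that the fills above worked, one option (my addition, not in the original run) is to list any columns that still contain nulls:

# Columns still carrying missing values after the cleaning steps
missing = df.isnull().sum()
print(missing[missing > 0])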

Set the Target variable to match the head of household for records where they disagree.

In [138]:
for item in df['idhogar'].unique():
    df_household = df[df['idhogar'] == item]
    head_target = df_household[df_household['parentesco1'] == 1]['Target'].values

    # Some households have no head-of-household row; skip them to avoid
    # comparing against an empty array.
    if head_target.size == 0:
        continue

    for index, row in df_household.iterrows():
        if row['Target'] != head_target[0]:
            df.loc[df['Id'] == row['Id'], 'Target'] = head_target[0]

Let us compute the Pearson correlation of each numeric feature with the Target.

In [139]:
def pearson_r(x, y):
    corr_mat = np.corrcoef(x, y)
    return corr_mat[0,1]
In [187]:
for col in df.columns:
    # Skip non-numeric columns before computing the correlation
    if df[col].dtype != 'object':
        print('Column : {0}, Corr : {1}'.format(col, pearson_r(df[col], df.Target)))
Column : v2a1, Corr : 0.1720026962415181
Column : hacdor, Corr : -0.19196909368793066
Column : rooms, Corr : 0.22841563989475472
Column : hacapo, Corr : -0.13780491913462545
Column : v14a, Corr : 0.06328813014421812
Column : refrig, Corr : 0.13022115899871486
Column : v18q, Corr : 0.2402745090617531
Column : v18q1, Corr : 0.20261608928498884
Column : r4h1, Corr : -0.23103736067221975
Column : r4h2, Corr : 0.10694013512874569
Column : r4h3, Corr : -0.03906132581572069
Column : r4m1, Corr : -0.2582892760296603
Column : r4m2, Corr : -0.03499052199798123
Column : r4m3, Corr : -0.17542813008659927
Column : r4t1, Corr : -0.32091526106128115
Column : r4t2, Corr : 0.05438477300671477
Column : r4t3, Corr : -0.14565498051909495
Column : tamhog, Corr : -0.1455189849453768
Column : tamviv, Corr : -0.15685399785899898
Column : escolari, Corr : 0.3075746107943171
Column : rez_esc, Corr : -0.09411898324289676
Column : hhsize, Corr : -0.1455189849453768
Column : paredblolad, Corr : 0.2621738876292128
Column : paredzocalo, Corr : -0.08030033606732571
Column : paredpreb, Corr : -0.10002981226232452
Column : pareddes, Corr : -0.08167881597646261
Column : paredmad, Corr : -0.16675206167047243
Column : paredzinc, Corr : -0.0508343460369106
Column : paredfibras, Corr : -0.03851142086826854
Column : paredother, Corr : -0.0006386176495819844
Column : pisomoscer, Corr : 0.2827175134514963
Column : pisocemento, Corr : -0.2067396130432328
Column : pisoother, Corr : 0.021172270900587275
Column : pisonatur, Corr : -0.05128508814399478
Column : pisonotiene, Corr : -0.11029326423311556
Column : pisomadera, Corr : -0.11826022499984858
Column : techozinc, Corr : 0.031295623662311
Column : techoentrepiso, Corr : 0.019519131809883045
Column : techocane, Corr : -0.04638186372990827
Column : techootro, Corr : 0.03236153972107223
Column : cielorazo, Corr : 0.3081202889388733
Column : abastaguadentro, Corr : 0.07122424476630622
Column : abastaguafuera, Corr : -0.051776188387349126
Column : abastaguano, Corr : -0.06827087501020612
Column : public, Corr : 0.00851044682825132
Column : planpri, Corr : 0.0005389095895422745
Column : noelec, Corr : -0.04055519582104661
Column : coopele, Corr : 0.0034846787165631016
Column : sanitario1, Corr : -0.046974722033775874
Column : sanitario2, Corr : 0.08795468062230216
Column : sanitario3, Corr : -0.045984458159258675
Column : sanitario5, Corr : -0.10381198594966898
Column : sanitario6, Corr : -0.017078990901203607
Column : energcocinar1, Corr : -0.041631641737715684
Column : energcocinar2, Corr : 0.15591868406470757
Column : energcocinar3, Corr : -0.0817591551527599
Column : energcocinar4, Corr : -0.16214217401144998
Column : elimbasu1, Corr : 0.16084040281608353
Column : elimbasu2, Corr : -0.06658699836095165
Column : elimbasu3, Corr : -0.14299319514988412
Column : elimbasu4, Corr : -0.03851142086826859
Column : elimbasu5, Corr : nan
Column : elimbasu6, Corr : 0.0244514778850083
Column : epared1, Corr : -0.2027652711185438
Column : epared2, Corr : -0.17703176456537562
Column : epared3, Corr : 0.2920638441159634
Column : etecho1, Corr : -0.19494478132502852
Column : etecho2, Corr : -0.13664673487342444
Column : etecho3, Corr : 0.2578684139583939
Column : eviv1, Corr : -0.21051510471730692
Column : eviv2, Corr : -0.1788138300474702
Column : eviv3, Corr : 0.29527497771846123
Column : dis, Corr : -0.05607672511151839
Column : male, Corr : 0.03852804994992357
Column : female, Corr : -0.03852804994992357
Column : estadocivil1, Corr : -0.14063230773854726
Column : estadocivil2, Corr : -0.031444408162625614
Column : estadocivil3, Corr : 0.12858730541161056
Column : estadocivil4, Corr : 0.05366061012899286
Column : estadocivil5, Corr : -0.0497117778933083
Column : estadocivil6, Corr : -0.004573968720066776
Column : estadocivil7, Corr : 0.01118326016490358
Column : parentesco1, Corr : 0.037195743081104335
Column : parentesco2, Corr : 0.05592932101165218
Column : parentesco3, Corr : -0.05281738879560658
Column : parentesco4, Corr : -0.01804586747345296
Column : parentesco5, Corr : 0.015411266936890867
Column : parentesco6, Corr : -0.06713061776696
Column : parentesco7, Corr : 0.004101030694303541
Column : parentesco8, Corr : 0.008532291156526432
Column : parentesco9, Corr : 0.010456557299902827
Column : parentesco10, Corr : 0.00910482195404723
Column : parentesco11, Corr : -0.01907376638635399
Column : parentesco12, Corr : 0.017714154362654753
Column : hogar_nin, Corr : -0.32797555348135243
Column : hogar_adul, Corr : 0.16341363211032806
Column : hogar_mayor, Corr : -0.005630895149456886
Column : hogar_total, Corr : -0.1455189849453768
Column : meaneduc, Corr : 0.33583435209337786
Column : instlevel1, Corr : -0.15228558524438543
Column : instlevel2, Corr : -0.16264233653848625
Column : instlevel3, Corr : -0.023368821008901346
Column : instlevel4, Corr : 0.018593501234876783
Column : instlevel5, Corr : 0.07889842584303101
Column : instlevel6, Corr : 0.003570893951034647
Column : instlevel7, Corr : 0.04001737381575219
Column : instlevel8, Corr : 0.21601008928983426
Column : instlevel9, Corr : 0.08305575217960715
Column : bedrooms, Corr : 0.16907820230093243
Column : overcrowding, Corr : -0.29008262240622773
Column : tipovivi1, Corr : -0.004802297777340611
Column : tipovivi2, Corr : 0.14232326553116725
Column : tipovivi3, Corr : 0.005562719886915378
Column : tipovivi4, Corr : -0.11612222351279018
Column : tipovivi5, Corr : -0.09973492040045225
Column : computer, Corr : 0.18440626423057527
Column : television, Corr : 0.1575511234070067
Column : mobilephone, Corr : 0.10360888864860829
Column : qmobilephone, Corr : 0.2029221321673433
Column : lugar1, Corr : 0.17446966967075883
Column : lugar2, Corr : -0.01906166135837275
Column : lugar3, Corr : -0.08400726061894828
Column : lugar4, Corr : -0.0756067883417006
Column : lugar5, Corr : -0.09168172212071324
Column : lugar6, Corr : -0.045700133378206734
Column : area1, Corr : 0.08808506815909632
Column : area2, Corr : -0.08808506815909632
Column : age, Corr : 0.11688072171823843
Column : SQBescolari, Corr : 0.3001801893688292
Column : SQBage, Corr : 0.07382751815698442
Column : SQBhogar_total, Corr : -0.1414425832027652
Column : SQBedjefe, Corr : 0.2478511713952478
Column : SQBhogar_nin, Corr : -0.3105172350296785
Column : SQBovercrowding, Corr : -0.25978171519570353
Column : SQBdependency, Corr : -0.08342907654589149
Column : SQBmeaned, Corr : nan
Column : agesq, Corr : 0.07382751815698442
Column : Target, Corr : 1.0
C:\Users\rgraj\Anaconda3\lib\site-packages\numpy\lib\function_base.py:3183: RuntimeWarning: invalid value encountered in true_divide
  c /= stddev[:, None]
C:\Users\rgraj\Anaconda3\lib\site-packages\numpy\lib\function_base.py:3184: RuntimeWarning: invalid value encountered in true_divide
  c /= stddev[None, :]
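
For reference, pandas can produce the same ranking in a single call; here is a compact sketch (my addition) that lists the numeric columns sorted by absolute correlation with the Target:

# Numeric columns ranked by absolute Pearson correlation with Target
corr_with_target = df.select_dtypes(include=[np.number]).corr()['Target']
print(corr_with_target.abs().sort_values(ascending=False).head(20))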

Let us select the features that are most strongly correlated with the Target and use them to train the model.

In [188]:
from sklearn.model_selection import train_test_split

X = df[['v2a1','rooms','refrig','v18q','v18q1','r4h2', 'escolari', 'paredblolad','pisomoscer','cielorazo','energcocinar2',
         'elimbasu1', 'epared3', 'etecho3','eviv3','estadocivil3','hogar_adul','meaneduc','instlevel8','bedrooms','tipovivi2',
              'computer','television','qmobilephone','lugar1','age']]
y= df['Target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
In [189]:
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100, random_state=0, oob_score=True, n_jobs=-1)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
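
Before scoring, it is worth peeking at which of the selected features the forest actually relies on; a short sketch (my addition) using the fitted model's feature_importances_:

# Rank the selected features by the forest's impurity-based importance
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
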
In [190]:
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
Accuracy: 0.868411867364747
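
Since the model was built with oob_score=True, the out-of-bag estimate provides a second accuracy check computed from the training data alone (my addition):

# Out-of-bag accuracy estimate from the fitted forest
print("OOB score:", model.oob_score_)
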
In [191]:
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
             precision    recall  f1-score   support

          1       0.89      0.68      0.77       240
          2       0.81      0.74      0.77       457
          3       0.93      0.63      0.75       392
          4       0.87      0.98      0.92      1776

avg / total       0.87      0.87      0.86      2865
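
Accuracy can be misleading here because the classes are imbalanced (most households fall into the non-vulnerable class); if I recall the competition rules correctly, the leaderboard is scored on the macro-averaged F1, which can be computed directly:

from sklearn.metrics import f1_score

# Macro F1 weights the four poverty classes equally, unlike accuracy
print("Macro F1:", f1_score(y_test, y_pred, average='macro'))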

Let us import and clean the test data.

In [192]:
df_test = pd.read_csv('test.csv')
In [193]:
df_test.v2a1.fillna(0, inplace=True)
In [194]:
df_test.v18q1.fillna(0, inplace=True)
df_test.meaneduc.fillna(df_test.SQBmeaned, inplace=True)
In [195]:
df_test.meaneduc.fillna(0, inplace=True)
In [196]:
df_test['meaneduc'] =  pd.to_numeric(df_test['meaneduc'])
df_test.rez_esc.fillna(0, inplace=True)
In [197]:
df_test.dependency.fillna(df_test.SQBdependency, inplace=True)
In [198]:
ids = df_test['Id']
test_features = df_test[['v2a1','rooms','refrig','v18q','v18q1','r4h2', 'escolari', 'paredblolad','pisomoscer','cielorazo','energcocinar2',
         'elimbasu1', 'epared3', 'etecho3','eviv3','estadocivil3','hogar_adul','meaneduc','instlevel8','bedrooms','tipovivi2',
              'computer','television','qmobilephone','lugar1','age']]
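
Before predicting, it is prudent to confirm the selected test features contain no remaining nulls, since the random forest will raise an error on NaN input (a check I added):

# Verify the cleaned test features are free of missing values
assert test_features.isnull().sum().sum() == 0, "test features still contain NaN"
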
In [199]:
test_pred = model.predict(test_features)
In [200]:
submit = pd.DataFrame({'Id' : ids, 'Target' : test_pred})
In [201]:
submit.to_csv('submit.csv', index=False)
