Project: Identify Customer Segments

In this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.

This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks. The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it.

It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. There will be times in the project where you will need to make and justify your own decisions on how to treat the data. These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.

At the end of most sections, there will be a Markdown cell labeled Discussion. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.

In [1]:
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# magic word for producing visualizations in notebook
%matplotlib inline

Step 0: Load the Data

There are four files associated with this project (not including this one):

  • Udacity_AZDIAS_Subset.csv: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).
  • Udacity_CUSTOMERS_Subset.csv: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).
  • Data_Dictionary.md: Detailed information file about the features in the provided datasets.
  • AZDIAS_Feature_Summary.csv: Summary of feature attributes for demographics data; 85 features (rows) x 4 columns

Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.

To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the .csv data files in this project: they're semicolon (;) delimited, so you'll need an additional argument in your read_csv() call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.

Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.

In [2]:
# Load in the general demographics data.
azdias = pd.read_csv("./Udacity_AZDIAS_Subset.csv", sep=';')

# Load in the feature summary file.
feat_info = pd.read_csv("AZDIAS_Feature_Summary.csv", sep=';')
In [3]:
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
print("The shape of azdias is {}".format(azdias.shape))
print("The shape of feat_info is {}".format(feat_info.shape))
The shape of azdias is (891221, 85)
The shape of feat_info is (85, 4)
In [4]:
azdias.head(n=5)
Out[4]:
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER ... PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_HHZ PLZ8_GBZ ARBEIT ORTSGR_KLS9 RELAT_AB
0 -1 2 1 2.0 3 4 3 5 5 3 ... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 -1 1 2 5.0 1 5 2 5 4 5 ... 2.0 3.0 2.0 1.0 1.0 5.0 4.0 3.0 5.0 4.0
2 -1 3 2 3.0 1 4 1 2 3 5 ... 3.0 3.0 1.0 0.0 1.0 4.0 4.0 3.0 5.0 2.0
3 2 4 2 2.0 4 2 5 2 1 2 ... 2.0 2.0 2.0 0.0 1.0 3.0 4.0 2.0 3.0 3.0
4 -1 3 1 5.0 4 3 4 1 3 2 ... 2.0 4.0 2.0 1.0 2.0 3.0 3.0 4.0 6.0 5.0

5 rows × 85 columns

In [5]:
feat_info.head(n=5)
Out[5]:
attribute information_level type missing_or_unknown
0 AGER_TYP person categorical [-1,0]
1 ALTERSKATEGORIE_GROB person ordinal [-1,0,9]
2 ANREDE_KZ person categorical [-1,0]
3 CJT_GESAMTTYP person categorical [0]
4 FINANZ_MINIMALIST person ordinal [-1]

Check how what types and levels of data we have

In [6]:
feat_info.type.unique().tolist()
Out[6]:
['categorical', 'ordinal', 'numeric', 'mixed', 'interval']
In [7]:
print(feat_info.information_level.unique().tolist())
['person', 'household', 'building', 'microcell_rr4', 'microcell_rr3', 'postcode', 'region_rr1', 'macrocell_plz8', 'community']

Tip: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut esc --> a (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and esc --> b adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use esc --> m and to convert to a code cell, use esc --> y.

Step 1: Preprocessing

Step 1.1: Assess Missing Data

The feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the Discussion cell with your findings and decisions at the end of each step that has one!

Step 1.1.1: Convert Missing Value Codes to NaNs

The fourth column of the feature attributes summary (loaded in above as feat_info) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. [-1,0]), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.

As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.

In [8]:
# Identify missing or unknown data values and convert them to NaNs.

# Change feat_info index 
feat_info.set_index("attribute", inplace=True)

To see what are encode as NA

In [9]:
print(feat_info.missing_or_unknown.unique().tolist())
['[-1,0]', '[-1,0,9]', '[0]', '[-1]', '[]', '[-1,9]', '[-1,X]', '[XX]', '[-1,XX]']

Change strings into list

In [10]:
na_dict ={'[-1,0]':[-1, 0], '[-1,0,9]':[-1,0,9], "[0]":[0], "[-1]":[-1], '[]':[], '[-1,9]':[-1,9], 
         '[-1,X]':[-1, "X"], '[XX]':["XX"], '[-1,XX]':[-1, 'XX']}

Use an anonymous function to change values into NA

In [11]:
cols = feat_info.index.tolist()
for col in cols:
    na = na_dict[feat_info.loc[col].missing_or_unknown]
    azdias[col] = azdias[col].map(lambda x: np.nan if x in na else x)

Step 1.1.2: Assess Missing Data in Each Column

How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's hist() function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)

For the remaining features, are there any patterns in which columns have, or share, missing data?

In [12]:
# Perform an assessment of how much missing data there is in each column of the
# dataset.

# Calculate the number of missing value of each column
missing = azdias.isnull().sum()

# Change the number into percents
missing_per = missing / len(azdias) * 100
In [13]:
missing_per.describe()
Out[13]:
count    85.000000
mean     11.054139
std      16.449815
min       0.000000
25%       0.000000
50%      10.451729
75%      13.073637
max      99.757636
dtype: float64
In [14]:
# Investigate patterns in the amount of missing data in each column.
plt.hist(missing_per, bins=50, facecolor='b', alpha=0.75)
plt.xlabel('Percentage of missing value (%)')
plt.ylabel('Counts')
plt.title('Histogram of missing value counts')
plt.grid(True)
plt.show()

From the plot we can see that most of columns have missing value less than 20%

In [15]:
outlier = sum(missing_per >= 20)
print("There are {} columns which have more 20% missing values".format(outlier))
There are 6 columns which have more 20% missing values
In [16]:
top_6 = missing_per.nlargest(n = 6)
print(top_6)
TITEL_KZ        99.757636
AGER_TYP        76.955435
KK_KUNDENTYP    65.596749
KBA05_BAUMAX    53.468668
GEBURTSJAHR     44.020282
ALTER_HH        34.813699
dtype: float64
In [17]:
top_6.plot.bar(figsize=(8,5))
plt.xlabel('Column name with missing values')
plt.ylabel('Percentage of missing values')
Out[17]:
Text(0,0.5,'Percentage of missing values')
In [18]:
# Remove the outlier columns from the dataset. (You'll perform other data
# engineering tasks such as re-encoding and imputation later.)
azdias_new = azdias.drop(top_6.index, axis = 1)

Discussion 1.1.2: Assess Missing Data in Each Column

The average missing percent is 11.05%. Most of columns have missing value less than 20%, but there are six columns having more than 20% missing values. Those columns are 'TITEL_KZ', 'AGER_TYP', 'KK_KUNDENTYP', 'KBA05_BAUMAX', 'GEBURTSJAHR', ALTER_HH'. So I removed them from the dataset.

Step 1.1.3: Assess Missing Data in Each Row

Now, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have a very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.

In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.

  • You can use seaborn's countplot() function to create a bar chart of code frequencies and matplotlib's subplot() function to put bar charts for the two subplots side by side.
  • To reduce repeated code, you might want to write a function that can perform this comparison, taking as one of its arguments a column to be compared.

Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.

In [19]:
# How much data is missing in each row of the dataset?
missing_row = azdias_new.isnull().sum(axis = 1)
plt.hist(missing_row, bins=50, facecolor='b', alpha=0.75)
plt.xlabel('Number of missing value')
plt.ylabel('Counts')
plt.title('Histogram of missing value counts')
plt.grid(True)
plt.show()
In [20]:
# Write code to divide the data into two subsets based on the number of missing
# values in each row.

# If there are more than 10 NA, we put it into high subset. Otherwise, we put it into low subset.

few = azdias_new[missing_row <= 10].reset_index(drop=True)
high = azdias_new[missing_row > 10].reset_index(drop=True)

print("The shape of few dataset is {}".format(few.shape))
print("The shape of high dataset is {}".format(high.shape))
The shape of few dataset is (780153, 79)
The shape of high dataset is (111068, 79)
In [21]:
# choose five columns 
small_6 = missing_per.nsmallest(n = 5).index
small_6
Out[21]:
Index(['ANREDE_KZ', 'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER',
       'FINANZ_ANLEGER'],
      dtype='object')
In [22]:
# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.
def compare(compare_col):
    num = len(compare_col)
    fig, ax = plt.subplots(num,2, figsize=(20, 20)) 
    ax[0,0].set_title('Few')
    ax[0,1].set_title('High')
    for i, col in enumerate(compare_col):
        sns.countplot(few[col], ax=ax[i,0])
        sns.countplot(high[col], ax=ax[i,1])
compare(small_6)

Discussion 1.1.3: Assess Missing Data in Each Row

From the histogram plot we can see that fewer rows have more than 10 missing values compared to other rows. Thus, we set 10 as our threshold and divide the dataset into two subsets.

To see if the distribution of data values on columns that are missing very little data are similar or different between the two groups, we choose some columns to make barplots to compare them. Then we find out that the distribution are different. This may cause problem and bias in the following analysis if we just delete the high subset, so we'll revisit these data later on. Now we continue our analysis for using just the few subset.

Step 1.2: Select and Re-Encode Features

Checking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (feat_info) for a summary of types of measurement.

  • For numeric and interval data, these features can be kept without changes.
  • Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).
  • Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.

In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns.

Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!

In [23]:
# How many features are there of each data type?

types = feat_info.type.unique().tolist()

# Discard columns with a large number of missing values
new_feat = feat_info.drop(top_6.index, axis = 0)
for i in types:
    num = sum(new_feat.type == i)
    print("{} : {}".format(i, num))
categorical : 18
ordinal : 49
numeric : 6
mixed : 6
interval : 0

Step 1.2.1: Re-Encode Categorical Features

For categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:

  • For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything.
  • There is one binary variable that takes on non-numeric values. For this one, you need to re-encode the values as numbers or create a dummy variable.
  • For multi-level categoricals (three or more values), you can choose to encode the values using multiple dummy variables (e.g. via OneHotEncoder), or (to keep things straightforward) just drop them from the analysis. As always, document your choices in the Discussion section.
In [24]:
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?

cate = new_feat.index[new_feat.type == "categorical"].tolist()

few[cate].head(n = 6)
Out[24]:
ANREDE_KZ CJT_GESAMTTYP FINANZTYP GFK_URLAUBERTYP GREEN_AVANTGARDE LP_FAMILIE_FEIN LP_FAMILIE_GROB LP_STATUS_FEIN LP_STATUS_GROB NATIONALITAET_KZ SHOPPER_TYP SOHO_KZ VERS_TYP ZABEOTYP GEBAEUDETYP OST_WEST_KZ CAMEO_DEUG_2015 CAMEO_DEU_2015
0 2 5.0 1 10.0 0 5.0 3.0 2.0 1.0 1.0 3.0 1.0 2.0 5 8.0 W 8 8A
1 2 3.0 1 10.0 1 1.0 1.0 3.0 2.0 1.0 2.0 0.0 1.0 5 1.0 W 4 4C
2 2 2.0 6 1.0 0 NaN NaN 9.0 4.0 1.0 1.0 0.0 1.0 3 1.0 W 2 2A
3 1 5.0 5 5.0 0 10.0 5.0 3.0 2.0 1.0 2.0 0.0 2.0 4 1.0 W 6 6B
4 2 2.0 2 1.0 0 1.0 1.0 4.0 2.0 1.0 0.0 0.0 2.0 4 1.0 W 8 8C
5 2 5.0 4 12.0 0 1.0 1.0 2.0 1.0 1.0 1.0 0.0 1.0 4 1.0 W 4 4A
In [25]:
multi_list = []
binary_list = []
for col in cate:
    if len(few[col].unique()) > 2:
        multi_list += [col]
    else:
        binary_list += [col]

print(binary_list) 
print()
print(multi_list)
['ANREDE_KZ', 'GREEN_AVANTGARDE', 'SOHO_KZ', 'OST_WEST_KZ']

['CJT_GESAMTTYP', 'FINANZTYP', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'VERS_TYP', 'ZABEOTYP', 'GEBAEUDETYP', 'CAMEO_DEUG_2015', 'CAMEO_DEU_2015']

Refer to Data_Dictionary.md and check the dataframe, we know that

  • Binary: ANREDE_KZ, GREEN_AVANTGARDE, SOHO_KZ, VERS_TYP, OST_WEST_KZ(non-numeric values)
  • Multi: CJT_GESAMTTYP, FINANZTYP, GFK_URLAUBERTYP, LP_FAMILIE_FEIN, LP_FAMILIE_GROB, LP_STATUS_FEIN, LP_STATUS_GROB, NATIONALITAET_KZ, SHOPPER_TYP, SHOPPER_TYP, ZABEOTYP, GEBAEUDETYP, CAMEO_DEUG_2015, CAMEO_DEU_2015
In [26]:
# Re-encode categorical variable(s) to be kept in the analysis.

# Change non-numeric values
map_dict = {"O":0, "W":1}
few['OST_WEST_KZ'] = few['OST_WEST_KZ'].map(map_dict)


# As customers dataset have different multilevels, we drop this feature.
few = few.drop("GEBAEUDETYP", axis = 1)
multi_list.remove("GEBAEUDETYP")
In [27]:
# OneHotEncoder
few = pd.get_dummies(few, prefix = multi_list, columns = multi_list)
In [28]:
few.head(n=5)
Out[28]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... CAMEO_DEU_2015_7E CAMEO_DEU_2015_8A CAMEO_DEU_2015_8B CAMEO_DEU_2015_8C CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E
0 1.0 2 1 5 2 5 4 5 0 3.0 ... 0 1 0 0 0 0 0 0 0 0
1 3.0 2 1 4 1 2 3 5 1 3.0 ... 0 0 0 0 0 0 0 0 0 0
2 4.0 2 4 2 5 2 1 2 0 2.0 ... 0 0 0 0 0 0 0 0 0 0
3 3.0 1 4 3 4 1 3 2 0 3.0 ... 0 0 0 0 0 0 0 0 0 0
4 1.0 2 3 1 5 2 2 5 0 3.0 ... 0 0 0 1 0 0 0 0 0 0

5 rows × 188 columns

Discussion 1.2.1: Re-Encode Categorical Features

There are 18 categorical features (after dropping some columns). We keep all the categorical features, but we do some transformation

  • Change 'OST_WEST_KZ' into numeric value "O":0, "W":1
  • Change all multi-level categoricals into multiple dummy variables

Step 1.2.2: Engineer Mixed-Type Features

There are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest are up to your own choices:

  • "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.
  • "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).
  • If you decide to keep or engineer new features around the other mixed-type features, make sure you note your steps in the Discussion section.

Be sure to check Data_Dictionary.md for the details needed to finish these tasks.

PRAEGENDE_JUGENDJAHRE

Dominating movement of person's youth (avantgarde vs. mainstream; east vs. west)

  • 1: 40s - war years (Mainstream, E+W)
  • 2: 40s - reconstruction years (Avantgarde, E+W)
  • 3: 50s - economic miracle (Mainstream, E+W)
  • 4: 50s - milk bar / Individualisation (Avantgarde, E+W)
  • 5: 60s - economic miracle (Mainstream, E+W)
  • 6: 60s - generation 68 / student protestors (Avantgarde, W)
  • 7: 60s - opponents to the building of the Wall (Avantgarde, E)
  • 8: 70s - family orientation (Mainstream, E+W)
  • 9: 70s - peace movement (Avantgarde, E+W)
  • 10: 80s - Generation Golf (Mainstream, W)
  • 11: 80s - ecological awareness (Avantgarde, W)
  • 12: 80s - FDJ / communist party youth organisation (Mainstream, E)
  • 13: 80s - Swords into ploughshares (Avantgarde, E)
  • 14: 90s - digital media kids (Mainstream, E+W)
  • 15: 90s - ecological awareness (Avantgarde, E+W)

We recode this below: 40s: 1, 50s: 2, 60s: 3, 70s: 4, 80s: 5, 90s: 6 Mainstream: 0, Avantgarde: 1

In [29]:
decade_dict = {1:1, 2:1, 3:2, 4:2, 5:3, 6:3}
In [30]:
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
mix = new_feat.index[new_feat.type == "mixed"].tolist()

decade_dict = {1:1, 2:1, 3:2, 4:2, 5:3, 6:3, 7:3, 8:4, 9:4, 10:5, 11:5, 12:5, 13:5, 14:6, 15:6}
move_dict = {1:0, 2:1, 3:0, 4:1, 5:0, 6:1, 7:1, 8:0, 9:1, 10:0, 11:1, 12:0, 13:1, 14:0, 15:1}

few["decade"] = few["PRAEGENDE_JUGENDJAHRE"].map(decade_dict)
few["movement"] = few["PRAEGENDE_JUGENDJAHRE"].map(move_dict)
few.drop("PRAEGENDE_JUGENDJAHRE", axis = 1)

few[["decade", "movement"]].head(n = 6)
Out[30]:
decade movement
0 6.0 0.0
1 6.0 1.0
2 4.0 0.0
3 4.0 0.0
4 2.0 0.0
5 5.0 0.0

CAMEO_INTL_2015

German CAMEO: Wealth / Life Stage Typology, mapped to international code

  • 11: Wealthy Households - Pre-Family Couples & Singles
  • 12: Wealthy Households - Young Couples With Children
  • 13: Wealthy Households - Families With School Age Children
  • 14: Wealthy Households - Older Families & Mature Couples
  • 15: Wealthy Households - Elders In Retirement
  • 21: Prosperous Households - Pre-Family Couples & Singles
  • 22: Prosperous Households - Young Couples With Children
  • 23: Prosperous Households - Families With School Age Children
  • 24: Prosperous Households - Older Families & Mature Couples
  • 25: Prosperous Households - Elders In Retirement
  • 31: Comfortable Households - Pre-Family Couples & Singles
  • 32: Comfortable Households - Young Couples With Children
  • 33: Comfortable Households - Families With School Age Children
  • 34: Comfortable Households - Older Families & Mature Couples
  • 35: Comfortable Households - Elders In Retirement
  • 41: Less Affluent Households - Pre-Family Couples & Singles
  • 42: Less Affluent Households - Young Couples With Children
  • 43: Less Affluent Households - Families With School Age Children
  • 44: Less Affluent Households - Older Families & Mature Couples
  • 45: Less Affluent Households - Elders In Retirement
  • 51: Poorer Households - Pre-Family Couples & Singles
  • 52: Poorer Households - Young Couples With Children
  • 53: Poorer Households - Families With School Age Children
  • 54: Poorer Households - Older Families & Mature Couples
  • 55: Poorer Households - Elders In Retirement
In [31]:
# Investigate "CAMEO_INTL_2015" and engineer two new variables.

# Define a function to engineer two new variables
def wealth(x):
    if x==x:
        x = int(x)
        if x // 10 ==1:
            return 1
        if x // 10 ==2:
            return 2
        if x // 10 ==3:
            return 3
        if x // 10 ==4:
            return 4
        if x // 10 ==5:
            return 5
    
def life_stage(x):
    if x==x:
        x = int(x)
        if x % 10 ==1:
            return 1
        if x % 10 ==2:
            return 2
        if x % 10 ==3:
            return 3
        if x % 10 ==4:
            return 4
        if x % 10 ==5:
            return 5
In [32]:
few["wealth"] = few["CAMEO_INTL_2015"].apply(wealth)
few["life_stage"] = few["CAMEO_INTL_2015"].apply(life_stage)
few.drop("CAMEO_INTL_2015", axis = 1)
few[["wealth", "life_stage"]].head(n = 6)
Out[32]:
wealth life_stage
0 5.0 1.0
1 2.0 4.0
2 1.0 2.0
3 4.0 3.0
4 5.0 4.0
5 2.0 2.0
In [33]:
few = few.drop(mix, axis =1)

Discussion 1.2.2: Engineer Mixed-Type Features

We engineer two mixed-type features, CAMEO_INTL_2015 and PRAEGENDE_JUGENDJAHRE. Then we drop other mixed-type features.

  • CAMEO_INTL_2015 -> wealth and life stage
  • PRAEGENDE_JUGENDJAHRE -> decade and movement

Step 1.2.3: Complete Feature Selection

In order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:

  • All numeric, interval, and ordinal type columns from the original dataset.
  • Binary categorical features (all numerically-encoded).
  • Engineered features from other multi-level categorical features and mixed features.

Make sure that for any new columns that you have engineered, that you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from the subset with few or no missing values.

In [34]:
# If there are other re-engineering tasks you need to perform, make sure you
# take care of them here. (Dealing with missing data will come in step 2.1.)
few.shape
Out[34]:
(780153, 186)
In [35]:
# Do whatever you need to in order to ensure that the dataframe only contains
# the columns that should be passed to the algorithm functions.
few.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 780153 entries, 0 to 780152
Columns: 186 entries, ALTERSKATEGORIE_GROB to life_stage
dtypes: float64(40), int64(23), uint8(123)
memory usage: 466.5 MB
In [36]:
few.describe()
Out[36]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E decade movement wealth life_stage
count 777528.000000 780153.000000 780153.000000 780153.000000 780153.000000 780153.000000 780153.000000 780153.000000 780153.000000 745629.000000 ... 780153.000000 780153.000000 780153.000000 780153.000000 780153.000000 780153.000000 753679.000000 753679.000000 776497.000000 776497.000000
mean 2.797778 1.521235 3.050657 2.711548 3.439027 2.838339 2.634099 3.144031 0.220073 2.203840 ... 0.022370 0.026133 0.035248 0.031857 0.036407 0.007892 4.323676 0.227804 3.274299 2.870714
std 1.019078 0.499549 1.378001 1.486898 1.376730 1.473251 1.393676 1.398751 0.414296 0.755139 ... 0.147884 0.159532 0.184407 0.175618 0.187301 0.088486 1.458356 0.419416 1.465495 1.487881
min 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 0.000000 1.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 1.000000 1.000000
25% 2.000000 1.000000 2.000000 1.000000 2.000000 1.000000 1.000000 2.000000 0.000000 2.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 3.000000 0.000000 2.000000 1.000000
50% 3.000000 2.000000 3.000000 3.000000 4.000000 3.000000 2.000000 3.000000 0.000000 2.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 4.000000 0.000000 4.000000 3.000000
75% 4.000000 2.000000 4.000000 4.000000 5.000000 4.000000 4.000000 4.000000 0.000000 3.000000 ... 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 6.000000 0.000000 5.000000 4.000000
max 4.000000 2.000000 5.000000 5.000000 5.000000 5.000000 5.000000 5.000000 1.000000 3.000000 ... 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 6.000000 1.000000 5.000000 5.000000

8 rows × 186 columns

In [37]:
few.head(n = 5)
Out[37]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E decade movement wealth life_stage
0 1.0 2 1 5 2 5 4 5 0 3.0 ... 0 0 0 0 0 0 6.0 0.0 5.0 1.0
1 3.0 2 1 4 1 2 3 5 1 3.0 ... 0 0 0 0 0 0 6.0 1.0 2.0 4.0
2 4.0 2 4 2 5 2 1 2 0 2.0 ... 0 0 0 0 0 0 4.0 0.0 1.0 2.0
3 3.0 1 4 3 4 1 3 2 0 3.0 ... 0 0 0 0 0 0 4.0 0.0 4.0 3.0
4 1.0 2 3 1 5 2 2 5 0 3.0 ... 0 0 0 0 0 0 2.0 0.0 5.0 4.0

5 rows × 186 columns

In [38]:
few.isnull().sum()
Out[38]:
ALTERSKATEGORIE_GROB      2625
ANREDE_KZ                    0
FINANZ_MINIMALIST            0
FINANZ_SPARER                0
FINANZ_VORSORGER             0
FINANZ_ANLEGER               0
FINANZ_UNAUFFAELLIGER        0
FINANZ_HAUSBAUER             0
GREEN_AVANTGARDE             0
HEALTH_TYP               34524
RETOURTYP_BK_S            3834
SEMIO_SOZ                    0
SEMIO_FAM                    0
SEMIO_REL                    0
SEMIO_MAT                    0
SEMIO_VERT                   0
SEMIO_LUST                   0
SEMIO_ERL                    0
SEMIO_KULT                   0
SEMIO_RAT                    0
SEMIO_KRIT                   0
SEMIO_DOM                    0
SEMIO_KAEM                   0
SEMIO_PFLICHT                0
SEMIO_TRADV                  0
SOHO_KZ                      0
ANZ_PERSONEN                 0
ANZ_TITEL                    0
HH_EINKOMMEN_SCORE           0
W_KEIT_KIND_HH           56282
                         ...  
CAMEO_DEU_2015_5A            0
CAMEO_DEU_2015_5B            0
CAMEO_DEU_2015_5C            0
CAMEO_DEU_2015_5D            0
CAMEO_DEU_2015_5E            0
CAMEO_DEU_2015_5F            0
CAMEO_DEU_2015_6A            0
CAMEO_DEU_2015_6B            0
CAMEO_DEU_2015_6C            0
CAMEO_DEU_2015_6D            0
CAMEO_DEU_2015_6E            0
CAMEO_DEU_2015_6F            0
CAMEO_DEU_2015_7A            0
CAMEO_DEU_2015_7B            0
CAMEO_DEU_2015_7C            0
CAMEO_DEU_2015_7D            0
CAMEO_DEU_2015_7E            0
CAMEO_DEU_2015_8A            0
CAMEO_DEU_2015_8B            0
CAMEO_DEU_2015_8C            0
CAMEO_DEU_2015_8D            0
CAMEO_DEU_2015_9A            0
CAMEO_DEU_2015_9B            0
CAMEO_DEU_2015_9C            0
CAMEO_DEU_2015_9D            0
CAMEO_DEU_2015_9E            0
decade                   26474
movement                 26474
wealth                    3656
life_stage                3656
Length: 186, dtype: int64

Step 1.3: Create a Cleaning Function

Even though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.

In [39]:
def clean_data(df):
    """
    Perform feature trimming, re-encoding, and engineering for demographics
    data
    
    INPUT: Demographics DataFrame
    OUTPUT: Trimmed and cleaned demographics DataFrame
    """
    
    # Put in code here to execute all main cleaning steps:
    # convert missing value codes into NaNs, ...
    for col in cols:
        na = na_dict[feat_info.loc[col].missing_or_unknown]
        df[col] = df[col].map(lambda x: np.nan if x in na else x)
    
    
    # remove selected columns and rows, ...
    df = df.drop(top_6.index, axis = 1)
    missing_row = df.isnull().sum(axis = 1)
    df = df[missing_row <= 10].reset_index(drop=True)
    
    # select, re-encode, and engineer column values.
    df['OST_WEST_KZ'] = df['OST_WEST_KZ'].map(map_dict)
    df = df.drop("GEBAEUDETYP", axis = 1)
    df = pd.get_dummies(df, prefix = multi_list, columns = multi_list)
    df["decade"] = df["PRAEGENDE_JUGENDJAHRE"].map(decade_dict)
    df["movement"] = df["PRAEGENDE_JUGENDJAHRE"].map(move_dict)
    df["wealth"] = df["CAMEO_INTL_2015"].apply(wealth)
    df["life_stage"] = df["CAMEO_INTL_2015"].apply(life_stage)
    df = df.drop(mix, axis =1)
    
    
    # Return the cleaned dataframe.
    return df
    
    
In [40]:
clean_data(azdias).shape
Out[40]:
(780153, 186)

Step 2: Feature Transformation

Step 2.1: Apply Feature Scaling

Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the API reference page for sklearn to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:

  • sklearn requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to your data, make sure that you've cleaned the DataFrame of the remaining missing values. This can be as simple as just removing all data points with missing data, or applying an Imputer to replace all missing values. You might also try a more complicated procedure where you temporarily remove missing values in order to compute the scaling parameters before re-introducing those missing values and applying imputation. Think about how much missing data you have and what possible effects each approach might have on your analysis, and justify your decision in the discussion section below.
  • For the actual scaling function, a StandardScaler instance is suggested, scaling each feature to mean 0 and standard deviation 1.
  • For these classes, you can make use of the .fit_transform() method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.
In [41]:
from sklearn.preprocessing import Imputer, StandardScaler
In [42]:
# If you've not yet cleaned the dataset of all NaN values, then investigate and
# do that now.
fill_na = Imputer(strategy = "most_frequent", missing_values = "NaN", axis = 0)
few_impute = fill_na.fit_transform(few)
In [43]:
# Apply feature scaling to the general population demographics data.
scaler = StandardScaler()
few_scale = scaler.fit_transform(few_impute)
few_scale = pd.DataFrame(few_scale, columns=list(few))
few_scale.head(n=5)
Out[43]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E decade movement wealth life_stage
0 -1.767651 0.958395 -1.488140 1.539080 -1.045251 1.467273 0.980071 1.326876 -0.531199 1.006277 ... -0.151267 -0.163813 -0.191144 -0.181397 -0.194377 -0.08919 1.105284 -0.531199 1.170995 -1.249732
1 0.198089 0.958395 -1.488140 0.866538 -1.771610 -0.569041 0.262544 1.326876 1.882535 1.006277 ... -0.151267 -0.163813 -0.191144 -0.181397 -0.194377 -0.08919 1.105284 1.882535 -0.874275 0.763856
2 1.180959 0.958395 0.688928 -0.478545 1.133828 -0.569041 -1.172510 -0.817895 -0.531199 -0.316158 ... -0.151267 -0.163813 -0.191144 -0.181397 -0.194377 -0.08919 -0.259737 -0.531199 -1.556031 -0.578536
3 0.198089 -1.043411 0.688928 0.193996 0.407468 -1.247812 0.262544 -0.817895 -0.531199 1.006277 ... -0.151267 -0.163813 -0.191144 -0.181397 -0.194377 -0.08919 -0.259737 -0.531199 0.489238 0.092660
4 -1.767651 0.958395 -0.036761 -1.151087 1.133828 -0.569041 -0.454983 1.326876 -0.531199 1.006277 ... -0.151267 -0.163813 -0.191144 -0.181397 -0.194377 -0.08919 -1.624758 -0.531199 1.170995 0.763856

5 rows × 186 columns

Discussion 2.1: Apply Feature Scaling

We fill all NaN with mode. Then we do feature scaling (scaling each feature to mean 0 and standard deviation 1)

Step 2.2: Perform Dimensionality Reduction

On your scaled data, you are now ready to apply dimensionality reduction techniques.

  • Use sklearn's PCA class to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, you should not set any parameters (so all components are computed) or set a number of components that is at least half the number of features (so there's enough features to see the general trend in variability).
  • Check out the ratio of variance explained by each principal component as well as the cumulative variance explained. Try plotting the cumulative or sequential values using matplotlib's plot() function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.
  • Once you've made a choice for the number of components to keep, make sure you re-fit a PCA instance to perform the decided-on transformation.
In [44]:
from sklearn.decomposition import PCA
In [45]:
# Apply PCA to the data.
pca = PCA()
pca.fit(few_scale)
Out[45]:
PCA(copy=True, iterated_power='auto', n_components=None, random_state=None,
  svd_solver='auto', tol=0.0, whiten=False)
In [46]:
# Investigate the variance accounted for by each principal component.
plt.bar(range(len(pca.explained_variance_ratio_)), pca.explained_variance_ratio_)
plt.title("Variance explained by each component")
plt.xlabel("Principal component")
plt.ylabel("Ratio of variance explained")
plt.show()
In [47]:
plt.plot(range(len(pca.explained_variance_ratio_)),np.cumsum(pca.explained_variance_ratio_), '-')
plt.title("Cumulative Variance Explained")
plt.xlabel("Number of Components")
plt.ylabel("Ratio of variance explained")
plt.show()
In [48]:
# Re-apply PCA to the data while selecting for number of components to retain.
pca_80 = PCA(n_components=80)
azdias_pca = pca_80.fit_transform(few_scale)
In [49]:
sum(pca_80.explained_variance_ratio_)
Out[49]:
0.79232210759369148

Discussion 2.2: Perform Dimensionality Reduction

I choose 80 components, capturing more than 79% of the variance.

In [49]:
len(pca_80.components_)
Out[49]:
80

Step 2.3: Interpret Principal Components

Now that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.

As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one tend expect to be associated with increases in the other. To contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.

  • To investigate the features, you should map each weight to their corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. Use the data dictionary document to help you understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.
  • You should investigate and interpret feature associations from the first three principal components in this substep. To help facilitate this, you should write a function that you can call at any time to print the sorted list of feature weights, for the i-th principal component. This might come in handy in the next step of the project, when you interpret the tendencies of the discovered clusters.
In [50]:
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.

def pca_results(full_dataset, pca, num):
    '''
    Create a DataFrame of the PCA results
    Includes dimension feature weights and explained variance
    Visualizes the PCA results
    '''
    # Dimension indexing
    dimensions = ['Dimension {}'.format(i) for i in range(1,len(pca.components_)+1)]

    # PCA components
    components = pd.DataFrame(np.round(pca.components_, 4), columns = full_dataset.keys())
    components.index = dimensions

    # PCA explained variance
    ratios = pca.explained_variance_ratio_.reshape(len(pca.components_), 1)
    variance_ratios = pd.DataFrame(np.round(ratios, 4), columns = ['Explained Variance'])
    variance_ratios.index = dimensions

    # Create a bar plot visualization
    fig, ax = plt.subplots(2,1,figsize = (20,14))
    plt.subplots_adjust(hspace=0.45)
    fig.suptitle("{} Componet Explained Variance {:.4f}".format(num,pca.explained_variance_ratio_[num-1]),fontsize=20)

    # Plot the feature weights as a function of the components
    weight = components.iloc[num - 1]
    pos = weight[weight > 0] 
    pos.sort_values(ascending=False).plot(kind = "bar", ax = ax[0])
    neg = weight[weight < 0] 
    neg.sort_values().plot(kind = "bar", ax = ax[1])
    ax[0].set_ylabel("Feature Weights")
    ax[1].set_ylabel("Feature Weights")
    print(pos.sort_values(ascending=False)[0:5])
    print(neg.sort_values()[0:5])
In [51]:
pca_results(few_scale, pca_80, 1)
LP_STATUS_GROB_1.0    0.1983
HH_EINKOMMEN_SCORE    0.1875
wealth                0.1861
PLZ8_ANTG3            0.1825
PLZ8_ANTG4            0.1749
Name: Dimension 1, dtype: float64
FINANZ_MINIMALIST   -0.1966
MOBI_REGIO          -0.1933
KBA05_ANTG1         -0.1845
PLZ8_ANTG1          -0.1830
KBA05_GBZ           -0.1816
Name: Dimension 1, dtype: float64
In [52]:
pca_results(few_scale, pca_80, 2)
ALTERSKATEGORIE_GROB    0.2298
FINANZ_VORSORGER        0.2157
ZABEOTYP_3              0.1998
SEMIO_ERL               0.1794
SEMIO_LUST              0.1599
Name: Dimension 2, dtype: float64
decade                  -0.2279
FINANZ_SPARER           -0.2224
SEMIO_REL               -0.2137
FINANZ_UNAUFFAELLIGER   -0.2132
SEMIO_TRADV             -0.2058
Name: Dimension 2, dtype: float64
In [53]:
pca_results(few_scale, pca_80, 3)
SEMIO_VERT     0.3194
SEMIO_FAM      0.2603
SEMIO_SOZ      0.2582
SEMIO_KULT     0.2506
FINANZTYP_5    0.1365
Name: Dimension 3, dtype: float64
ANREDE_KZ    -0.3453
SEMIO_KAEM   -0.3147
SEMIO_DOM    -0.2827
SEMIO_KRIT   -0.2652
SEMIO_ERL    -0.2075
Name: Dimension 3, dtype: float64

Discussion 2.3: Interpret Principal Components

We can see that several features have high weight in first component. In the first component, wealth has positive weight(high wealth score means poor under this condition) and low financial interest (FINANZ_MINIMALIST) have negative weight. It also makes sense in the real world.

Step 3: Clustering

Step 3.1: Apply Clustering to General Population

You've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.

  • Use sklearn's KMeans class to perform k-means clustering on the PCA-transformed data.
  • Then, compute the average difference from each point to its assigned cluster's center. Hint: The KMeans object's .score() method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. Try applying it to a small, toy dataset, or use an internet search to help your understanding.
  • Perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. However, each additional cluster provides a smaller net benefit. Use this fact to select a final number of clusters in which to group the data. Warning: because of the large size of the dataset, it can take a long time for the algorithm to resolve. The more clusters to fit, the longer the algorithm will take. You should test for cluster counts through at least 10 clusters to get the full picture, but you shouldn't need to test for a number of clusters above about 30.
  • Once you've selected a final number of clusters to use, re-fit a KMeans instance to perform the clustering operation. Make sure that you also obtain the cluster assignments for the general demographics data, since you'll be using them in the final Step 3.3.
In [ ]:
# Over a number of different cluster counts...


    # run k-means clustering on the data and...
    
    
    # compute the average within-cluster distances.
   
    
In [54]:
from sklearn.cluster import KMeans
In [55]:
def K_score(data, n):
    kmeans = KMeans(n_clusters = n)
    model = kmeans.fit(data)
    score = np.abs(model.score(data))
    return score
In [56]:
# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship.
scores = []
clusters = list(range(1,12))
for k in clusters:
    print(k)
    scores.append(K_score(azdias_pca, k))
1
2
3
4
5
6
7
8
9
10
11
In [57]:
plt.plot(clusters, scores, linestyle='-', marker='o')
plt.xlabel('K')
plt.ylabel('Score')
Out[57]:
Text(0,0.5,'Score')
In [58]:
# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
kmeans = KMeans(n_clusters = 5)
k_model = kmeans.fit(azdias_pca)
In [59]:
labels = k_model.predict(azdias_pca)

Discussion 3.1: Apply Clustering to General Population

I decided to choose 5 because after 5 clusters, the average distance decrease is obiviously smaller than before.

Step 3.2: Apply All Steps to the Customer Data

Now that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.

  • Don't forget when loading in the customers data, that it is semicolon (;) delimited.
  • Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the clean_data() function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)
  • Use the sklearn objects from the general demographics data, and apply their transformations to the customers data. That is, you should not be using a .fit() or .fit_transform() method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.
In [60]:
# Load in the customer demographics data.
customers = pd.read_csv("./Udacity_CUSTOMERS_Subset.csv", sep=';')
In [61]:
customers = clean_data(customers)
In [62]:
customers.shape
Out[62]:
(139068, 186)
In [64]:
# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
customers_new = fill_na.transform(customers)
customers_new = scaler.fit_transform(customers_new)
customers_new = pd.DataFrame(customers_new, columns=list(customers))
customers_new.head(n=5)
Out[64]:
ALTERSKATEGORIE_GROB ANREDE_KZ FINANZ_MINIMALIST FINANZ_SPARER FINANZ_VORSORGER FINANZ_ANLEGER FINANZ_UNAUFFAELLIGER FINANZ_HAUSBAUER GREEN_AVANTGARDE HEALTH_TYP ... CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E decade movement wealth life_stage
0 0.700795 -0.702090 0.719536 -0.512607 0.512420 -0.597750 0.159814 -0.565048 0.996755 -1.263772 ... -0.125902 -0.085532 -0.081158 -0.08394 -0.118849 -0.102073 -0.660930 0.996755 -1.137068 -0.272431
1 0.700795 1.424318 0.719536 -0.512607 0.512420 -0.597750 2.232557 0.956068 0.996755 0.023559 ... -0.125902 -0.085532 -0.081158 -0.08394 -0.118849 -0.102073 -0.660930 0.996755 0.282441 0.472529
2 0.700795 -0.702090 0.719536 -0.512607 0.512420 0.403598 -0.876558 -0.565048 -1.003256 0.023559 ... -0.125902 -0.085532 -0.081158 -0.08394 -0.118849 -0.102073 -1.395039 -1.003256 -0.427314 0.472529
3 -0.679750 -0.702090 -1.246115 -0.512607 -0.680895 2.406293 3.268929 -0.565048 -1.003256 1.310889 ... -0.125902 -0.085532 -0.081158 -0.08394 -0.118849 -0.102073 0.807289 -1.003256 0.992196 -1.762351
4 -0.679750 -0.702090 0.719536 -0.512607 0.512420 -0.597750 0.159814 0.195510 0.996755 1.310889 ... -0.125902 -0.085532 -0.081158 -0.08394 -0.118849 -0.102073 -0.660930 0.996755 0.282441 0.472529

5 rows × 186 columns

In [65]:
customers_pca = pca_80.transform(customers_new)
In [66]:
customers_labels = k_model.predict(customers_pca)

Step 3.3: Compare Customer Data to Demographics Data

At this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.

Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.

Take a look at the following points in this step:

  • Compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, but also to visualize the ratios in cluster representation between groups. Seaborn's countplot() or barplot() function could be handy.
    • Recall the analysis you performed in step 1.1.3 of the project, where you separated out certain data points from the dataset if they had more than a specified threshold of missing values. If you found that this group was qualitatively different from the main bulk of the data, you should treat this as an additional data cluster in this analysis. Make sure that you account for the number of data points in this subset, for both the general population and customer datasets, when making your computations!
  • Which cluster or clusters are overrepresented in the customer dataset compared to the general population? Select at least one such cluster and infer what kind of people might be represented by that cluster. Use the principal component interpretations from step 2.3 or look at additional components to help you make this inference. Alternatively, you can use the .inverse_transform() method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly.
  • Perform a similar investigation for the underrepresented clusters. Which cluster or clusters are underrepresented in the customer dataset compared to the general population, and what kinds of people are typified by these clusters?
In [69]:
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
general_prop = []
customers_prop = []
cluster = [i for i in range(5)]
for i in range(5):
    general_prop.append((labels == i).sum()/len(labels))
    customers_prop.append((customers_labels == i).sum()/len(customers_labels))


df_cluster = pd.DataFrame({'cluster' : cluster, 'prop_general' : general_prop, 'prop_customers':customers_prop})

df_cluster.plot(x='cluster', y = ['prop_general', 'prop_customers'], kind='bar', figsize=(9,6))
plt.ylabel('proportion of persons in each cluster')
plt.show()
In [78]:
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?
centroid_2 = scaler.inverse_transform(pca_80.inverse_transform(k_model.cluster_centers_[2]))
over = pd.Series(data = centroid_2, index=list(customers))
over.head(n=5)
Out[78]:
ALTERSKATEGORIE_GROB    3.788174
ANREDE_KZ               1.286280
FINANZ_MINIMALIST       5.325907
FINANZ_SPARER           0.927187
FINANZ_VORSORGER        4.922096
dtype: float64
In [77]:
# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?
centroid_1 = scaler.inverse_transform(pca_80.inverse_transform(k_model.cluster_centers_[1]))
under = pd.Series(data = centroid_1, index=list(customers))
under.head(n=5)
Out[77]:
ALTERSKATEGORIE_GROB    2.816068
ANREDE_KZ               1.380176
FINANZ_MINIMALIST       3.065930
FINANZ_SPARER           2.371348
FINANZ_VORSORGER        3.785510
dtype: float64
In [79]:
pd.concat([over, under], axis=1)
Out[79]:
0 1
ALTERSKATEGORIE_GROB 3.788174 2.816068
ANREDE_KZ 1.286280 1.380176
FINANZ_MINIMALIST 5.325907 3.065930
FINANZ_SPARER 0.927187 2.371348
FINANZ_VORSORGER 4.922096 3.785510
FINANZ_ANLEGER 0.995054 2.404932
FINANZ_UNAUFFAELLIGER 1.611189 2.643530
FINANZ_HAUSBAUER 1.697277 3.588243
GREEN_AVANTGARDE 1.410267 0.303163
HEALTH_TYP 1.854121 2.124162
RETOURTYP_BK_S 4.073390 3.294563
SEMIO_SOZ 4.360658 4.675642
SEMIO_FAM 3.483524 4.698356
SEMIO_REL 2.598246 4.462494
SEMIO_MAT 2.928081 4.478332
SEMIO_VERT 5.636351 4.742353
SEMIO_LUST 6.077389 4.632943
SEMIO_ERL 5.196568 3.845021
SEMIO_KULT 3.549235 4.835379
SEMIO_RAT 2.398453 3.963907
SEMIO_KRIT 3.669264 3.774397
SEMIO_DOM 4.072575 4.188668
SEMIO_KAEM 3.448402 3.890891
SEMIO_PFLICHT 2.415825 4.293316
SEMIO_TRADV 2.543023 3.971526
SOHO_KZ 0.011622 0.008798
ANZ_PERSONEN 2.932183 1.913366
ANZ_TITEL 0.041726 0.014942
HH_EINKOMMEN_SCORE 1.023557 4.491783
W_KEIT_KIND_HH 3.969683 4.472539
... ... ...
CAMEO_DEU_2015_5A 0.007500 0.015354
CAMEO_DEU_2015_5B 0.012604 0.006035
CAMEO_DEU_2015_5C 0.007414 0.003631
CAMEO_DEU_2015_5D 0.035361 0.033336
CAMEO_DEU_2015_5E 0.006897 0.003254
CAMEO_DEU_2015_5F 0.010506 0.003827
CAMEO_DEU_2015_6A 0.001895 0.006850
CAMEO_DEU_2015_6B 0.052467 0.057468
CAMEO_DEU_2015_6C 0.017562 0.017168
CAMEO_DEU_2015_6D 0.008259 0.009689
CAMEO_DEU_2015_6E 0.006694 0.019881
CAMEO_DEU_2015_6F 0.006351 0.005372
CAMEO_DEU_2015_7A 0.007285 0.038813
CAMEO_DEU_2015_7B 0.006302 0.040606
CAMEO_DEU_2015_7C 0.003577 0.014180
CAMEO_DEU_2015_7D 0.001700 0.007564
CAMEO_DEU_2015_7E 0.004706 0.006069
CAMEO_DEU_2015_8A -0.009131 0.078653
CAMEO_DEU_2015_8B -0.000240 0.055043
CAMEO_DEU_2015_8C -0.000960 0.037106
CAMEO_DEU_2015_8D 0.000237 0.019582
CAMEO_DEU_2015_9A -0.005008 0.027722
CAMEO_DEU_2015_9B -0.008079 0.031320
CAMEO_DEU_2015_9C -0.006866 0.029118
CAMEO_DEU_2015_9D -0.006070 0.042653
CAMEO_DEU_2015_9E 0.001683 0.006366
decade 2.334033 4.160503
movement 1.410267 0.303163
wealth 1.403066 3.642325
life_stage 4.030825 2.625410

186 rows × 2 columns

Discussion 3.3: Compare Customer Data to Demographics Data

(Double-click this cell and replace this text with your own text, reporting findings and conclusions from the clustering analysis. Can we describe segments of the population that are relatively popular with the mail-order company, or relatively unpopular with the company?)

From the clustering analysis, we can see that:

  • Cluster 1 is underrepresented and cluster 2 is overrepresented.
  • By comparing cluster 1 and cluster we can know that:
    • Customers tend to be in good economic conditions (Over_wealth: 1.4 vs. Under: 3.6)
    • Customers tend to be in the late life stage (life_stage over: 4.03 vs. under 2.63)
    • Customers tend to be less materialistic and more dreamful

Congratulations on making it this far in the project! Before you finish, make sure to check through the entire notebook from top to bottom to make sure that your analysis follows a logical flow and all of your findings are documented in Discussion cells. Once you've checked over all of your work, you should export the notebook as an HTML document to submit for evaluation. You can do this from the menu, navigating to File -> Download as -> HTML (.html). You will submit both that document and this notebook for your project submission.

In [ ]: