End To End Company Predictive Marketing Using RFM (Recency Frequency Monetary) Behavioral Based Clustering Algorithm (Part 1).
Unsupervised Machine Learning | Object Oriented Programming | Python | Advanced Level | Maximilien Kpizingui | Smart.ai Submission United Kingdom 2022
Digital transformation, including web, email, mobile, social, and location technologies combined with technologies to store, process, and extract information, has significantly changed our world. Nowadays, every entrepreneur in the process of growing a business faces the question of how to make customers more loyal and keep them from going to a competitor. In this light, predictive marketing is the approach that restores the bridge by bringing human sensibility into our digital world, focusing on consumers to understand what they did, what they will do next, and which product they are likely to buy. In what follows, we are going to apply predictive marketing to segment and cluster customer behavior on Shopify using the recency frequency monetary (RFM) clustering algorithm.
Contents
1 - What is predictive marketing
2 - What is RFM analysis and why it is useful
3 - What is the difference between Clustering and segmentation
4 - Different types of clustering
5 - Diving into the algorithm with ML object-oriented programming in Python
1 - What is predictive marketing
To understand predictive marketing, in my humble opinion it helps to first know what predictive analytics is. In machine learning and big data terms, predictive analytics is a combination of mathematical and statistical techniques used to recognize similarities in data or to make predictions about the future. When predictive analytics is applied to marketing, it can predict customer behavior, classify customers into segments, recommend a set of products to customers, and so on. So predictive marketing, built on top of predictive analytics, helps companies optimize their marketing strategy to acquire new customers, grow customer lifetime value (the revenue generated by purchased products), and retain more customers over time. However, someone reading this blog may ask: is predictive marketing going to replace marketers with robots? The answer is no; the purpose of predictive marketing is to empower human intelligence with machine learning to increase customer lifetime value. For instance, Amazon has been using predictive analytics for a long time. Pay close attention to the recommendations that appear under a product you are thinking of adding to your cart: they are part of what makes Amazon such a successful e-commerce platform today.
2 - What is RFM analysis and why it is useful?
To grasp RFM analysis, let us first look at customer segmentation. Customer segmentation is the process of dividing customers into groups based on common characteristics so companies can market to each group effectively and appropriately. With this in mind, recency frequency monetary (RFM) analysis is a behavior-based approach that groups customers into segments based on their previous purchase transactions: how recently (recency), how often (frequency), and how much (monetary) a customer bought products or services. Basically, the RFM model ranks each customer on a scale, typically 1 to 5 per dimension; the higher the ranking, the more likely the customer is to do business with the firm again. Furthermore, it gives organizations a sense of how much revenue comes from returning customers. It therefore helps marketers leverage their marketing strategy to keep high-value and medium-value customers loyal, and to move targeted low-value customer segments toward high value through promotions, ads, and discounts on products.
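To make this concrete, here is a minimal, self-contained sketch that computes recency, frequency and monetary value from a tiny made-up transactions table; the customer IDs, dates and amounts are purely illustrative, with recency measured as days since the last purchase.

import pandas as pd

# toy transactions, one row per purchase (illustrative data only)
tx = pd.DataFrame({
    "customer_id": ["A", "A", "B", "C", "C", "C"],
    "order_date": pd.to_datetime(["2022-01-05", "2022-03-01", "2022-02-10", "2022-01-20", "2022-02-15", "2022-03-10"]),
    "amount": [50, 20, 200, 10, 15, 30],
})

snapshot = tx["order_date"].max() + pd.Timedelta(days=1)  # reference date for recency
rfm = tx.groupby("customer_id").agg(
    Recency=("order_date", lambda s: (snapshot - s.max()).days),  # days since last purchase
    Frequency=("order_date", "count"),                            # number of purchases
    Monetary=("amount", "sum"),                                    # total amount spent
)
print(rfm)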
3 - What is the difference between Clustering and segmentation
Clustering is the automated, machine-learning-powered version of segmentation. It is a powerful tool that allows us to discover personas or communities within a customer base. Segmentation, on the other hand, is the process in which you divide customers to identify homogeneous groups within your customer base, which can then be used to optimize and differentiate marketing actions or product strategy.
4 - Different types of clustering
The types of clustering most frequently used by data analysts are product-based clusters, brand-based clusters, and behavior-based clusters.
- Product-based Clusters
Product-based or category-based clustering models group customers based on what types or categories of products they tend to prefer and what types of products they tend to buy together.
- Brand-based Clusters
Brand-based clusters tell you which brands people are most likely to buy. They group together customers who prefer one set of brands over others. For instance, you will be able to identify which customers are likely to be interested when a specific brand releases new products.
- Behavior-based Clusters
A behavior-based clustering model groups customers based on how they behave while purchasing. Do they use the website or the call center? Are they discount addicts? How frequently do they buy? How much do they spend? How much time will pass before they purchase again? The sketch below illustrates the kind of features involved.
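Here, a handful of hypothetical behavioral columns (the feature names and values are invented for illustration) are gathered per customer and handed to a clustering algorithm such as K-Means:

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

# invented behavioral features, one row per customer
behavior = pd.DataFrame({
    "orders_per_year": [2, 15, 6, 1, 20],
    "avg_order_value": [250, 30, 80, 400, 25],
    "discount_order_share": [0.0, 0.8, 0.3, 0.1, 0.9],
    "days_between_orders": [180, 20, 60, 365, 15],
})

scaled = MinMaxScaler().fit_transform(behavior)  # put every feature on a comparable 0-1 scale
labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(scaled)
print(behavior.assign(cluster=labels))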
5 - Diving into the algorithm with ML OOP in Python
The use of OOP (Object-Oriented Programming) is entirely optional in machine learning, since libraries like Scikit-learn and TensorFlow already let us use the algorithms directly. If you are new to Python and reading this article, don't worry: pause at this point, look up OOP in Python, and come back to follow the rest.
- a) Importing the libraries
from sklearn.preprocessing import MinMaxScaler
from yellowbrick.cluster import KElbowVisualizer
from matplotlib.gridspec import GridSpec
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import plotly.express as px
import seaborn as sns
import datetime as dt
import pandas as pd
import numpy as np
import os
- b) Defining a class, containing a function to preprocess the dataset
class SomeModel():
def __init__(self):
pass
def get_preprocessing(self, a, b, c, d):
# a, b, c are the column names that must not be null; d is the path to the CSV file
# removing duplicated rows and dropping NaN values
X = pd.read_csv(d).drop_duplicates(keep="first")
X = X[pd.notnull(X[a])]
X = X[pd.notnull(X[b])]
X = X[pd.notnull(X[c])]
return X
- Checking the output of this call. I use (....) to respect indentation while typing this code on my blog; replace each set of four dots with four spaces when testing the code in your environment.
if __name__ == '__main__':
....model_instance = SomeModel()
....print(model_instance.get_preprocessing('location_country', 'referrer_source', 'referrer_name', 'shopify_data_seller1.csv'))
- c) Defining a function to get RFM modelling
def get_rfm_modeling(self, a, b, c, d):
# function to return the RFM dataframe, one row per country
preprocessed_df = self.get_preprocessing(a, b, c, d)
# Recency: total sessions per country
df_recency = preprocessed_df.groupby(by='location_country', as_index=False)['total_sessions'].sum()
df_recency.columns = ['location_country', 'Recency']
# Frequency: count of conversion records per country
frequency_df = preprocessed_df.drop_duplicates().groupby(by=['location_country'], as_index=False)['total_conversion'].count()
frequency_df.columns = ['location_country', 'Frequency']
# Monetary: conversions multiplied by carts, summed per country
preprocessed_df['Total'] = preprocessed_df['total_conversion'] * preprocessed_df['total_carts']
monetary_df = preprocessed_df.groupby(by='location_country', as_index=False)['Total'].sum()
monetary_df.columns = ['location_country', 'Monetary']
rf_df = df_recency.merge(frequency_df, on='location_country')
rfm_df = rf_df.merge(monetary_df, on='location_country')
return rfm_df
- Checking the output
if __name__ == '__main__':
....model_instance = SomeModel()
....print(model_instance.get_rfm_modeling("location_country", "referrer_source", "referrer_name", 'shopify_data_seller1.csv'))
- d) Defining a function to get the R_score
def R_score(self, var, p, d):
# Recency score from quartiles: var is the value, p the column name, d the quantile table; more activity on the platform earns a higher score
if var <= d[p][0.25]:
return 1
elif var <= d[p][0.50]:
return 2
elif var <= d[p][0.75]:
return 3
else:
return 4
- e) Defining a function to get FM_score
def FM_score(self, var, p, d):
# Frequency and Monetary score (positive impact: the higher the value, the better the customer); note that this scale runs in the opposite direction to R_score
if var <= d[p][0.25]:
return 4
elif var <= d[p][0.50]:
return 3
elif var <= d[p][0.75]:
return 2
else:
return 1
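- As an illustration, both helpers can be fed a small hand-made quantile table (the numbers below are invented), in the same calling style as the earlier checks:
if __name__ == '__main__':
....model_instance = SomeModel()
....quantiles = pd.DataFrame({'Recency': {0.25: 10, 0.50: 25, 0.75: 60}, 'Frequency': {0.25: 2, 0.50: 5, 0.75: 9}})
....print(model_instance.R_score(30, 'Recency', quantiles))    # the value sits in the third quartile band, so the score is 3
....print(model_instance.FM_score(6, 'Frequency', quantiles))  # same band on the reversed scale, so the score is 2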
- f) Defining a function to get the RFM score
def get_rfmscore(self,a,b,c,d):
#Segmentation: Here, we will divide the data set into 4 parts based on the quantiles.
rfm_df = self.get_rfm_modeling(a,b,c,d)
quantiles =rfm_df.drop('location_country',axis = 1).quantile(q = [0.25,0.5,0.75])
rfm_df['R_score'] = rfm_df['Recency'].apply(self.R_score,args = ('Recency',quantiles,))
rfm_df['F_score'] = rfm_df['Frequency'].apply(self.FM_score,args = ('Frequency',quantiles,))
rfm_df['M_score'] = rfm_df['Monetary'].apply(self.FM_score,args = ('Monetary',quantiles,))
#Now we will create : RFMGroup and RFMScore
rfm_df['RFM_Group'] = rfm_df['R_score'].astype(str) + rfm_df['F_score'].astype(str) + rfm_df['M_score'].astype(str)
#Score
rfm_df['RFM_Score'] = rfm_df[['R_score','F_score','M_score']].sum(axis = 1)
rfm_df['R_rank'] = rfm_df['Recency'].rank(ascending=False)
rfm_df['F_rank'] = rfm_df['Frequency'].rank(ascending=True)
rfm_df['M_rank'] = rfm_df['Monetary'].rank(ascending=True)
# normalizing the rank of the customers
rfm_df['R_rank_norm'] = (rfm_df['R_rank']/rfm_df['R_rank'].max())*100
rfm_df['F_rank_norm'] = (rfm_df['F_rank']/rfm_df['F_rank'].max())*100
rfm_df['M_rank_norm'] = (rfm_df['M_rank']/rfm_df['M_rank'].max())*100
rfm_df.drop(columns=['R_rank', 'F_rank', 'M_rank'], inplace=True)
# weighted RFM score built from the normalized ranks (it replaces the simple sum computed above), scaled to a 0-5 range
rfm_df['RFM_Score'] = 0.15*rfm_df['R_rank_norm'] + 0.28*rfm_df['F_rank_norm'] + 0.57*rfm_df['M_rank_norm']
rfm_df['RFM_Score'] *= 0.05
rfm_df = rfm_df.round(2)
return rfm_df
- Checking the output
if __name__ == '__main__':
....model_instance = SomeModel()
....print(model_instance.get_rfmscore("location_country", "referrer_source", "referrer_name",'shopify_data_seller1.csv'))
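- To see what the weighting at the end of get_rfmscore does, take one hypothetical row whose normalized ranks are R_rank_norm = 80, F_rank_norm = 60 and M_rank_norm = 90 (numbers chosen only for illustration):
# weighted combination of the normalized ranks, scaled to a 0-5 range
rfm_score = 0.05 * (0.15 * 80 + 0.28 * 60 + 0.57 * 90)
print(rfm_score)  # roughly 4.005, which falls in the "High value Customer" band (> 4) defined in the next step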
- g) Function to perform customer segmentation
def get_customerSegment(self,a,b,c,d):
rfm_df = self.get_rfmscore(a,b,c,d)
rfm_df["Customer_segment"] = np.where(rfm_df['RFM_Score'] > 4.5, "Top Customers",
(np.where(rfm_df['RFM_Score'] > 4,"High value Customer",
(np.where(rfm_df['RFM_Score'] >= 3,"Medium Value Customer",
np.where(rfm_df['RFM_Score'] > 1.6,'Low Value Customers', 'Low Customers'))))))
return rfm_df
- Getting the output
if __name__ == '__main__':
....model_instance = SomeModel()
....print(model_instance.get_customerSegment("location_country", "referrer_source", "referrer_name",'shopify_data_seller1.csv'))
- h) Defining two functions to treat negative and zero values and to log-transform the skewed data
def right_treat(self, var):
# replace zero and negative values with 1 so that the log transform below stays defined
if var <= 0:
return 1
else:
return var
def get_skewLogTransform(self, a, b, c, d):
rfm_df = self.get_customerSegment(a, b, c, d)
# replace non-positive Recency and Monetary values before taking the log
rfm_df['Recency'] = rfm_df['Recency'].apply(lambda x : self.right_treat(x))
rfm_df['Monetary'] = rfm_df['Monetary'].apply(lambda x : self.right_treat(x))
#Log Transformation
log_RFM_data = rfm_df[['Recency','Frequency','Monetary']].apply(np.log,axis = 1).round(4)
return log_RFM_data
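- As a quick sanity check, this method can be called in the same style as the earlier output checks:
if __name__ == '__main__':
....model_instance = SomeModel()
....print(model_instance.get_skewLogTransform("location_country", "referrer_source", "referrer_name", 'shopify_data_seller1.csv'))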
- i) Function to find the optimal number of clusters using the elbow technique
def plotClusteringElbow(self, a, b, c, d):
# after plotting, the elbow appears at k = 3; we will use this value when training the model
x = self.get_skewLogTransform(a, b, c, d)
model = KMeans()
visualizer = KElbowVisualizer(model, k=(1, 9))
visualizer.fit(x)
return visualizer.show()
- Displaying the elbow plot
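To reproduce the elbow plot, the method can be called in the same style as the other checks:
if __name__ == '__main__':
....model_instance = SomeModel()
....model_instance.plotClusteringElbow("location_country", "referrer_source", "referrer_name", 'shopify_data_seller1.csv')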
- j) Training the model using the K-Means clustering algorithm
def train(self, a, b, c, d):
# fit K-Means with k = 3, the value suggested by the elbow plot
scaled_data = self.get_skewLogTransform(a, b, c, d)
KM_clust = KMeans(n_clusters= 3, init = 'k-means++',max_iter = 1000)
KM_clust.fit(scaled_data)
return KM_clust
- k) Defining a function to display the results
def get_results(self,a,b,c,d):
model=self.train(a,b,c,d)
rfm_df = self.get_customerSegment(a,b,c,d)
rfm_df['Cluster'] = model.labels_
rfm_df['Cluster'] = 'Cluster' + rfm_df['Cluster'].astype(str)
new_rfm_df = rfm_df[["location_country","Customer_segment", "Cluster"]]
return new_rfm_df.tail(25)
- Displaying the last 25 rows of the dataframe
if __name__ == '__main__':
....model_instance = SomeModel()
....print(model_instance.train("location_country","referrer_source", "referrer_name",'shopify_data_seller1.csv'))
....print(model_instance.get_results("location_country", "referrer_source","referrer_name",'shopify_data_seller1.csv'))
Conclusion
We have reached the end of this post. From the above, you can think of your customer clusters as a physical swimming pool. The pool is filled with the money spent by the active customers of your brand. High-value customers are those who have spent money with you frequently over a period of time; they spend more and fill the pool faster than medium-value customers. Low-value and low customers are seasonal buyers: their purchasing power is small and it takes them years to fill the pool. On the other hand, the water drains as customers leave you or stop spending money with you. Therefore, marketers should implement a different strategy to retain each segment. For instance, high-value customers can be contacted by the call center, medium-value customers receive an email, and low-value customers a text message. Beyond that, promotions can be rolled out to move low-value and low customers into the medium pool, and special discounts can be offered to high- and medium-value customers on selected products or services to retain their loyalty to your brand.
If you want to contribute or you find any errors in this article, please leave me a comment.
You can reach me on any Matrix decentralized server. Here is my Element ID: @maximilien:matrix:org
If you use one of the Mastodon decentralized servers, here is my ID: @maximilien@qoto.org