Saturday, February 9, 2019

Frequency Profiling using Pandas

This blog contains an introduction to frequency profiling. Frequency profiling is commonplace among data engineers who do ETL, and it is also used at the start of every exploratory data analysis.

The blog consists of the following attachments:
  1. Jupyter Notebook https://drive.google.com/file/d/1mL3GoVQjGTTUeWMmwDKtytGNRF0evE6D/view?usp=sharing
  2. Html doc:   https://drive.google.com/file/d/1ExuLhp1CgCjE0DCTR_G09LnD63I_Txp4/view?usp=sharing
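
As a quick taste of what the notebook covers, here is a minimal sketch of a frequency profile on a toy DataFrame (the column name and data are made up for illustration, not taken from the notebook):

import pandas as pd

# Toy data purely for illustration
df = pd.DataFrame({'channel': ['Web', 'Phone', 'Web', 'Mail', 'Web', 'Phone']})

# Frequency profile of a categorical column: counts plus percentage share
counts = df['channel'].value_counts()
share = df['channel'].value_counts(normalize=True) * 100
profile = pd.DataFrame({'count': counts, 'percent': share.round(1)})
print(profile)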

Pivoting using Pandas

This blog contains an introduction to pivoting using pandas. It walks through different scenarios for creating summaries with pivot table functions.

The blog consists of the following attachments:
  1. Jupyter Notebook https://drive.google.com/file/d/1Vsui7dQH9pJN1unw6oz47hZH1WZifObQ/view?usp=sharing
  2. Html doc: https://drive.google.com/file/d/1DFcwajEK2qqYXcKXjd_srQsjkYvuBGON/view?usp=sharing
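
As a small illustration of the idea (the frame and column names here are made up, not taken from the notebook), pandas.pivot_table summarises one column against two others:

import pandas as pd

# Toy sales data purely for illustration
sales = pd.DataFrame({
    'Year':    [2012, 2012, 2013, 2013],
    'Product': ['Zone', 'Polar Sun', 'Zone', 'Polar Sun'],
    'Revenue': [100, 80, 120, 90],
})

# Revenue summarised with Product as rows and Year as columns
summary = pd.pivot_table(sales, index='Product', columns='Year',
                         values='Revenue', aggfunc='sum', fill_value=0)
print(summary)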

Combining and Merging Datasets

This blog contains an introduction to merging data frames. It covers different scenarios of joining on keys from different data frames.

The blog consists of the following attachments:
  1. Jupyter Notebook https://drive.google.com/file/d/100RXan-W7mTz2k8hnx8XjcW4JL5JeoPw/view?usp=sharing
  2. Html doc: https://drive.google.com/file/d/1t0J8QBD7kNb-DZDevPs-y36imb0G4I6g/view?usp=sharing
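
A minimal sketch of the key-based merges the notebook walks through (frame and column names are illustrative only):

import pandas as pd

# Two small frames sharing the 'order_id' key; the data is made up
orders = pd.DataFrame({'order_id': [1, 2, 3], 'country': ['Denmark', 'France', 'Japan']})
margins = pd.DataFrame({'order_id': [1, 2, 4], 'gross_margin': [0.42, 0.38, 0.45]})

# Inner join keeps only keys present in both frames;
# how='left', 'right' or 'outer' cover the other scenarios
merged = pd.merge(orders, margins, on='order_id', how='inner')
print(merged)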

Hierarchical Indexing

This blog contains an introduction to hierarchical indexing. It starts with the challenges associated with multi-level group by and how to tackle them using hierarchical indexing.

The blog consists of the following attachments:
  1. Jupyter Notebook https://drive.google.com/file/d/1ruXC1cGMbx_hDQisz8oe3drdAzYFrpo9/view?usp=sharing
  2. Html doc: https://drive.google.com/file/d/1Kf8qHLYY_ihos_z5WS8fpyTUwacXldnD/view?usp=sharing
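
A minimal sketch of the idea on toy data (column names are illustrative): a multi-level groupby returns a result with a hierarchical (MultiIndex) index, which can then be sliced or unstacked:

import pandas as pd

# Toy data purely for illustration
sales = pd.DataFrame({
    'Country': ['Denmark', 'Denmark', 'France', 'France'],
    'Year':    [2013, 2014, 2013, 2014],
    'Revenue': [10, 12, 20, 18],
})

by_country_year = sales.groupby(['Country', 'Year'])['Revenue'].sum()
print(by_country_year.index.nlevels)    # 2 index levels: Country and Year
print(by_country_year.loc['Denmark'])   # slice by the outer level
print(by_country_year.unstack('Year'))  # move the Year level into columns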

Introduction to Series

This blog contains an introduction to Series and the functions associated with it. It starts with how to store data, indexing, data manipulation, handling NaN, etc. It is very important to understand Series as it forms the basis for understanding pandas data frames.

The blog consists of the following attachments:
  1. Jupyter Notebook https://drive.google.com/file/d/1lV07jr9RHCzd9e_d1ukj3hcficx2M2R8/view?usp=sharing
  2. Html doc: https://drive.google.com/file/d/16bPtQUuOJoXqoi8P5tJsB--GDnygintV/view?usp=sharing
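
A minimal sketch of a Series and a few of the operations the notebook covers (the values and labels are made up):

import pandas as pd
import numpy as np

# A Series is a labelled one-dimensional array; the index drives lookups and alignment
s = pd.Series([10, 20, np.nan, 40], index=['a', 'b', 'c', 'd'])

print(s['b'])              # label-based indexing
print(s[s > 15])           # boolean filtering
print(s.fillna(s.mean()))  # handling NaN
print(s * 2)               # vectorised manipulation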

Tuesday, January 15, 2019

Data manipulation with Python Part 2

In this post we will look at another data manipulation scenario. The nature of the task, as well as the resources, is summarized in the following sections:

Problem Statement: ABC is a sports accessories company that sells sports gear across the globe. The data has fields such as Revenue, Quantity, Gross Margin, Order Method, Time and Country, spread from 2012 to 2014 across 4 quarters (Q1 through Q4). Based on the global outlook and growth forecasts made by economists at ABC, the company has decided to sell a total of 211,555,475 units in 2015. However, this is an overall number, and the Product Managers for individual countries don't know how it drills down to individual products. Your job as a Business Analyst is to help the Product Managers get these numbers so that they can plan effectively.

Data Download Link

Learning Objectives:
  1. Using Pandas for data manipulation
    • Getting records and column names
    • Handling NA values
    • Using groupby to sum up numbers at different levels
  2. Using the 'apply' function on certain columns
  3. Using lambda along with apply to get the percentage share
  4. % formatting

Please download the jupyter file from the following link
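
The notebook carries the full solution, but a rough sketch of the approach, assuming the Part 1 column names ('Retailer country', 'Product', 'Quantity') and a file named Gross_Margin_Data.csv (both assumptions, the actual notebook may differ), could look like this:

import pandas as pd

# Assumed file and column names
df = pd.read_csv("Gross_Margin_Data.csv")

TARGET_2015 = 211555475  # overall units the company plans to sell in 2015

# Historical units summed at the country-product level
units = df.groupby(['Retailer country', 'Product'])['Quantity'].sum().reset_index()

# Percentage share of each country-product combination and its 2015 allocation
total = units['Quantity'].sum()
units['Perc_Share'] = units['Quantity'].apply(lambda q: q / total)
units['Target_2015'] = (units['Perc_Share'] * TARGET_2015).round()

# '%' formatting for reporting
units['Perc_Share'] = units['Perc_Share'].apply(lambda x: '{:.2%}'.format(x))
print(units.head())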

Saturday, January 5, 2019

Data Manipulation with Python

In any Data Science or Business Analytics project, exploring the data is an important aspect. It helps us dive deep into the data to get numerical summaries and uncover hidden trends that are otherwise obscure at the start. Most of the time, a thorough analysis of the data is sufficient to facilitate decision making, so it is imperative to know the basic tools to analyse data and interpret results. With the high adoption of Python as a preferred Data Science tool, we will look at a small case study about a product-based company and how Python can be used to answer questions in a typical data exploration exercise.

The data considered here consists of Quantity sold, Revenue and Gross Margin information for various sports products sold in different countries. Other features are given in the form of product information, order method and type of retailer. The data is spread across 4 quarters from 2012 to 2014.

To fully understand the data, the following tasks will be explored:

Task 0: How many missing/NA/Null values are present in the data

Task 1: Which order method is most widely used

Task 2: Which product has the max units sold on an overall basis

Task 3: Which country has the max value of Gross Margin per unit Quantity

Task 4: Which product under the 'Web' order method has the highest units sold for 2012

Task 5: Is this also true for 2013 and 2014

Task 6: Which product has seen the max decrease in Revenue from 2012 to 2014

Task 7: In the Product line field, Personal Accessories has seen a decline in Gross Margin

  7a: Identify which product has contributed most to this

  7b: Identify which country has contributed most to this

Task 8: Identify the top products for each country with respect to Revenue from 2012 to 2014


Download the file: https://docs.google.com/spreadsheets/d/1aRD2-xZrxcARSHZAi-93fWQBuhr6D8TOgkBaT_umkvg/edit#gid=165274715

# importing pandas
# setting up directory

import pandas as pd
dr="Your computer path where file is saved"

# Reading the csv file into the console. I am using Spyder to analyse the data;
# the analysis can be carried out using Anaconda as well

df=pd.read_csv(dr+"\\Gross_Margin_Data.csv")

# Total rows and columns
df.shape # Has 88475 rows and 11 columns
df.columns

# Inspecting first few records
df.head()

Task 0: How many missing/NA/Null values are present in the data
df.isna().sum()

Figure 1: Output of df.isna().sum() showing the number of missing values in each column

# There are 581 NA entries in the 'Gross margin' column
# These should be replaced by the mean of the Gross margin column
# so that they don't affect the results of any other analysis

df['Gross margin']=df['Gross margin'].fillna(df['Gross margin'].mean())
df.isna().sum() # Now there are no NA values in the Gross margin column


Figure 2: df.isna().sum() after imputation, showing no remaining NA values in Gross margin

# Task 1: Which order method is most prevalent
df['Order method type'].value_counts() # Web is the most frequent method

Figure 3: Value counts of 'Order method type'

Task 2: Which Product has the maximum units sold
df['Product'].value_counts().head() # Polar Sun has the highest units sold

Figure 4: Top products by value counts

Task 3: Which country has the max value of Gross Margin per unit Quantity
# Here we need to aggregate the data at a country level

df1=df.groupby('Retailer country')[['Gross margin','Quantity']].sum()
df1.columns # Only Gross margin and Quantity are present as columns

# We need to reset the index of df1 so that Retailer country also becomes a column
df1=df1.reset_index()

# Now creating a column of Gross Margin/Quantity
df1['Unit_Measure']=df1['Gross margin']/df1['Quantity']
df1.sort_values('Unit_Measure',ascending=False).head(10)

Figure 5: Top 10 countries by Gross margin per unit Quantity (Unit_Measure)

# We can see that Denmark has the highest value of Gross Margin/Unit Quantity sold

Task 4: Which product under the 'Web' order method has the highest units sold for 2012
Before we start on this, we first need to know which is the most preferred method of ordering products

df['Order method type'].value_counts() # Web is the most preferred method

# Let's see which product under Web has the most sales

df[df['Order method type']=='Web'].groupby(['Product'])['Quantity'].sum().sort_values(ascending=False).head(5)

Figure 6: Top 5 products by Quantity under the Web order method

# The product named Zone has the highest quantity ordered through the Web method

# Let's verify whether this is true for the year 2012 as well
df['Year'].head()

df[(df['Order method type']=='Web') & (df['Year']==2012)].groupby(['Year','Product'])['Quantity'].sum().sort_values(ascending=False).head(5)

Figure 7: Top 5 products by Quantity for Web orders in 2012

# For 2012 as well, the product 'Zone' has the most sales

Task 5: Does the product Zone also have the most sales under the Web order method in 2013 and 2014
df2=df[df['Order method type']=='Web'].groupby(['Year','Product'])['Quantity'].sum()

df3=df2.reset_index()
df3=df3.sort_values(['Year','Quantity'],ascending=[True,False])
df3.groupby(['Year']).head(2) # Gives the top 2 products sold for each year under Web Method

Figure 8: Top 2 products by Quantity for each year under the Web order method

# As we can see, for 2012 and 2013 Zone has the most sales, but for 2014
# Granite Carabiner has the most sales
# So Zone doesn't have the most sales for Web orders across all of 2012 to 2014


Task 6: Which product has seen the max decrease in Revenue from 2012 to 2014
df4=df.groupby(['Product','Year'])['Revenue'].sum().reset_index()
df4.shape

# We need to remove the rows corresponding to 2013, as only 2012 and 2014 are required
t_val=[2012,2014]
df5=df4[df4['Year'].isin(t_val)]
df5.shape

# Now let's pivot df5 based on the 'Year' column
p = df5.pivot(index='Product', columns='Year', values='Revenue')
p.tail(16) # There are some NaN values in p

Figure 9: Revenue pivoted by Product for 2012 and 2014, with some NaN values

p.isna().sum()

Figure 10: Count of NaN values in each year column of p

# We see that there are certain NA/Null values in p
# These are the Product-Year combinations that have no entries in df5
# These entries have to be replaced with 0

p[2012]=p[2012].fillna(0)
p[2014]=p[2014].fillna(0)

# Checking to see if there are still any NA/Null values in p
p.isna().sum()

# Creating another column called Perc_Change
p['Perc_Change']=(p[2014]-p[2012])/p[2012]
p.head()

# Sorting p to get the highest percentage decrease
p.sort_values('Perc_Change').head(5)

Figure 11: Products sorted by Perc_Change, showing the largest revenue decreases from 2012 to 2014

# So Sun Blocker has seen the maximum decrease in Revenue from 2012 to 2014

Task 7: In the Product line field, Personal Accessories has seen a decline in Gross Margin

# Task 7a: Identify which product has contributed most to this

# Task 7b: Identify which country has contributed most to this

Task 7a: Identify which product has contributed most to this

# Let's first check what the overall percentage decrease from 2013 to 2014 is
df7=df[(df['Product line']=='Personal Accessories') & (df['Year'].isin([2013,2014]))]
df7.groupby(['Year'])['Gross margin'].sum()

Figure 12: Total Gross margin for Personal Accessories in 2013 and 2014

# We can see that there is approximately a 35% dip in the gross margin

# The decline in Gross margin could be because the top products with the
# highest Gross margin in 2013 saw a dip in 2014
# Let's check this point

df8=df7.groupby(['Product','Year'])['Gross margin'].sum().reset_index()
df8.head()

Figure 13: Gross margin summed by Product and Year for Personal Accessories (df8)

df9=df8.pivot(index='Product',columns='Year',values='Gross margin').sort_values(2013,ascending=False)
df9.head()

Figure 14: Gross margin pivoted by Product for 2013 and 2014, sorted by 2013 values

df10=df9.fillna(0)

# Let's create a new column indicating the percentage decrease
df10['Perc_change']=(df10[2013]-df10[2014])/df10[2013]
df10.head()

Figure 15: Top Personal Accessories products by 2013 Gross margin with their percentage decrease

# This gives the products that had the max Gross margin in 2013

# but saw a significant dip (the top 3 products had a dip of more than 50%)

# Let's also check the outlook in comparison to 2012

df11=df[(df['Product line']=='Personal Accessories')]
df11.groupby(['Year'])['Gross margin'].sum()

Figure 16: Total Gross margin for Personal Accessories by year

# The Gross margins of 2012 and 2013 are comparable, so there really has been a
# significant drop in Gross margin in 2014

Task 7b: Identify which country has contributed most to this

df12=df7.groupby(['Retailer country','Year'])['Gross margin'].sum().reset_index()

df13=df12.pivot(index='Retailer country',columns='Year',values='Gross margin').sort_values(2013,ascending=False)

df14=df13.fillna(0)

# Let's create a new column indicating the percentage decrease
df14['Perc_change']=(df14[2013]-df14[2014])/df14[2013]
df14.head()

Figure 17: Countries with the largest 2013 Gross margin and their percentage decrease in 2014

# The top 5 countries have seen a dip in Gross margin
# Now we are in a position to comment on the products as well as the countries
# that have contributed to the decrease in Gross margin

Task 8: Identify top products for each country with respect to Revenue from 2012 to 2014

df15=df.groupby(['Retailer country','Product','Year'])['Revenue'].sum().reset_index()
df16=df15.sort_values(['Retailer country','Year','Revenue'],ascending=False)
df17=df16.groupby(['Retailer country','Year']).head(2)
df17.head(20)

Figure 18: Top 2 products by Revenue for each Retailer country and Year

# df17 can now be exported to a csv file to prepare reports
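
For example, a one-line sketch of that export (the file name here is just a placeholder):

# Write the top products per country and year to disk for reporting
df17.to_csv(dr + "\\Top_Products_By_Country.csv", index=False)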

Web Scraping Tutorial 4: Getting the busy-times data from the Popular Times section on Google

In this blog we will try to scrape the ...