
Sunday, August 29, 2010

Promotional Measurement - Horizontal vs. Vertical Analysis


Using Horizontal Analysis to Better Understand Promotional Test Results: Get the Complete Picture

Measurement of strategic marketing promotions should include a comprehensive horizontal view alongside the traditional source-coded (vertical) analysis.  A Marketing Program should not be rolled out before you have a complete understanding of how customers react.

When you run a promotion and measure its performance, do you include all customer activity, or only the activity tied to the promotion code?  The easiest and most common method is to measure only those purchases carrying the promotion code.  But those purchases may represent only a small portion of the customer’s overall buying.  Customer activity before and after the promotion may be critical to determining the program’s effectiveness, and customers may also make purchases during the promotion that are not associated with it.  Either can materially change the verdict on a program’s effectiveness.

Including all activity, regardless of when or how a purchase was made, yields a more accurate measure of customer behavior.  Analyzing customer activity horizontally typically requires more up-front work, but the complete view of all customer purchases greatly improves the chances of identifying programs that drive real, significant changes in customer behavior.

Vertical analysis traditionally includes only purchases made within the promotion timeframe that carry the promotion’s source code.  These are called source-coded orders; asking the customer for a special code, or collecting a coupon at the POS, is how the code is captured.  Horizontal analysis includes not only those source-coded purchases but also all other purchases during the promotion, along with pre-promotion and post-promotion purchases.  Let’s examine a specific example to see the implications of these two analytical approaches.
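The distinction boils down to how you filter the order file.  A minimal sketch, assuming hypothetical order records with `customer_id`, `promo_code`, and `amount` fields and a made-up code "EARLY10" (none of these names come from the article):

```python
# Vertical vs. horizontal measurement of a promotion, sketched on
# hypothetical order records. Field names and the "EARLY10" code
# are illustrative assumptions, not from the original case study.

orders = [
    {"customer_id": 1, "promo_code": "EARLY10", "amount": 120.0},
    {"customer_id": 1, "promo_code": None,      "amount": 340.0},
    {"customer_id": 2, "promo_code": None,      "amount": 200.0},
]
test_group = {1, 2, 3}  # customer IDs that were mailed the promotion

# Vertical: count only orders carrying the promotion's source code.
vertical = [o for o in orders if o["promo_code"] == "EARLY10"]

# Horizontal: count every order placed by anyone in the mailed group,
# regardless of source code.
horizontal = [o for o in orders if o["customer_id"] in test_group]

print(len(vertical), sum(o["amount"] for o in vertical))      # 1 120.0
print(len(horizontal), sum(o["amount"] for o in horizontal))  # 3 660.0
```

Even in this toy data, the vertical view credits the promotion with one order and $120, while the horizontal view shows the same customers actually placed three orders worth $660.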

A direct marketing company sold products through multiple sales divisions to businesses across the United States.  One of its divisions, which sold a highly seasonal product line, was testing a Marketing Program designed to increase the number of customers returning to purchase the product by offering an “early order discount”.  The test results indicated that the number of customers returning to purchase was significantly higher for the customer group receiving the early order discount than for a similar group that did not receive it.  The analysis raised several questions, including:
  • Did the results look at all customer activity (horizontal), or only the orders whose promotion codes matched the test promotions (vertical)?
  • If the test results were so positive, why did overall existing customers return at the same rate as the prior year and not some higher rate?
  • Did the early discount test group place a higher percentage of their orders earlier than the non-discount group?
  • Should the Early Discount Program be rolled out to all customers next year?

The table below shows the coded results for the test.

Early Discount Test Results - Source Code Based

                      Discount     No Discount
  Group Size             7,000           7,050
  Total Orders             816             401
  Response Rate          11.7%            5.7%
  Sales               $303,270        $167,640

With a lift of 105% in response rate (11.7% versus 5.7%), the sales team felt the program was a huge success.  The results suggested that a significantly higher proportion of customers who received the early order discount had ordered.  Even though the company earned a lower margin on each order from the discount group, the sales team believed the margin loss was more than made up by the overall increase in business.  But the results were based on source codes only.  The company was a direct marketer, and each mail piece sent to customers carried a unique source code that was printed on the order form or requested by the telephone sales representative when the customer placed an order.  The sales team believed that such a large lift in response had to be real and significant.  What was not known was whether any systematic bias in the way this information was captured from customers had skewed the test results.  Only a horizontal view could give an accurate answer to such an important question.

Customer numbers were obtained for all individuals in each of the two groups from the mail house that had processed the promotional mailing test.  The activity of each of those customers for the specific seasonal product line was extracted from the order entry system.  The data was then summarized into the following report.





Early Discount Test Results - Horizontal Based

                      Discount     No Discount
  Group Size             7,000           7,050
  Total Orders           4,129           4,035
  Response Rate          59.0%           57.2%
  Sales             $1,594,035      $1,600,931

This process yielded very different results.  The response rate lift was now only 3.1%, compared to 105% in the coded analysis, and sales per person mailed were almost a dead heat.  These results explain why the division had not seen an overall increase in the buyer retention rate: the promotion had not generated many incremental buyers, and it certainly did not generate any incremental sales.  And the ROI was lower for the discount group as a result of the reduced margin.
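The two lift figures follow directly from the counts in the tables above.  A quick check, using relative lift (the discount group's rate over the no-discount group's, minus one):

```python
# Response-rate lift computed from the article's two tables.
# Lift here means the relative increase of the discount group's
# response rate over the no-discount group's.

def response_rate(orders, group_size):
    return orders / group_size

def lift(rate_test, rate_control):
    return (rate_test - rate_control) / rate_control

# Vertical (source-coded) view: 816/7,000 vs. 401/7,050
v_lift = lift(response_rate(816, 7000), response_rate(401, 7050))
print(f"vertical lift:   {v_lift:.0%}")    # 105%

# Horizontal view: 4,129/7,000 vs. 4,035/7,050
h_lift = lift(response_rate(4129, 7000), response_rate(4035, 7050))
print(f"horizontal lift: {h_lift:.1%}")    # 3.1%
```

Same customers, same test, but the answer swings from 105% to 3.1% depending solely on which orders are counted.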

These results were coupled with one-on-one interviews with customers of the division.  When asked what factors were most important in their decision to purchase, customers universally cited product and service attributes; pricing was only occasionally mentioned.

The timing of the order flow between the two groups was also examined using a horizontal view, to determine whether the early order discount had shifted a larger proportion of the discount group’s orders earlier relative to the non-discount group.  The results are shown below.

Timing of Buyers between Discount and No-Discount Groups

Month               1     2     4     5     6     7     8     9    10    11    12

No Discount
  Buyers in Month  42    18     2     3    32   222   288   468   932 1,500   483
  Cumulative       42    60    62    65    97   319   607 1,075 2,007 3,507 3,990
  Cuml. Pct.     1.1%  1.5%  1.6%  1.6%  2.4%  8.0% 15.2% 26.9% 50.3% 87.9% 100.0%

Discount
  Buyers in Month  56    23     3     2    22   196   272   503   936 1,483   543
  Cumulative       56    79    82    84   106   302   574 1,077 2,013 3,496 4,039
  Cuml. Pct.     1.4%  2.0%  2.0%  2.1%  2.6%  7.5% 14.2% 26.7% 49.8% 86.6% 100.0%
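The cumulative-percentage rows above are a running sum of monthly buyers divided by the season total.  A minimal sketch of that calculation, using the no-discount row from the table:

```python
# Cumulative share of buyers by month, reproduced from the
# no-discount row of the timing table above.

from itertools import accumulate

buyers = [42, 18, 2, 3, 32, 222, 288, 468, 932, 1500, 483]

cumulative = list(accumulate(buyers))   # running total of buyers
total = cumulative[-1]                  # 3,990 buyers for the season
pct = [c / total for c in cumulative]   # cumulative share per month

print(cumulative[:3])    # [42, 60, 62]
print(f"{pct[0]:.1%}")   # 1.1%  (first month's share)
print(f"{pct[-2]:.1%}")  # 87.9% (share through the next-to-last month)
```

Running the same calculation on the discount row and comparing the two percentage curves is how the timing question was settled: the curves track each other almost month for month.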


The early order discount had no measurable effect on pulling orders forward: the discount group’s cumulative order curve tracked the no-discount group’s almost exactly.

After reviewing the horizontal results, the decision was made not to roll out the early order discount program.  Had the horizontal results not been generated, the company would probably have rolled the program out, then wondered why more customers were not coming back to place orders the following year.

This situation brings to light another key ingredient of any Marketing Program analysis: when testing new programs, know what the performance hurdles are up front, and make sure you have the means to measure them.  Many companies develop the hurdles and measurement mechanisms only after the test is over.  The result is usually a heated debate about how to interpret the numbers, and the roll-out decision is often made with incomplete information because a deadline is at hand.  Don’t let this happen at your company.  If you spend the time to set up measures and hurdle rates, you will make better and timelier decisions about your marketing programs.