Measurements & Marketing

“JD Powers is the root of all evil.”

I heard this comment over dinner with a very good friend, a former colleague and a superb service manager at a Toyota automotive dealership (I will refer to him as DC for the rest of this article).  DC was unique in that he had previously worked for Toyota as an OEM representative, so he was not blinded by a single point of view.  I didn’t immediately know how to respond, and as I tried to formulate the words, I finally came up with, “What did they do to give you that feeling?”

DC shared that the surveys created and administered by J.D. Power to quantify customer satisfaction and vehicle quality had, in his most recent experience, become tools used punitively against dealership employees by leadership teams chasing financial prizes or benefits from the manufacturers for achieving the pinnacle of customer satisfaction.

Sometimes, in our quest to create the campaign that will single-handedly turn a company around, we lose sight of the campaign’s true aim; we measure results, but not necessarily the ones we need to be monitoring.

Recently I participated in the Google AdWords International Student competition.  Undergraduate and graduate students competed with a pre-assigned budget of $200.  In AdWords there are two pieces: 1. brainstorm the keywords you think will be used in a Google search for your business, and 2. create a three-line ad that will be displayed at the top or along the right side of a search results page and, when clicked, lead a potential buyer to your website.  The optimal outcome is to capture a new sale.

As I was writing this article, I went back to all of the reports to try to tell you how successful we had been.  The problem was I couldn’t figure them out!  The reports look great: there are maps of the world showing where the views came from, which is very helpful.  I just got lost trying to find which keyword generated the most response.  I have an email in to our team captain, who I hope will help me remember how to read them all.  What AdWords did not do was trigger a conversion in the case we tested.  The business did not receive a phone call or e-mail requesting a discussion of the service or asking for a bid.

The million-dollar (or million-euro) question is: what kept the conversion from happening?  Was it the website?  Was it the service?  Does AdWords work better with products than with services?

This is a real dilemma for all business owners, but for small ones in particular.  I was grateful that it had been “play” money rather than real advertising dollars being invested.  It was a learning experience.  This was a potentially new channel of advertising, but it left me feeling a bit empty at the lack of results (no sale, no results, in my opinion).

Overwhelming a business owner or department manager with beautiful reports that may take hours or days to create and decipher may not be very helpful.  Keeping the statistics meaningful should be a core focus.  Turning around and then using these types of metrics for performance appraisals needs to be carefully managed.  Often these metrics are used to award bonuses, set base pay, determine potential for advancement and so on.  But what if they are not even the metrics that would be most effective to track?  What if, horror of horrors, employees have discovered how to manipulate the results to their benefit, as my friend DC expressed had been his experience in some cases?

I’m still waiting for the Google report that summarizes why the keywords we chose didn’t generate a sale.  In the end, that’s what we should be interested in: making the conversion!  The focus should be on making sure metrics are aligned with business objectives, not on randomly spending advertising dollars because you are responsible for making sure the Internet channel is covered.  I have a cousin at Microsoft who is known to agonize for days with team members over one word on a survey to make sure the right results, or perhaps really the metrics management is after, are achieved.  He is crazy busy during the end-of-year wrap-up, with executive management calling and quizzing him over and over on metrics because, you guessed it, the big bonuses are tied to the final results.

Bringing Back Focus
Going back to DC’s original comments, perhaps the “root of all evil” is really when we lose focus on our customers and what really affects our profits.

When measurements are not aligned properly with overall objectives, revenue targets, job responsibilities and accountability of the positions, they can become meaningless.

It’s always prudent to measure some tasks and customer perceptions, but being able to differentiate and challenge the counter-productive measurements takes courage, and an owner and management team willing to take a hard look at what they are doing and why they are doing it.

Link to the original article in Profit Magazine Measurements & Marketing-1

Disclaimer: All insights and opinions are my own and have been formed over many years of practical business and personal experiences; as both an employee and a business owner. They do not represent the views of my current employer. MM

Experimental Design in Advertising

There are, from my perspective, two status quos in traditional advertising today. The first is that many companies retain an agency that provides multiple concepts for review. The second is that we have fallen into the conventional methodology of in-series campaign design.

Before starting my own firm, I spent many years in a large corporation where, each year, we were given annual sales objectives and a corresponding marketing budget. Being located in the head office, I served the various regional offices in Asia-Pacific, the Middle East/North Africa, Israel, the Caribbean and Sub-Saharan Africa.

What drove me crazy for almost 16 years was that there was little to no accountability for the resources allocated to each region. We had anecdotal information that told us whether a campaign had been successful, and then there was the actual sale of a product or service that might indicate success. However, there were always factors, such as weather and economic conditions, that may have influenced campaign results, and their impact could not necessarily be quantified. It was always, “let’s move on to the next campaign”.

As employees in a company, I believe that we have the responsibility to manage marketing budgets as if the funds were coming from our personal household accounts. Would you spend money at the market hoping that your family will eat food that you randomly picked off the shelf without any consideration for their personal tastes or even allergies? Probably not.

In many cases, that is what businesses do every day: spend money in the hope that their product will yield additional response rates that will be converted into sales. The aim of starting my own firm was to find something new, something innovative that would really change the way we do business, where the results could be better quantified to understand what worked and what didn’t in traditional marketing campaigns.

My journey led me to the discovery of the experimental design of advertisements. Beginning in 2003, there had been some buzz about what a California-based engineer was doing by bringing Taguchi design-of-experiments methods into marketing and advertising campaigns, with phenomenal success rates. Most people in marketing and advertising had no idea who or what Taguchi was, including me.

Did you know that if you choose just 15 different elements of a marketing campaign to test, with two options each, you could test the equivalent of all 32,768 (2 to the 15th power) possible combinations using only 16 carefully chosen tests? How cool is that?!

A new world opened up and I felt as though I had perhaps reached nirvana from a marketer’s perspective. We could test up to 32,768 possibilities in one campaign and have statistically valid responses at the end of the test.
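The 16-test claim rests on what the Taguchi literature calls an L16 orthogonal array: 16 runs that cover 15 two-level factors in a balanced way, so each factor's main effect can be estimated without running all 32,768 combinations. As a rough illustration, the sketch below uses a generic textbook construction (run levels from bitwise parities), not any particular campaign tool, and verifies the array's balance:

```python
from itertools import product

def l16_design():
    """Build a 16-run, 15-factor, two-level orthogonal array (the L16).

    Each run index r (0..15) is treated as 4 bits; each factor is labelled
    by a nonzero 4-bit mask c (1..15). The level of factor c in run r is
    the parity of the bits shared by r and c.
    """
    return [[bin(r & c).count("1") % 2 for c in range(1, 16)]
            for r in range(16)]

design = l16_design()
assert len(design) == 16 and all(len(row) == 15 for row in design)

# Balance check: every factor appears at each of its two levels 8 times.
for i in range(15):
    assert sum(row[i] for row in design) == 8

# Orthogonality check: every pair of factors sees all four
# level combinations exactly 4 times each.
for i in range(15):
    for j in range(i + 1, 15):
        counts = {combo: 0 for combo in product((0, 1), repeat=2)}
        for row in design:
            counts[(row[i], row[j])] += 1
        assert all(v == 4 for v in counts.values())

print(f"{2**15:,} combinations screened with {len(design)} test ads")
# → 32,768 combinations screened with 16 test ads
```

Each row is one test ad: a recipe assigning option 0 or 1 to each of the 15 elements. Because the columns are balanced and mutually orthogonal, comparing average response across the two levels of any single element isolates that element's effect, which is what makes the 16-run shortcut statistically valid.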

Knowing that experimental design can help optimize an advertising campaign, why would agencies and organizations not embrace creating multiple designs and testing a sample audience to arrive at an optimal design and then deploy that message to the mass target audience?

There are several reasons. From an agency’s point of view, it is frightening to think that their creativity might be challenged. Creative people don’t necessarily want to know the quantitative results of their campaign. While they want to please the client, they also aim to win the coveted creative awards that are so highly regarded in the industry.

From an ad agency’s point of view, accountability might limit creativity. From a company’s point of view, many decision makers are like I used to be: a bit naive and unfamiliar with the concept. They simply allow the agency to guide them along, believing that, given the vast sums spent, accountability will be part of the process.

Another issue is that the available techniques span a wide range: from the very basic method of assigning a priority to keywords for Internet and web landing page use, to very intensive on-site brainstorming and facilitation sessions with internal and external customer input. The channel of advertising is also critical.
While these techniques can be applied to newspaper and television campaigns, the more immediate return of results and investment will be seen in direct mail and Internet advertising campaigns.
The results can be staggering.

In a study run at Dell in 2004, the focus was on internal employees and the company e-mail ad sent to this group. An 18-run test layout was created, and the results speak for themselves[1]:
• Click-through rate: 5.2 times higher
• Sales per e-mail: 7.1 times higher
• Annual sales before optimization: $8,900,000
• Annual sales after optimization: $63,100,000
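A quick arithmetic check, using only the figures quoted above, shows how the multiples fit together: the annual-sales jump from $8.9M to $63.1M works out to the 7.1-times figure, while 5.2 times, expressed as a percentage of the baseline, is 520%:

```python
# Figures quoted in the Dell example above.
sales_before = 8_900_000
sales_after = 63_100_000

sales_multiple = sales_after / sales_before
print(f"Sales multiple: {sales_multiple:.1f}x")
# → Sales multiple: 7.1x

ctr_multiple = 5.2
print(f"Click-through rate vs. baseline: {ctr_multiple:.0%}")
# → Click-through rate vs. baseline: 520%
```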

Adopting this approach also requires both agency and client to embrace a new way of doing business. Instead of perceiving the testing as a threat, it should be perceived as a tool that can help an agency provide quantitative, not just qualitative, results. In fact, a really progressive agency might see the potential application when aiming for that creative award.

Instead of quietly submitting the one brilliant idea the client considered too risky to a single paper just to make it eligible for judging, why not show the client how it can be tested, take away the fear factor, and test the concept openly?

I believe the first agency willing to embrace the experimental design concept will achieve not only higher client satisfaction but also additional bragging rights. Now they can claim that along with being the most creative agency, they are also the one looking out for the client’s best interests by ensuring that the optimal campaign is deployed.

Link to the original article in Profit Magazine Experimental Design in Advertising-1

Disclaimer: All insights and opinions are my own and have been formed over many years of practical business and personal experiences; as both an employee and a business owner. They do not represent the views of my current employer. Michele Morrisette