Give your business a boost: Chris Robinson of Boost Evaluation on the importance of well-designed surveys and evaluation strategies : EASTWEST Public Relations

Jim James
8 min read · Apr 28, 2021


Chris Robinson, the founder of Boost Marketing Ltd. (a.k.a. Boost Awards), has been helping companies with their awards entries. But another part of his business centres on evaluating what his clients’ customers think of them, a service offered through the subsidiary Boost Evaluation.

When Boost Awards began in 2006, they found that winning awards requires research and well-designed surveys. You can’t simply win an award with lovely purple prose; you have to back up your claims with evidence of satisfied customers, staff, suppliers, and community.

Because they started before the recession, they found that a lot of people in the industry didn’t want to be seen hiring a marketing agency. Hence the birth of Boost Evaluation: a separate legal entity that marries up the two things their people specialize in, working on award entries and conducting surveys and research.

The Importance of Listening

Traditionally, to prove their worth, companies run a survey, cherry-pick the best news and testimonials, package them up without distorting the truth, and then compile it all into a narrative, such as a case study or an award entry.

However, what people often forget to do is listen. Listening is an important aspect of evaluation, and one especially relevant to marketing and PR: it helps in improving strategies and making important decisions. But companies become so busy reflecting on how great their past was that they forget to improve their future.

The entire awards industry has been doing things the conventional way, guessing and shooting from the hip. So when Boost Evaluation ran a survey (they asked 330 people who enter awards what they really want from the awards industry, then got several award organisers on a call and relayed what the respondents said), it became an epiphany for the award organisers and for the whole industry.

Boost Evaluation then worked with a recruitment company that had stopped entering awards and instead focused its energy on a customer survey. The company got out, worked with training departments, and assessed what people wanted to do (e.g., digital learning). In the process, Chris found that companies have to reinvent themselves, come up with entirely new propositions, and learn as they go; they don’t need to merely return to the status quo.

How Evaluation is Done

Sometimes, people use the telephone by default, even though as an evaluation mechanism it is actually quite intrusive.

An easier way to begin evaluation is with a digital survey platform such as Laser or Survey Monkey. With these platforms, you can be less intrusive by starting with a survey and then asking whether the respondent is willing to be contacted by phone.

To get as many responses as possible, incentivising a survey is key. When you incentivise, you don’t just get a higher response rate but a more balanced response. When you don’t offer incentives, you end up with the Marmite effect: only people with a strong view respond (either they love you or they hate you), and you get an imbalanced sample.

Forum panel conversations are great as well, and Zoom meetings have proved helpful for hosting individual calls.

However, it’s important to always bear in mind that a survey has two aspects: qualitative, the stories and anecdotes that add color to your research; and quantitative, the numbers and figures.

When evaluating, you must also be familiar with two more terms: objective measurement and subjective measurement.

Subjective measurement covers questions that can only be answered through a conversation or survey; it focuses on feelings, emotions, and the respondents’ sense of the future. Objective measurement, meanwhile, deals with data that leaves no room for interpretation, such as figures from Google Analytics or Trustpilot scores.

If you’re coming up with an evaluation strategy, start with a discussion of your hypotheses (What are you trying to prove? What don’t you know that you need your respondents to weigh in on?). Then divide your list into qualitative and quantitative, subjective and objective: these are the four terms that must shape the evaluation process. Otherwise, if you simply dive in and write a survey, you may get responses that make you realise you should have asked a different question, phrased it differently, or asked via a vote or a scale.

Countering Survey Dropouts

In conducting surveys, dropout is a major problem, and people often ask whether there is some magic number of questions or an ideal survey duration.

Bain & Company once published a book called “The Ultimate Question”; its philosophy is that a survey should consist of just one question, for instance: “On a scale of 0 to 10, how likely are you to recommend us to a friend?” This measure is called the net promoter score.
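The article doesn’t show the arithmetic, but for readers who want to compute the score themselves, here is a minimal Python sketch of the standard NPS convention (9-10 = promoter, 0-6 = detractor, 7-8 = passive; the score is the percentage of promoters minus the percentage of detractors):

```python
def net_promoter_score(ratings):
    """Compute a Net Promoter Score from 0-10 ratings.

    Standard convention: 9-10 = promoter, 0-6 = detractor,
    7-8 = passive. Score = %promoters - %detractors, so it
    ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 4 promoters, 1 passive, 5 detractors out of 10 responses:
print(net_promoter_score([10, 9, 9, 10, 8, 6, 5, 3, 6, 2]))  # -10.0
```

Note that the score can be strongly negative even when most customers are merely lukewarm, which is part of why it is such a sensitive single-number metric.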

However, reducing a survey to a single question can be taking minimalism a bit too far.

For example, in a large survey on thought leadership run for the facilities management sector, respondents first say whether they have done thought leadership or not. Those who haven’t answer questions about barriers, excitement, and ROI; those who have reflect on what worked and what didn’t. It’s a multi-path survey that takes about 10 minutes on average.
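As a rough illustration (not code from Boost Evaluation, and the section names are hypothetical), the branching logic of such a multi-path survey can be sketched like this:

```python
# Hypothetical sketch of multi-path survey routing: which question
# sections a respondent sees depends on an earlier answer.
def next_sections(has_done_thought_leadership: bool) -> list[str]:
    if has_done_thought_leadership:
        # Reflect on past activity.
        return ["what worked", "what didn't"]
    # Explore why they haven't started yet.
    return ["barriers", "excitement", "expected ROI"]

print(next_sections(True))
print(next_sections(False))
```

Most survey platforms implement this kind of routing natively as "skip logic" or "branching", so in practice you configure it rather than code it.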

When doing such surveys, Boost Evaluation recommends setting expectations: for instance, respondents face a 10-minute survey but in return get a chance to win an iPad. Instead of a small chance at a big prize, you can also offer a high chance of a small prize, such as ten £30 Amazon vouchers or a free cup of coffee for everyone.

Business-to-business companies, including a lot of The UnNoticed Podcast listeners, tend to default to annual or bi-annual surveys. A different approach to evaluation is continuous customer monitoring.

For instance, if you have a massive call center in Dublin and want to track the net promoter score daily on a call-by-call basis (by call center agent, by department), you would send consumers a text message with a question or two before asking whether you can ask a few more. Typically there will be a 90% drop-off rate, but what matters is that you’ve captured the critical 10%.

Making Surveys More Accurate

When determining the correct sample size, you can use mathematical models: input the variables and the model tells you your degree of accuracy, or the sample size required to hit a target degree of accuracy.
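The article doesn’t name a specific model, but the textbook formula for a proportion is a reasonable stand-in: at 95% confidence the margin of error is z·sqrt(p(1-p)/n) with z ≈ 1.96, and inverting it gives the required sample size. A small Python sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error (as a fraction) for a proportion
    observed in a simple random sample of size n, at 95% confidence.
    p=0.5 is the worst case and the usual conservative default."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(e, p=0.5, z=1.96):
    """Smallest sample size whose margin of error is at most e."""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

# A sample of 50 (the common B2B size mentioned here) gives roughly
# a +/-14 percentage-point margin at 95% confidence:
print(round(100 * margin_of_error(50)))  # 14
# Tightening the margin to +/-5 points needs ~385 responses:
print(sample_size(0.05))  # 385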

In statistics, whether your sample size is 10 or 10,000, you’re going to get a wobble: the running result moves around as responses come in, and the wobble becomes smaller and smaller over time. If you have a binary question (e.g., “Would you recommend our product to a friend?”), the wobble lasts longer, because a single respondent can swing the result.

In B2B, the common sample size is 50. The trick to making the result more accurate is to use a 10-point scale: with a scale such as the net promoter score, the wobble settles down and the result stabilizes much faster.
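One way to see why a 10-point scale settles faster: responses on a 0-10 rating typically vary less, relative to their range, than a 50/50 yes-or-no answer, so the standard error of the running mean starts smaller and shrinks from a lower base. A sketch with an assumed (illustrative) spread for the rating question:

```python
import math

def standard_error(sd, n):
    """Standard error of a mean: how far the running average of n
    responses typically sits from the true value."""
    return sd / math.sqrt(n)

# Rescaled to a 0-1 range: a 50/50 yes/no question has sd 0.5,
# while a 0-10 rating with sd ~2 points has sd 0.2 (assumed figure).
for n in (10, 50, 200):
    print(n, round(standard_error(0.5, n), 3), round(standard_error(0.2, n), 3))
```

At every sample size the scale question’s error is 2.5x smaller here, which is the “settles down faster” effect described above.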

If you want to split the data into new customers and long-term customers, you must look at the smallest group: a split isn’t sound unless each group has at least 10 responses. Even then, by PR standards, a magazine or newspaper would not be interested in publishing statistics with such a small base. In the end, the techniques you can use to make surveys more accurate depend on the context.

On Anonymity

Businesses like those listening to The UnNoticed Podcast have three different audience groups: external customers, staff, and suppliers. When surveying these groups, anonymity is a point of debate.

One Swiss telecom company is quite upfront and says its survey is not anonymous; it’s part of their culture. However, in a lot of organisations, where there might be a union, anonymity is essential. If you’re doing a compliance-related survey (as Boost Evaluation is on behalf of a client), you wouldn’t want to give the impression that a respondent’s supervisor will know if he or she talks about the company at a dinner party. In staff surveys, anonymity is indeed a big thing.

For customer surveys, it’s a different matter. Boost Evaluation encourages people to include a checkbox asking respondents for consent to quote their testimonials on the company homepage.

At Boost Evaluation, you can be confident that your responses will not be shared with anyone within the business, because the research is conducted by a third party.

Still, anonymity remains a debate. If you do offer anonymity, you lose the ability to send out reminders or attribute problems correctly. There is also a grey area where, in a staff survey for instance, you can know which department your respondents are from but not their names.

The Cost of Surveys and Evaluations

The pricing for surveys is flexible. With Survey Monkey, you can get a free licence when you sign up, but it will be peppered with advertisements and will limit how many people you can send your surveys to.

Meanwhile, there is a whole graded scale of plans. If you’re happy to pay 200 or 300 pounds, you can get a decent licence that loses the Survey Monkey branding and allows a lot of questions. As it’s an American system, you also have to adhere to the American CAN-SPAM law and avoid spamming thousands of people. There are also other, more economical alternatives.

If you’re in business, it’s important to listen to and evaluate your customers. Equally, you’re encouraged to sit down with your team and your partners to get some guidance on what to do next as you come out of lockdown.

Originally published at https://eastwestpr.com on April 28, 2021.
