There are many ways to find out about things. Research is obviously part of that. And research likes to use quantitative measures in order to maximise objectivity, even if these measures don’t give you much meaning.
Let’s look at hit rates on a website – a metric commonly used for “statistical purposes”. What does it mean? Simply that a page has been viewed a certain number of times. The inference is that the more hits you have, the better the result must be. But what is the result?
If the intended result is to have as many hits as possible, since one assumes hit rates equate with “eyeballs”, then surely high hit rate numbers are great. But is this the result an organisation really wants from its website? What happens as a consequence of the “eyeballs” is the question I really want answered. In reality, high hit rates could indicate a bad website. Your website visitors and customers are clicking away, frustrated by their inability to reach an outcome they want to achieve. Just get those click rates up and everything will be fine….hmmmmm.
Let’s do a survey then. A survey is actually pretty limiting. A questionnaire is bounded by the construction of the questions and limited answer options. In nearly all cases, one could answer a question yes or no, depending on the particular circumstances at some point in time. Surveys also don’t do a great job of measuring continuous change over time. And conducting surveys or focus groups with large numbers of people is often difficult and time consuming, certainly if a continuous process is required.
Yet these methods are still held to be superior to more qualitative approaches to research. However, if you actually asked your website users what they thought of the website, perhaps they might tell you that it takes a lot of clicking to complete the task at hand. They might tell you that your website is poorly organised, with lousy navigation and confusing labels. They might tell you that the photos on the home page add nothing to their customer experience. They might tell you that your website could be better… for them. And if you have a continuous dialogue with them, they will be even more insightful as to how to improve or validate what you are trying to do. Observation at point of impact is a good way of thinking about this.
I can see some meaning in getting those kinds of responses! Click rate numbers – forget it. Now I have real information that can make an impact for the people (and the organisation) I say I am trying to serve.
So what we are interested in finding out is impact. What is the impact that occurs from what we are doing? This is different to outcome. Click rates are an outcome. Continuous feedback that leads to satisfied customers who buy from you, recommend you, and stay loyal – that is impact.
Now what if we could get this feedback quickly, continually over time, on a large scale, context-sensitive, and in a way where the person giving us the information gives it in terms of how it affects them, and not through some intermediary or stilted survey method?
I set the scene this way to introduce some thoughts from a presentation at the ANU yesterday by Dave Snowden, special guest speaker at the ACT-KM forum. Dave talked about a number of current projects he was working on. The common element from his talk was the importance of determining impact and how then to take relevant action as a consequence.
I will use the example of the Liverpool Slavery Museum in the UK from Dave’s talk yesterday; albeit the Children of the world project was for me the most fascinating.
One could count the number of people going through the museum each month and year. The numbers might indicate level of popularity but one can’t be sure. At best, they show that “x” number of people came and paid “y” number of UK pounds to do so. One could do a simple accounting calculation at this point and perhaps leave it at that.
But what if you wanted to know what effect the museum had on people? What if you wanted to know how successful the museum was in educating visitors about slavery, or in providing a unique experience? What really was the impact of the museum visit?
[It is of course true that if you don’t want to know about your customers’ experiences and are happy with just throughput figures – akin to an assembly line – then impact will have very little interest for you. The process will be sufficient].
Dave told us how there are computer screens and keyboards at the museum where people can record their experiences and feelings about the museum exhibits. People can nominate any of the individual exhibits to make a comment or express a feeling. The people making these comments are then able to “index” or tag their comments using terms chosen freely that signify meaning to them. Nobody is interpreting what they say and adding any bias. At the same time, this information capture is continuous and provides for scale, something a static survey couldn’t do. The museum now has thousands of narrative fragments “indexed” by the individuals themselves. This information is aggregated and patterns observed. These patterns may suggest a change to a particular exhibit, or perhaps some alteration to how a museum education officer conducts a group tour.
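To make the aggregation step concrete, here is a minimal sketch of how self-indexed fragments might be rolled up into patterns. This is purely illustrative – the data, the exhibit names, and the functions are my own invention, not a description of the museum’s actual system or of Dave Snowden’s software – but it shows the principle: the visitor chooses the tags, and the analysis only counts and groups what visitors themselves said.

```python
from collections import Counter
from itertools import combinations

# Hypothetical fragments: free text plus tags chosen by the visitor,
# with no intermediary re-interpreting or re-indexing them.
fragments = [
    {"exhibit": "middle-passage", "text": "I felt overwhelmed", "tags": ["sadness", "history"]},
    {"exhibit": "middle-passage", "text": "The audio was hard to hear", "tags": ["audio", "frustration"]},
    {"exhibit": "abolition", "text": "Inspiring stories of resistance", "tags": ["hope", "history"]},
    {"exhibit": "middle-passage", "text": "Too dark to read the panels", "tags": ["lighting", "frustration"]},
]

def tag_counts_by_exhibit(fragments):
    """Aggregate self-chosen tags per exhibit to surface patterns."""
    counts = {}
    for f in fragments:
        counts.setdefault(f["exhibit"], Counter()).update(f["tags"])
    return counts

def tag_pairs(fragments):
    """Count which tags co-occur in one fragment - a crude signal of themes."""
    pairs = Counter()
    for f in fragments:
        pairs.update(combinations(sorted(f["tags"]), 2))
    return pairs

counts = tag_counts_by_exhibit(fragments)
print(counts["middle-passage"].most_common(3))
```

In this toy data, “frustration” turning up repeatedly against one exhibit is exactly the kind of pattern that might prompt a change to that exhibit, while a single odd tag is the “weak signal” worth a second look rather than automatic discarding.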
In the Liverpool Museum case, they have both quantitative information (number of visitors and monies received) AND what impact the museum had on the visitors.
Yet still there are detractors:
- stories (“narrative fragments” is the term preferred by Dave Snowden) are not real facts
- some people may just write junk and not tell the truth
- it’s all so subjective
All three statements might be true. The point is, if we want to measure impact, then we need to know what people think and what effect something had on them. And we need to know what they think, not what we might guess at. The capture and aggregation of narrative fragments is a good way of doing this. “Junk” can be easily discarded, but sometimes “junk” may be of interest as a weak signal, something we should pay attention to. Where you want to establish an impact on people, of course there is subjectivity. However, how the narrative fragments are captured, aggregated and used is quite a rigorous and objective method in itself.
Lastly, no matter what the method, unless people use the tools correctly and respond appropriately, no research activity will have any validity.