Category Archives: Research

What you say is not what you do

There is a bit of a ruckus at the moment about the power of Australia’s supermarket duopoly – Coles and Woolworths.

In the past the criticism was that the two supermarket chains had too much market power – together they hold over 80% of the Australian grocery market. That share probably remains much the same today despite all the brouhaha about market dominance over the past decade (i.e. there have been plenty of protestations at all levels of the community, and a number of government inquiries, but little tangible action to reduce this dominance).

The main brunt of the criticism relates to market concentration (the duopoly has reduced competition in the market) and to buying power (the duopoly can squeeze suppliers to almost unsustainable levels). In addition, the supermarkets can cross-subsidise their products when it suits them, thereby using their market power to artificially lower prices on “competitive” products.

In 2008 there was the Australian Competition and Consumer Commission (ACCC) inquiry into the competitiveness of retail prices for standard groceries, and back in September 2002 there was the ACCC’s report to the Senate on prices paid to suppliers by retailers in the Australian grocery industry.

One of the interesting snippets of information from these public inquiries is evidence that supermarket pricing differed depending on whether the duopoly had a location to itself or faced a third competing supermarket in the same geographic location. Where a location had three competing supermarkets, the Coles and Woolworths retail prices were generally lower than at locations where it was just Coles and Woolworths in competition. Well, as Michael Porter identified, businesses try to avoid price competition whenever they can because it directly affects margins.

The impact on suppliers is clear enough. It was loud and clear when I worked at Rabobank throughout the first half of the noughties. I would hear how the supermarkets were screwing agricultural suppliers through reduced prices and increased compliance costs. For example, one banana producer told me that the bananas had to be packed in a box in a very specific way, otherwise Woolworths would not accept delivery.

Nowadays, farmers have the same concerns but there are increasing demands from the duopoly concerning on-farm activities. Recently, one berry producer told me that having a dog on a berry farm was unacceptable because the dog may have been washed in a chemical bath that could get onto the berry fruit!

The supermarkets say that driving down consumer prices shows that a competitive market exists. Driving the retail price of milk down to one dollar a litre makes a lot of sense if one wants to sell lots of milk, but milk has relatively inelastic demand – lowering the price does not necessarily increase consumption. For the duopoly, however, a low price for a food staple like milk makes a lot of sense because it attracts shoppers to the supermarket rather than the corner store. If shoppers perceive the saving on milk to be large enough, they will alter their shopping behaviour in favour of the duopoly at the expense of other food retailers and small businesses. Instead of going to the local convenience store to pick up milk and some ancillary groceries, the shopper will concentrate their total grocery shopping at the supermarket. The duopoly wants consumers to stop buying any skerrick of groceries from alternative convenience stores and grocery retailers. The milk war is less about increasing consumer demand for milk than about increasing the market power of the duopoly.
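To put a rough number on that inelasticity claim, here is a minimal back-of-the-envelope sketch in Python. The figures are purely hypothetical – chosen only to illustrate the arithmetic, not taken from the actual milk war:

    # Arc (midpoint) price elasticity of demand.
    # All numbers are hypothetical, for illustration only.
    def arc_elasticity(p0, p1, q0, q1):
        pct_dq = (q1 - q0) / ((q0 + q1) / 2)  # % change in quantity
        pct_dp = (p1 - p0) / ((p0 + p1) / 2)  # % change in price
        return pct_dq / pct_dp

    # Suppose milk falls from $1.30 to $1.00 a litre, and a household's
    # consumption rises only from 100 to 103 litres a year.
    print(f"{arc_elasticity(1.30, 1.00, 100, 103):.2f}")  # -0.11

A magnitude well below one is the textbook definition of inelastic demand: the price cut does little for milk consumption, so its value to the duopoly must lie elsewhere – in pulling whole shopping baskets into the store.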

Currently, there is a lot of concern over the duopoly supermarket chains driving down supplier margins even further through “home brands” (also called private labels).  This article and this one sum up the private label issue nicely.

Everyone is out there saying how dreadful it is that the supermarket duopoly can do all these terrible things. However, the supermarket duopoly reduces prices on grocery items at the checkout for consumers (the same consumers who are equally screaming about the high cost of living).

A recent poll in the Sydney Morning Herald found that over 70% of people are against home brands because they limit variety (i.e. consumer choice). There is plenty of chatter to indicate that a similar percentage (or more) of people think that the supermarket duopoly has too much power.

But what does the behaviour say? Talk is cheap when there is no direct and tangible link to benefits or costs (i.e. there is no benefit or sanction as a consequence of responding to a survey or giving an opinion). A poll or a survey asks us what we think and we say so. We really believe what we say as well – Coles and Woolworths are bad.

However, it is likely that the very same people do their weekly grocery shopping at Coles or Woolworths. Mums and dads hold Coles and/or Woolworths shares as an investment, either directly or via a superannuation fund. Our actions really do speak louder than our words.

Whilst the supermarket duopoly is an important economic and marketing case study, the implications of saying one thing and doing another are huge. Are opinion polls really worth anything at all? The monthly tabloid treats of political opinion polls tell us the Gillard government would be wiped out if an election were held today – but it isn’t being held today. The next federal election (the real poll, where an outcome actually happens) isn’t for another couple of years. Opinion and speculation are now touted as fact in the media, yet these same opinion-makers are not held accountable when the future unfolds in real time and they are proved wrong.

If we are to make any sense of opinions linked to action, we need to actually examine the behaviours. This applies equally to marketing, economics, and knowledge management. It’s the logic behind behavioural economics, real-life behavioural research, and user experience. Mark Hurst’s Good Experience is a good example of looking at what actually happens as distinct from what reportedly happens. It’s the logic we need to apply in our knowledge management research as well.


On conferences

Conferences are events that I generally support because of the learning and conversations that take place.  I consider conferences to be an integral part of knowledge management, especially the person-to-person interactions that occur between sessions and at meal breaks.  The networking opportunities are also important.

But I am wondering if conferences are really all they are cracked up to be. I heard on the news last night that there is going to be a big conference in Canada in the coming weeks to discuss the donor response to the disaster emergency in Haiti. And in the world of international development there are always plenty of conferences taking place around the world. Are conferences the right forum for discussing disaster relief and emergency aid when people in Haiti still don’t have access to aid, food and shelter even now?

My questioning about conferences has triggered some thoughts about networks. The world wide web is a network of computers. An organisation is a network of functions performed by individuals, some of whom will form personal networks in order to do their jobs and become more effective in their work. So why aren’t networks sufficient to act in times of crisis, or at other times for that matter, instead of formal conferences? Conferences may act as a catalyst for the creation of networks, but at what point should networks replace conferences?

Whilst I have given examples from international development, the questions are just as valid for other subjects and issues.  I’d like to hear what people think about this conference issue, and whether there is any scope for networks to take over.

On organisational network analysis

I arranged for Cai from Optimice to come into AusAID today to give a short presentation on organisational network analysis (ONA).  Some people may also refer to ONA as social network analysis (SNA).

I had previously talked with Cai and Laurie from Optimice at the recent KM Australia conference in Sydney.  Cai had offered then to do a presentation for me on his next trip to Canberra.  And today was the day.

The interesting thing for me about ONA was the visualisation of the data and of the direction and intensity of relationships. My interest is largely directed at the information relationships between people, as well as the relationships between people and knowledge objects. At the same time, some consultants have just finished a draft report on the thematic networks, and I was thinking that the report could have been improved with some good organisational network analysis using the Optimice product.
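As a toy illustration of the kind of picture I have in mind (and emphatically not the Optimice product itself), here is a sketch using the open-source Python library networkx. The names and weights are invented; each directed edge means “this person goes to that person for information”, weighted by how often:

    # A minimal organisational network sketch with invented data.
    import networkx as nx
    import matplotlib.pyplot as plt

    G = nx.DiGraph()
    edges = [
        ("Anna", "Ben", 5), ("Ben", "Anna", 1),
        ("Carol", "Anna", 4), ("Ben", "Carol", 2),
        ("David", "Anna", 3),
    ]
    for seeker, source, strength in edges:
        G.add_edge(seeker, source, weight=strength)

    # Edge thickness shows the intensity of each relationship.
    pos = nx.spring_layout(G, seed=42)
    nx.draw(G, pos, with_labels=True, node_color="lightblue",
            width=[G[u][v]["weight"] for u, v in G.edges()])
    plt.show()

Even in this tiny invented network, the drawing immediately shows one person acting as an information hub – the sort of pattern that is hard to spot in a table but obvious in a picture.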

In addition, the online team in Communications were interested in the mapping possibilities tied to internet/intranet content management and the broader communication issues between head office and overseas posts.

Suffice it to say, I am keen to try out some ONA within my own workplace responsibilities in information and knowledge services. ONA might not have all the answers, but the visualisation of the data and relationships would be a great starting point for deeper research and analysis.

On judgement

The KM Australia conference is over for another year.  There were some great presentations and I took plenty of notes.  Thanks to everyone involved.  In particular, I want to thank Aimee Rootes from Ark Group.  Aimee was always helpful and pleasant, and went out of her way to find people when I couldn’t find them.

I am not going to launch into my notes from the conference just yet.  I do, however, want to tease out something that Dave Snowden mentioned in his presentation on Wednesday morning.  Dave said that “judgement is what KM is about”.  He reconfirmed the importance of judgement by saying that people in organisations “need to be allowed to make good judgements”.  This was not the central thrust of his presentation but it was important to me.

Judgement should be about choice. On the one hand, knowledge managers need to judge which elements from their knowledge management armoury are appropriate for which problem (or opportunity) and in what context. Sometimes knowledge managers take a tool box approach wherein everything in the tool box must be used, irrespective of the need. A KM tool box requires judgement as to what is appropriate for the task at hand.

But I think Dave was referring to allowing people in organisations to make judgements.  And judgements must be made when dealing with complex environments.  How can KM emerge or be successfully facilitated in an environment in which judgement (by the very people KM is supposed to help) is so completely hindered that standard drone-like thinking pervades everything one does?  An organisational culture needs to support and enhance good judgement.

In my current role I am looking at the way in which our information services go beyond just supplying information and research to people within the organisation.  This is indeed part of our function and we can say we have supplied x number of items and had hundreds of people read our material.  But this is not enough.

I am working on delving more deeply into how people use the information and research my team supplies – for what purpose and, most importantly, for what outcome. I will leave impact until later on – first things first! I am basically seeking to discover the knowledge trail from where my team gets involved, as part of a much wider process, and where that fits in to give people the capability and confidence to make decisions – or, in reality, to make judgements about what to do.

If judgement is what KM is about, then I want my team, and the services we provide now and into the future, to enhance the capacity and capabilities of people to make good judgements.

On outcomes and impact

There are many ways to find out about things. Research is obviously part of that. And research likes to use quantitative measures in order to maximise objectivity, even if these measures don’t give you much meaning.

Let’s look at hit rates on a website – a metric commonly used for “statistical purposes”. What do they mean? Well, they mean that a website or a page has been viewed a certain number of times. The inference is that the more hits you have, the better the result must be. But what is the result?

If the intended result is to have as many hits as possible, since one assumes hit rates equate with “eyeballs”, then surely high hit rate numbers are great. But is this the result an organisation really wants from its website? What happens as a consequence of the “eyeballs” is the question I really want answered. In reality, high hit rates could indicate a bad website: your visitors and customers are clicking away, frustrated by their inability to reach the outcome they want to achieve. Just get those click rates up and everything will be fine….hmmmmm.
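A tiny invented example makes the point; the log below is hypothetical, and “checkout_complete” is simply a stand-in for whatever outcome a site is actually meant to produce:

    # Hit counts versus task completion, using an invented web log.
    # (session_id, page) pairs
    log = [
        (1, "home"), (1, "search"), (1, "search"), (1, "search"),  # gave up
        (2, "home"), (2, "product"), (2, "checkout_complete"),     # succeeded
        (3, "home"), (3, "search"), (3, "home"), (3, "search"),    # gave up
    ]

    sessions = {s for s, _ in log}
    completed = {s for s, page in log if page == "checkout_complete"}

    print(f"hits: {len(log)}")  # 11 hits - looks healthy
    print(f"task completion: {len(completed) / len(sessions):.0%}")  # 33% - it isn't

Eleven hits looks healthy; a one-in-three completion rate tells a very different story.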

Let’s do a survey then. A survey is actually pretty limiting. A questionnaire is bounded by the construction of the questions and the limited answer options. In nearly all cases, one could answer a question yes or no, depending on the particular circumstances at some point in time. Surveys also don’t do a great job of measuring continuous change over time. And conducting surveys or focus groups with large numbers of people is often difficult and time consuming, certainly if a continuous process is required.

Yet these methods are still held to be superior to more qualitative approaches to research. However, if you actually asked your website users what they thought of the website, perhaps they might tell you that it takes a lot of clicking to complete the task at hand. They might tell you that your website is poorly organised, with lousy navigation and confusing labels. They might tell you that the photos on the home page add nothing to their customer experience. They might tell you that your website could be better… for them. And if you have a continuous dialogue with them, they will be even more insightful as to how to improve or validate what you are trying to do. Observation at the point of impact is a good way of thinking about this.

I can see some meaning in getting those kinds of responses! Click rate numbers – forget it. Now I have real information that can make an impact for the people (and the organisation) I say I am trying to serve.

So what we are interested in finding out is impact. What impact occurs as a result of what we are doing? This is different to an outcome. Click rates are an outcome. Obtaining continuous feedback to ensure satisfied customers buy from you, recommend you, and stay loyal is another.

Now what if we could get this feedback quickly, continually over time, on a large scale, in a context-sensitive way, and in a form where the people giving us the information describe it in terms of how it affects them, not through some intermediary or stilted survey method?

I set the scene this way to introduce some thoughts from a presentation at the ANU yesterday by Dave Snowden, the special guest speaker at the ACT-KM forum. Dave talked about a number of current projects he was working on. The common element of his talk was the importance of determining impact and how then to take relevant action as a consequence.

I will use the example of the Liverpool Slavery Museum in the UK from Dave’s talk yesterday, although the Children of the World project was, for me, the most fascinating.

One could count the number of people going through the museum each month and year. The numbers might indicate a level of popularity, but one can’t be sure. At best, they show that “x” number of people came and paid “y” UK pounds to do so. One could do a simple accounting calculation at this point and perhaps leave it at that.

But what if you wanted to know what effect the museum had on people? What if you wanted to know how successful the museum was in educating visitors about slavery, or in providing a unique experience? What really was the impact of the museum visit?

[It is of course true that if you don’t want to know about your customers’ experiences and are happy with just throughput figures – akin to an assembly line – then impact will have very little interest for you. The process will be sufficient].

Dave told us how there are computer screens and keyboards at the museum where people can record their experiences and feelings about the museum exhibits. People can nominate any of the individual exhibits to make a comment or express a feeling. The people making these comments are then able to “index” or tag their comments using terms chosen freely that signify meaning to them. Nobody is interpreting what they say or adding any bias. At the same time, this information capture is continuous and provides for scale, something a static survey couldn’t do. The museum now has thousands of narrative fragments “indexed” by the individuals themselves. This information is aggregated and patterns are observed. These patterns may suggest a change to a particular exhibit, or perhaps some alteration to how a museum education officer conducts a group tour.
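To make the aggregation step concrete, here is a minimal sketch in Python. The fragments and tags are invented, and the actual system Dave described is of course far more sophisticated – the sketch only shows how self-chosen tags can be rolled up without anyone re-interpreting them:

    # Aggregating self-tagged narrative fragments (all data invented).
    from collections import Counter

    fragments = [
        {"exhibit": "middle passage", "tags": ["grief", "anger"]},
        {"exhibit": "middle passage", "tags": ["anger", "shame"]},
        {"exhibit": "abolition", "tags": ["hope"]},
        {"exhibit": "middle passage", "tags": ["anger"]},
    ]

    # Roll up the visitors' own tags, exhibit by exhibit.
    by_exhibit = {}
    for f in fragments:
        by_exhibit.setdefault(f["exhibit"], Counter()).update(f["tags"])

    for exhibit, tag_counts in by_exhibit.items():
        print(exhibit, tag_counts.most_common())

If “anger” dominates one exhibit, that is a pattern worth acting on – perhaps by changing the exhibit, or by changing the way a group tour handles it.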

In the Liverpool Museum case, they have both quantitative information (number of visitors and monies received) AND what impact the museum had on the visitors.

Yet still there are detractors:

  • stories (“narrative fragments” is the term preferred by Dave Snowden) are not real facts
  • some people may just write junk and not tell the truth
  • it’s all so subjective

All three statements might be true. The point is, if we want to measure impact, then we need to know what people think and what effect something had on them. And we need to know what they think, not what we might guess at. The capture and aggregation of narrative fragments is a good way of doing this. “Junk” can easily be discarded, but sometimes “junk” may be of interest as a weak signal – something we should pay attention to. Where you want to establish an impact on people, of course there is subjectivity. However, the way the narrative fragments are captured, aggregated and used is quite a rigorous and objective method in itself.

Lastly, no matter what the method, unless people use the tools correctly and respond appropriately, no research activity will have any validity.

On a new city and a new job

It has been a while since my last post. I have been finishing up my work at the Fred Hollows Foundation and preparing for my move to Canberra to start a new job at AusAID. AusAID is the Australian federal government’s overseas development agency.

I am in the midst of the move from Sydney to Canberra, with limited internet opportunities. More news soon.

What I can report on is that the decision to start a new job in a new city was a lot more difficult to make than I would have thought, given my preference for Canberra over gridlocked Sydney any time!

Stay tuned and stay patient. I will be back on track in the coming week or so.

On narrative capture and drought

Having followed complexity theory and narrative for some time in the knowledge management literature, and enriched by the Cognitive Edge accreditation course I undertook this year, I have become more attuned to opportunities where narrative capture and sensemaking can be used to provide meaningful information for organisational development and as a guide for government policy.

I was therefore interested to read today about a report on Australian drought policy in which people’s stories made a significant contribution to the government’s understanding of the social impact of drought. The ABC reports that Agriculture Minister Tony Burke said that “many people told the report’s authors that it was the first time they had been asked about the way the drought was affecting them personally”. The personal stories are what gives meaning to the problems the report is supposed to identify and inform policy about.

Now, having previously worked at Parliament House in information and research for almost six years, I am well versed in the ways in which Standing Committee and Senate reports are researched, including the level of community consultation and public submissions involved. Yet the input from people is relatively small in number given the scale of the issues and the size of the affected populations. Not all views are expressed, and many people are not heard at all.

Imagine a government report looking at the social impact of drought that had used systematic narrative capture, personal tagging, and sensemaking software as the basis of the primary research. Imagine aggregating thousands of narrative fragments from all around drought-stricken Australia and using the aggregated information to have emergent themes become visible at different levels of intensity. The emergent themes are those that individuals within rural communities have identified by themselves and are therefore more representative of what is really happening than anything else.
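As a toy sketch of what those levels of intensity might look like once fragments are aggregated (all data invented, and the 20% cut-off for a “weak signal” is an arbitrary assumption of mine):

    # Surfacing emergent themes and weak signals (all data invented).
    from collections import Counter

    # (region, self-chosen tag) pairs from imaginary fragments
    fragments = [
        ("NSW", "debt stress"), ("NSW", "debt stress"), ("QLD", "debt stress"),
        ("VIC", "family breakdown"), ("NSW", "family breakdown"),
        ("QLD", "youth leaving town"),
    ]

    themes = Counter(tag for _, tag in fragments)
    total = sum(themes.values())

    for theme, n in themes.most_common():
        label = "weak signal" if n / total < 0.2 else "emergent theme"
        print(f"{theme}: {n} fragments ({label})")

The dominant themes are visible at a glance, and the low-frequency tag is flagged rather than discarded – exactly the kind of weak signal that deserves a follow-up question.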

From a policy perspective these themes are significant. Agriculture Minister Tony Burke recognises the policy need when he says in the same ABC news story: “the report shows the policies to support farmers and communities through drought still need a lot of work”. If government policy is really trying to get to the heart of the social problems facing drought-stricken families and communities, then hearing from these people in massive numbers and having them self-identify their concerns and problems is critical to finding a solution.

I listened to Dave Snowden at the act-km conference in Canberra last week talking about how seriously we should take people’s stories – or narrative fragments, as he prefers to call them. Dave blogs about this narrative issue when he says: “there is far more value in listening to stories, and gathering fragmented anecdotes than in telling stories. Meaning comes from fine granularity information objects (OK it’s jargon but it makes a point) and their interaction with my current reality. Not from some leader telling me a story (the other name for that is propaganda). Narrative work is about meaning making, not about story-telling (which has a double meaning in English)”.

Clearly, the real strength of narrative is about meaning! And narrative capture, self-tagging of content, and aggregation is a method of emergent issue identification that provides the meaning from which good policy can be developed for effective decision-making.

I believe that mass narrative capture and self-tagging of content can be aggregated in such a way that important thematic elements become visible to improve decision-making and problem-solving. At the very least, emergent themes bring “weak signals” to the surface for further questioning (often the weak signals go unnoticed during other forms of primary research).

Agriculture Minister Tony Burke would do well to consider this type of analysis for future research in policy development and implementation, especially in relation to social impacts.