people signals


talent leadership change innovation

What kills prediction markets

Recently I have been amazed again at the accuracy of prediction markets – and at their uselessness and irrelevance.

In recent weeks we had a few major decisions coming up in Germany. The first was who would win the soccer Bundesliga; the second, who would be Germany’s Next Topmodel. Both competitions were very tight calls and drew huge media attention. It was no surprise, then, that there were numerous polls on who would win these races. The Bundesliga race was the closest in recent history. Two games before the end, three teams were within close reach and could each still win one of Germany’s most important trophies. In the leading spot was Schalke 04, the team with the biggest budget of the three and a close runner-up in recent years. Second came VfB Stuttgart, the surprise team of the season with young players, a relatively small budget and no cup in 15 years. Third came Werder Bremen, the team that dominated the first half of the season and a recent Bundesliga champion. Close match, open outcome. What would the masses predict?

Germany’s biggest newspaper (Bild Zeitung) opened a poll on who would come in first. As people voted, they seemed to favor the underdog: VfB Stuttgart. Two weeks later, Germany had a new national champion – the young, inexperienced players from Stuttgart. Wow! That was a great call – no clarity in that decision, and the masses got it right. James Surowiecki must have liked that.

The second showdown came with the casting show Germany’s Next Topmodel. Heidi Klum took the Tyra-Banks role and eliminated a beautiful, ambitious girl each week. The final was made up of three hopefuls: Hana (the dark-haired Czech with Angelina lips and previous modeling experience), Ani (the blonde who worked in her parents’ boutique) and Barbara (the redhead studying math in Bavaria). Who would wear the crown at the end of this competition?

Again, Bild set up a poll. To my surprise (and that of some other bloggers, as I have read), the voters predicted Barbara as the winner. Last week’s show then ended with a big surprise: Barbara won the competition. Another Surowiecki moment!

Both calls seemed sort of odd to me. There was nothing clear-cut in either of them, yet both times the masses trumped the experts. Two additional observations, though, kind of killed the Surowiecki glory of those polls:

What both polls lacked was participants. In the Bundesliga poll about 2,000 people took part, and in the Topmodel vote about 1,000. That is almost nothing. The Bundesliga is followed closely by maybe 20% of the population (which would be 15 million individuals). Bild Zeitung is the biggest daily publication in Germany and is filled with Bundesliga news every day. Of all those people, only 2,000 voted. I am too lazy to do the math, but it doesn’t strike me as a lot.

The picture for the Topmodel competition is similar. The pre-final episodes had a market share of 25%, which is around 3 million people. Of all those media-savvy young people who blog and YouTube about this event, only 1,000 cast their vote in predicting the outcome. These marginal participation rates are similar to what I have witnessed with the pilot at our company. Of the 125 people signed up for the market, only 4–5 really traded. That is lousy participation.
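Since I claimed to be too lazy to do the math: here is a quick sketch of the participation rates, using the audience estimates above (15 million Bundesliga followers, 3 million Topmodel viewers, 125 signed-up pilot users). The numbers are rough estimates, not official figures.

```python
# Rough participation rates, using the audience estimates from the text.
polls = {
    "Bundesliga (Bild poll)": (2_000, 15_000_000),   # voters / estimated followers
    "Topmodel (Bild poll)": (1_000, 3_000_000),      # voters / final-episode viewers
    "Internal pilot market": (5, 125),               # active traders / signed-up users
}

for name, (participants, audience) in polls.items():
    rate = participants / audience * 100
    print(f"{name}: {rate:.3f}% participation")
```

This prints roughly 0.013%, 0.033% and 4.000% respectively, which backs up the gut feeling: even the best case, our internal pilot, had only one in twenty-five registered users actually trading.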

It is all the more striking since the accuracy proves so true. On our internal market, too, the active traders were quite good and the returns were nice. People genuinely like the idea of bottom-up information gathering and no-bullshit predictions. But then no one participated. If I look at prediction scenarios on the web, they don’t look much better: bizpredict is lame, etc. So even though the results are beautiful, they are obviously of no use to the people.

Which brings me to the final killer of prediction markets: irrelevance. What could Werder Bremen do about being traded as a non-winner? What could Schalke or Hana do about it? In a company we might say that this information can feed into the correction process or uncover problems early. Maybe. But so far prediction markets don’t have a mechanism to surface the right ideas for addressing a losing trade in the market. And that is a problem. A big one. Transparency only helps if it can trigger corrective actions. And if the masses are not involved in solving the problem, then the ball is back in the hands of the few experts.

While Surowiecki seems relevant to our thoughts on how to gather information and to our ideology of the positive effect of involving people, reality shows that it is no easy step to do what matters most in business: being useful and relevant.


Filed under: change, organization, prediction markets

coaching ourselves: business learning 2.0

I recently stumbled across an interview with Phil LeNir about “coaching ourselves”. This whole peer-coaching and learning-from-each-other idea always seemed interesting to me. Big companies spend big bucks on big training programs that often don’t have a big return. Sure, they evaluate right after the training how people liked it, etc. While these figures might make for a nice PowerPoint justification of the budget, I often have the feeling that the dollars are not really turned into value as they should be.

Part of the problem might be the unreal setting of those workshop programs: away from the people I work with, concepts that don’t easily fit my reality, little follow-up after the sessions. Maybe it is not so much concepts that need to be taught but rather attitudes, behaviors and approaches. It is telling that most companies don’t track the impact of a training some months afterwards. That might be too revealing, so you measure where it still looks good. So why do companies keep going with this kind of training? I suspect that they don’t have any better alternatives.

But maybe there is a better alternative. In these web2.0 days of openness, collaboration and bottom-up intelligence, it seems time for a change in the training departments. Here are a number of approaches that point in the direction of learning2.0:

Workout – this approach to problem solving was developed at General Electric under Jack Welch and is well documented. Basically, you bring a big group of people together (20-200), state the problem to them, let them come up with specific recommendations and then decide on what to implement. I have worked with this: very bottom-up-ish, very effective and very empowering. 

Wiki & Crowdsourcing – the idea of an open, everyone-contributes encyclopedia seemed very edgy a few years ago. These days wiki sites are among the first results in almost any Google search and many people rely on their information. Crowdsourcing takes this idea to how work gets done in a company and opens it to the undefined crowds. iStockphoto is an example: people upload their photos, they are sold through iStockphoto, and the money is split. What used to be a specialist’s job (shooting great images) is now up to the crowd.

Open Source Car – the OScar project is a very interesting consequence of this open approach. The idea is to create a car based on the open source principles with the hope of designing breakthroughs in mobility. There are a few basic specs, but the rest is open to everyone with no patents or legal limitations. The current version is at 0.2 but we will see if this approach works.

Omidyar Network – eBay founder Pierre Omidyar set up a foundation that tries to enable individuals to improve the quality of life. They invest in people with ideas in areas such as microfinance, participatory media, open innovation, open source and transparency in government. They have an interesting set of projects they have funded. What is interesting, though, is the decentralized nature of their venture: not some hired researchers cranking out great ideas, but funding for the most promising results of the network.

Coaching Ourselves – coming back to the learning issue in organizations. This approach works by getting a small group of learners together, meeting once a week for 90 minutes and discussing their experience and some relevant concept. No authority is present to teach, and no pre-readings, PPTs or action plans are used. Just one basic theme that is discussed in the group. The premise is that people learn best from their experience. Reflection and relevance are the two ingredients that fuel discussion and learning.

I also like the focus of these sessions. By using the five minds of managers from Henry Mintzberg (the reflective mindset; the analytic mindset; the worldly mindset; the collaborative mindset; the action mindset), the approach tries to keep a balance and not become one-sided. So not just people management, not just strategy, but a balance between the managerial mindsets.

I am just running pilots with this program in a management team. They love it so far: finally, something non-fluffy and real. Other groups are interested and we want to extend it to non-managers. It is too early to tell, but from the reactions I see in learners and my own assumptions about how to learn best, this might be just the right idea at the right time. Being cost-effective, learning-effective and scalable, this might be the better alternative that is currently missing.

Filed under: career, change, coaching, team

The (un)wisdom of prediction markets

We are just on the verge of launching a first prediction market in my area at work. I have been infected with the idea of swarm intelligence by reading The Wisdom of Crowds. The premise seems great: many people are smarter than the smartest. The promise is that this tacit knowledge can be made visible through a stock-market mechanism. So I researched a bit and found that companies use this for different purposes: Google predicts the launch dates of new products; Microsoft the number of bugs in a piece of software; HP looks at sales volumes; and GE judges innovative ideas. These are all nice, but the more I deal with it, the more I hear the question: so what?
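For the curious: one common way to implement such a stock-market mechanism is Hanson’s logarithmic market scoring rule (LMSR), where the current price of an outcome equals the crowd’s probability estimate for it. The sketch below is my own minimal illustration of that idea, not how any of the companies above actually built theirs.

```python
import math

class LMSRMarket:
    """Minimal two-outcome prediction market using Hanson's LMSR.

    The instantaneous price of an outcome equals the market's current
    probability estimate for that outcome.
    """

    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b means prices move less per trade
        self.shares = [0.0, 0.0]    # outstanding shares for outcomes YES / NO

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum(exp(q_i / b)))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        """Current probability estimate for `outcome` (0 = YES, 1 = NO)."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        """Buy `amount` shares of `outcome`; returns the cost to the trader."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before

market = LMSRMarket(liquidity=100.0)
print(market.price(0))            # 0.5 before any trades
market.buy(0, 50)                 # a trader bets on YES
print(round(market.price(0), 3))  # price rises above 0.5
```

The nice property is exactly the one the post is about: every trade nudges the price, so the running price is a live, collective probability estimate that anyone can read off the market.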

My standard reply is something like: gathering people, sharing the process, making intangible knowledge transparent, bringing focus to a topic. What draws me to prediction markets is their potential as an organizational tool. They involve the grassroots people who are doing the work and make decisions transparent and accountable. If the market works correctly and suppliers are judged by their customers, then it could even enable something like an internal supplier-customer rating that might work better than silo-focused MBOs. Anyway, that is a bit further away. Still, I wonder myself: so what?

What happens if your market shows that the product won’t launch on time? Who benefits from that knowledge? I realize that many people don’t even want to know. Not that they don’t want to face the truth – they know already – but they are not comfortable with having it in black and white for others to see. Also: what happens if the market predicts a no-show for a product? Is there any mechanism for judging the reason or suggesting alternatives? That would certainly be nice: use swarm intelligence to suggest improvements and have the market collectively judge the best ones.

It is a similar thing with bugs in software. Nice to know that it might not be up to standards, but even nicer to have a market that predicts (and pre-selects) the most promising levers for changing that. Sales volumes are similar, just like innovation.

It seems to me that a prediction market is limited by the criteria it has to be matched against. The value of the stocks being traded is tied directly to the criteria being defined. The markets don’t allow collective problem-solving, impact estimation or decision-influencing. They reflect what people think about the future. So far I haven’t seen a way for them to enable an organization to shape the future with collective intelligence. That would really answer the question of “so what”: it would be a tool for improvement and collective participation. Now, that is a stock I would buy.

Filed under: change, organization

How to get people to work together and change

I had a big aha moment reading a recent Harvard Business Review article. In it, Clayton Christensen and colleagues discuss the different tools for getting people to work together and change behaviors. Most change books are about the one method you can use to change organizations. But they highlight that there are different ways to drive change depending on the context. So they propose to look at agreement along two dimensions: what people want and how they envision it happening.

[collaboration matrix] Depending on where you are along these axes, different tools are important to support collaboration or change. The cause-and-effect axis in particular is very interesting. I have repeatedly observed that in an intense group there is high agreement on what is wanted, but not so much on the means. The tools they propose are helpful for navigating this system. The message is that there is simply not just one way to bring about collaboration. You need an understanding of where you are and then of what tools to use to drive it. Or, as they say it so well themselves:

“One of the rarest managerial skills is the ability to understand which tools will work in a given situation—and not to waste energy or risk credibility using tools that won’t.”

Good stuff!

Filed under: change, organization, team