After intense competition between Lee DeWyze and Crystal Bowersox for the 2010 American Idol title, Lee took home the crown. It was a battle between reflective, general competence and quirky, stylistic competence, and general competence is what the American people wanted this time. Though an obvious majority of Idol fans are delighted that Lee won, another group is wondering how this could possibly happen. Well, here is the final answer on Why Lee Won.
First, let’s have a look at how Lee and Crystal have been doing on a weekly basis. The chart shows the percentage of opinions that were positive towards Lee and towards Crystal. Since the beginning of March, they have shared the top spot, but Lee has taken that spot on a more consistent basis.
Lee: Emotions towards the two contestants were fairly similar. About 48% of people had positive feelings towards Lee (as opposed to neutral or negative) while about 47% had positive emotions towards Crystal.
Lee: Recommendations for the two contestants were slightly higher for Lee. 43% of recommendations for Lee were positive as opposed to 39% for Crystal.
Lee: Physical appearance played a role as Lee generated 43% positive opinions about his looks while Crystal only generated 37% positive opinions about her looks.
Crystal: It could have been Crystal’s wacky locks that pulled her down, but it wasn’t. Opinions about her hair generated 39% approval while opinions about Lee’s hair, as ordinary as it is, generated 36% positive opinions.
Lee: Crystal clearly had an issue with her discolored teeth and this also contributed to her physical appearance scores. But, many people were able to get past that as 24% of opinions about her teeth were positive while 29% of opinions about Lee’s teeth were positive.
Lee: Regardless of how true, Crystal’s disheveled hair and discolored teeth contributed to an overall opinion about cleanliness. Crystal generated only 34% positive scores in this regard while Lee generated 43%.
Lee: Crystal’s unique personality did not go unnoticed, as 20% of people approved of her quirkiness. But the calm, even personality of Lee was more desired, as 39% of opinions about Lee’s emotional stability were positive.
Lee: Lee’s quietness was very appealing to people as he generated a 47% positive score compared to Crystal’s 30% positive score.
Crystal: Not surprisingly, Crystal generated much higher new and different scores, with 50% positive opinions compared to Lee’s 42% positive opinions.
Crystal: And, Crystal definitely owned the ‘cool’ factor as she generated 55% approval compared to Lee’s 47% approval in this area.
Crystal: Crystal was also seen as far better at selecting songs, with 69% approval compared to 51% for Lee.
Crystal: These unique Crystal characteristics led to her generating slightly higher anticipation scores. Crystal generated 50% positive opinions while Lee generated 47% positive opinions.
Given that these are the areas generating the most differences between Lee and Crystal, one thing becomes clear. You might think appearances don’t matter, but that is Why Lee Won.
Do you like TweetFeel Jessie? We certainly do, which is why we had him printed on a bunch of t-shirts. And we’d love to give some of those t-shirts to you! How is that possible, you ask? Well, here are the rules.
1) You must be a registered attendee at either MRIA Toronto 2010 or MRA Boston 2010.
2) You must attend the entire social media research presentation given by Conversition’s Annie Pettit (@LoveStats).
3) You must be one of the first five people to give Annie your business card when you ask a legitimate SMR question after her presentation. Annie will decide if your question is legit! You must provide the business card quietly and with NO session disruptions. Any disruption and the game will end.
Peanut Labs and Conversition Strategies Team Up to Launch SocialVoice, a New Social Media Research Product to Enable Researchers to Ask & Listen
SAN FRANCISCO – May 11, 2010 – Peanut Labs, the leading provider of social media sample for market research, and Conversition Strategies, a boutique social media marketing research firm, have recently teamed up to offer SocialVoice, a new social media research product grounded in scientific fundamentals.
Developed to be the next generation of market research, SocialVoice allows researchers to scientifically measure opinions registered in social media. Designed by researchers with over 80 years of research experience, SocialVoice was built specifically with the needs of researchers in mind. It provides the necessary variables to transform unstructured social media conversations into data that mirrors traditional survey research data.
By incorporating techniques such as sampling, weighting, sentiment analysis and content analysis, SocialVoice takes advantage of processes that facilitate traditional research features. These techniques ensure that researchers can use both survey research and social media research in cooperation to enhance their product learnings.
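The weighting step mentioned above can be sketched in a few lines. This is a minimal post-stratification illustration under assumed numbers, not SocialVoice’s actual implementation: suppose we know the gender split of authors in a social media sample and want the data to mirror a 50/50 survey population.

```python
# Minimal post-stratification weighting sketch (illustrative only;
# not SocialVoice's actual method or field names).
from collections import Counter

def poststratify(records, key, target_shares):
    """Weight each record so the sample mirrors target population shares."""
    counts = Counter(r[key] for r in records)
    n = len(records)
    weighted = []
    for r in records:
        sample_share = counts[r[key]] / n          # share observed in the sample
        weight = target_shares[r[key]] / sample_share  # up/down-weight to target
        weighted.append({**r, "weight": weight})
    return weighted

# Hypothetical sample: social media authors skew male; surveys say 50/50.
sample = [{"author": i, "gender": "m" if i < 7 else "f"} for i in range(10)]
weighted = poststratify(sample, "gender", {"m": 0.5, "f": 0.5})
# Male records are down-weighted (0.5/0.7), female records up-weighted (0.5/0.3),
# so weighted tallies mirror the survey population.
```

The same idea extends to any profiling variable (age, region) for which both the sample distribution and a target distribution are known.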
“We are pleased to begin this relationship with Peanut Labs, a company that has proven it understands the importance that social media plays in the marketing research industry,” said Tessie Ting, co-founder of Conversition. “The fact that researchers can easily combine the benefits of traditional survey research with those of social media research is of utmost importance to us, and we’re happy that SocialVoice can be that uniting product,” adds co-founder Jean Davis.
About Conversition Strategies
Conversition Strategies is a boutique online market research firm based in the US (Conversition Strategies Limited) and Canada (Conversition Strategies Incorporated). Conversition listens to consumers by applying scientific principles to the collection and analysis of social media data. Its strength lies in combining the expertise of globally respected market researchers with social media mavens.
About Peanut Labs
Peanut Labs connects researchers to social media through partnerships with 400+ leading networks and applications. The company provides access to fresh respondents globally by profiling users from an online population of over 240 million unique individuals. Peanut Labs is based in San Francisco with offices in Seattle and New York City. The company was founded in July 2007 and is privately held. For more information, visit www.peanutlabs.com.
Over the last couple of weeks, the social media space has engaged in extensive battles over whether human or automated sentiment analysis is better. It’s a difficult question to answer and may in fact begin with this question:
“Why assume that automated systems are more accurate judges of human emotions than human beings?”
Human sentiment analysis has many advantages, the first undoubtedly being that it is more valid than automated sentiment analysis. This is no surprise. It is not in contention. Human beings are better at sentiment analysis than automated systems. After all, it is humans who make automated systems work to begin with, and humans who work day to day to improve those systems. End of story.
Of course, the story does not end there. Humans are fabulous at understanding the humor of their best friends but not quite as good at interpreting the humor of strangers. Humans understand spelling mistakes… usually. We understand grammatical errors… usually. Humans never get tired or bored or inconsistent after coding millions of records for weeks on end. Wait… that’s not right.
The automated sentiment analysis systems that humans have carefully created over the last few decades were built to handle some of these human disadvantages. Forget human capabilities of 250 words per minute and start thinking automated capabilities of thousands of words per second. Forget how my interpretation differed from yours because I’m tired and you didn’t get the joke, and consider the 100% reliability of automated systems even after they’ve been working for ten days straight with no coffee. If you think about it, the accuracy of automated systems boosted with human intervention is quite impressive.
There is no perfectly valid sentiment analysis system, but there is a system that is right for you. Do you need high accuracy on a small amount of data? Then human coding is right for you. Do you need good accuracy across millions of records? Then automated systems are right for you.
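As a toy illustration of why automated systems win on throughput, here is a minimal lexicon-based scorer. The word lists are hypothetical stand-ins; real systems use far larger lexicons plus negation handling and human-tuned rules, but even this trivial version scores thousands of records per second without tiring.

```python
# Toy lexicon-based sentiment scorer (a sketch; real automated systems
# use large lexicons, negation handling, and human refinement on top).
POSITIVE = {"love", "great", "cool", "amazing", "win"}
NEGATIVE = {"hate", "boring", "awful", "lose", "tired"}

def score(text):
    """Positive return value = net-positive text, negative = net-negative, 0 = neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(score("I love Crystal, she is so cool"))  # 2
print(score("Lee was boring tonight"))          # -1
```

Unlike a human coder, this function gives the same answer on record one and record one million, which is exactly the reliability argument made above.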
So who would win the fight? I’d say there was never a fight to begin with.
Remember the good old days, the days of surveys? The good old days when you sent out 1000 surveys, waited two weeks, and were guaranteed to get 300 back but never more than 1000 back? That was nice, wasn’t it?
Social media research is a completely different story. We don’t send out surveys; we send out crawlers. We don’t send out questions; we bring back answers. We don’t wait to receive 100 completes; we cross our fingers that we have space to hold millions of records. And those are just the records from yesterday.
The interesting thing about SMR is the quantity of data. In fact, there is so much data on the internet that even speedy and greedy Google hasn’t managed to collect it all. Researchers, on the other hand, are far more careful and picky about collecting data. We still get excited as data come in, minute by minute, and we carefully observe them as they arrive.
We notice that some of the data are fresh out of the oven, created just minutes ago, while other data are less fresh, perhaps created hours or days ago. And then there are always the data that were created weeks or months ago. These are opinions written with just as much passion as any others, but they had been hiding behind poorly tagged blogs and rarely indexed folders. They were there all along, valid and important opinions desperately trying to be heard and appreciated, and only found now.
Data gathering from the internet is full of highs and lows. Some days, data pour in by the truckload; other days, they come in more slowly. Some days, data are as fresh as fresh can be; other days, they are discovered from long ago, like ancient treasures of Egypt. No matter when an opinion was offered, though, researchers know that it’s important to count every opinion.
That is the interesting thing. You used to know that the 300 surveys in your hand would still be 300 surveys tomorrow. Social media has turned that traditional data rule on its head. Three hundred verbatims today might be 350 tomorrow and 600 next week. And, that’s ok. It’s better to have collected a larger proportion of the population than to have unknowingly excluded subsamples that might completely change the results. It may not be how we’re used to collecting data, but researchers too must change with the times.
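The "growing 300" can be sketched as a simple incremental merge: each crawl batch is deduplicated against what we already hold by record ID, so yesterday’s count can only grow as late-indexed posts surface. The function name, field names, and in-memory store here are hypothetical, not any particular crawler’s design.

```python
# Incremental merge of crawl batches, deduplicated by record ID
# (a sketch; the store and field names are hypothetical).
def merge_batch(store, batch):
    """Add newly discovered records; already-seen IDs are left untouched."""
    for record in batch:
        store.setdefault(record["id"], record)
    return len(store)

store = {}
day1 = [{"id": i, "text": f"opinion {i}"} for i in range(300)]
day2 = [{"id": i, "text": f"opinion {i}"} for i in range(250, 350)]  # overlap + new finds

print(merge_batch(store, day1))  # 300 verbatims today...
print(merge_batch(store, day2))  # ...350 tomorrow: 50 late-indexed posts surfaced
```

Because the merge is idempotent, re-crawling the same pages never shrinks or double-counts the dataset; it can only reveal the subsamples that were hiding.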