[NOTE: this post originally appeared on Datachondria, a blog dedicated to technology, data, and modern life.]
Kudos -- again -- to LibraryThing for introducing a very nice little feature called "will you like it?"
This looks to be another quick way of using their very intelligent recommendations algorithm. But the name of the feature itself is another little nudge in the direction of leveraging "negative data" -- what we hate as well as what we like. Why don't companies and services make more use of information about what we spurn, as well as what we actively seek out?
I've no doubt that some of the better recommendation engines do indeed use this data, but the fact that it isn't foregrounded means that users aren't inclined to, for example, apply low ratings to things that they actively hate. Indeed -- the entire conceptual language of our user interfaces is geared against this. When I'm rating items in Amazon or iTunes, I'm not inclined to give any stars at all to something that I hate. I want to banish it, not apply the most meagre of rewards.
What songs do I never listen to? Which songs do I skip away from within the first 30 seconds? When I'm browsing a newspaper website, which writers do I always fail to read -- suggesting that I'm deliberately avoiding them? Which websites do I always refuse to click through to?
What books have I started but never finished? Wouldn't it be nice if Amazon could tell me I have only a 50 percent likelihood of finishing Finnegans Wake?
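For the sake of illustration, here's a minimal sketch of what folding these negative signals into a recommender might look like. To be clear, this is not how LibraryThing, Amazon, or anyone else actually does it -- the signal names, weights, and scoring function are all my own assumptions:

```python
# A minimal sketch (not any real service's method) of turning implicit
# negative signals into a penalty on recommendation scores.
# All signal names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ItemSignals:
    plays: int          # times the item was started
    early_skips: int    # abandoned within the first 30 seconds
    completion: float   # average fraction finished, 0.0-1.0

def negative_score(s: ItemSignals) -> float:
    """Return a dislike penalty in [0, 1]: 0 = no evidence, 1 = strong."""
    if s.plays == 0:
        return 0.0  # never tried is not the same as disliked
    skip_rate = s.early_skips / s.plays
    abandonment = 1.0 - s.completion
    # Assumed weighting: an early skip is a stronger signal of dislike
    # than simply failing to finish.
    return min(1.0, 0.7 * skip_rate + 0.3 * abandonment)

def adjusted_score(base: float, s: ItemSignals) -> float:
    """Down-weight a recommender's base score by the dislike penalty."""
    return base * (1.0 - negative_score(s))

# A song started 10 times, skipped early 8 times, half-finished on average:
song = ItemSignals(plays=10, early_skips=8, completion=0.5)
print(adjusted_score(0.9, song))  # the 0.9 base score drops to about 0.26
```

The point of the sketch is simply that the data is cheap to collect and trivial to apply -- the skip and abandonment counters already exist in most players and e-readers; what's missing is the will to use them as a first-class signal.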
Companies do everything they can to push highly viewed or rated content to customers -- but toxic content doesn't get pushed out of the way with quite the same enthusiasm. As anyone who has been involved in merchandising will tell you, bad content isn't just a "dead zone" that good content can safely sit alongside. It risks infecting everything around it.