Two Tales From the Edges of Algorithms

July 10, 2014

Two short tales landed in my view this past week that provide lessons on the difficulties of getting computers to take over complex problems in their entirety.

What Facebook’s Newsfeed Thinks It Knows

While I was catching up with a friend last week, he told me a story that could easily be a cautionary tale about using social networks to augment real-world relationships. K______ has a friend whom he sees a few times a week and talks with regularly. That regular contact is where they socialize, and because of it they don't interact much on Facebook. In particular, they don't interact with each other's status messages, photos and links.

Without realizing it, over time they had each fallen off the other's Newsfeed, because Facebook watches whose posts you interact with and uses that to help decide what to show you. Though the exact Newsfeed recipe is a secret, The Daily Beast did some experimenting and came away with some interesting conclusions about how the Newsfeed decides what to show. But getting back to our friends here…

Recently, K______ realized he hadn't seen his friend in longer than normal. It was too late in the day to call, so he looked her up on Facebook. Her few status updates of the past few days said that her boyfriend was in the hospital after being hit by a car, and that was where her life was right then. There were a bunch of well-wishing comments on the posts, but K______ hadn't seen any of it.

What's interesting here is the assumption embedded in the Newsfeed algorithm: that the frequency of interaction with posts indicates the currency or relevance of a relationship between two nodes in the "social graph." Assumptions like that are necessary, and often they work really well. But when they fail, they expose the reality that social networks are incomplete representations of relationships, just as profiles are incomplete representations of people.
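To make that assumption concrete, here's a minimal sketch of interaction-based ranking. Everything below is invented for illustration; the real Newsfeed weighs many more signals and its internals aren't public.

```python
from collections import Counter

def rank_posts(posts, interaction_log):
    # Count past interactions (likes, comments, clicks) per author.
    affinity = Counter(event["author"] for event in interaction_log)
    # Posts from friends you never interact with score zero -- which is
    # roughly how K______ and his friend vanished from each other's feeds.
    return sorted(posts, key=lambda p: affinity[p["author"]], reverse=True)

posts = [
    {"author": "close_friend", "text": "My boyfriend is in the hospital..."},
    {"author": "acquaintance", "text": "Look at my lunch!"},
]
interaction_log = [{"author": "acquaintance"}] * 12  # lots of idle liking

for post in rank_posts(posts, interaction_log):
    print(post["author"], "->", post["text"])
```

The toy ranker surfaces the acquaintance's lunch photo ahead of the close friend's emergency, for the same reason the real feed did: silence reads as irrelevance.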

Facebook is good at showing you what’s going on with people you know, but when those gaps are exposed they can be dramatic.

A Bit of White Flag

A great deal of what Google has done with search, and the philosophy it brings to almost all its endeavours, rests on the idea that computers can take over entire jobs from people. The Google way is a highly technocratic one, an ethos of pushing the technological envelope and sometimes straining or violating human norms in the process.

In search, there has been growing criticism of Google's inability to control content farms: websites that publish garbage or stolen content optimized for high search rankings and stuffed with more ads than Times Square. Google also happens to be the world's biggest ad placement service, and critics have suggested that Google benefits too much from ad placement to fight hard against what Danny Sullivan of Search Engine Land lambasted as an Internet sewage factory.

That goes too far; Google cares too much about getting data right to indulge in that kind of dirty play. A couple of years ago when they announced Chrome, I speculated that part of its potential was to provide another way for Google services to learn from people's browsing behaviour. Today they took a step in that direction by releasing the Personal Blocklist plugin, which lets people blacklist junk sites from their future search results, and in doing so calls out content farms to Google.

This is the first overt move into social search I've seen from Google. The plugin is a good idea: it provides personal value, and the same action implicitly generates community value. Always on brand for Google, the plugin is 'experimental', but it doesn't take a lot of imagination to see a bit of surrender here: they can't get the algorithm to identify content farms reliably, so it's time to call in some human intel, crowdsource style.
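Here's a toy sketch of that two-for-one design. All names are hypothetical; I don't know what the plugin actually sends back to Google, only that blocking a site could double as an implicit spam vote.

```python
# Hypothetical sketch: one user action yields personal value (filtered
# results) and community value (a signal Google could aggregate).
# Names here are invented; the plugin's internals aren't public.

blocked_domains = set()
community_reports = []  # stand-in for whatever signal goes back to Google

def block_domain(domain):
    blocked_domains.add(domain)       # personal value: never see it again
    community_reports.append(domain)  # community value: an implicit spam vote

def filter_results(results):
    return [r for r in results if r["domain"] not in blocked_domains]

results = [
    {"domain": "contentfarm.example", "title": "Top 10 Best Anythings Ever"},
    {"domain": "useful.example", "title": "An actual answer"},
]
block_domain("contentfarm.example")
print(filter_results(results))  # only useful.example survives
```

The elegance is that nobody has to be asked to volunteer: cleaning up your own results is the report.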

It's a humanizing, and maybe a bit humiliating, moment for Google, which recently bragged about building a robotic car it disturbingly trusted enough to drive through civilian traffic without a special permit (but with a human override on hand). Will crowd wisdom tune Google's algorithm to the content farm problem? Possibly. Will content farmers automate Chrome to send in false positives, or to tattle on competing sites? Also possible; they're clever jerks. Only time will tell, and the answer will likely be somewhere between those extremes, which is where conjecture usually lands.

Algorithms are powerful, but so is human intelligence. It’s funny that while writing this I stopped to see tweets about a computer kicking butt on Jeopardy tonight. It’s been a bad and a good day for the Algorithm in Society. Who knows what tomorrow will bring, but I feel like John Henry is having a laugh somewhere.
