Algorithms need humans to help untangle meaning

Updated: 2013-03-31 08:19

By Steve Lohr (The New York Times)


Trading stocks, targeting ads, steering political campaigns, arranging dates and even choosing bra sizes: computer algorithms are doing all this work and more.

But increasingly, they have a decidedly old-fashioned helper - a human being.

Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded: context and nuance often elude them, and they cannot always untangle the ambiguity of human meaning.

"For all their brilliance, computers can be thick as a brick," said Tom M. Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh.

And so, while programming experts still write the step-by-step instructions of computer code, the work computers do has become involved enough that other people are needed to make subtler contributions. They evaluate, edit or correct an algorithm's output, or they assemble online databases of knowledge and check and verify them.

Question-answering technologies like Apple's Siri and I.B.M.'s Watson rely especially on this emerging human-machine collaboration. Twitter uses an army of contract workers, whom it calls judges, to interpret the meaning and context of search terms that suddenly spike in frequency.

Even at Google, where algorithms and engineers reign supreme, the human contribution to search results is increasing. Several months ago, Google began presenting summaries of information on a search page when a user typed in the name of a well-known person or place. These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.

When Google's algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages.
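
The pattern the article describes can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration, not Google's actual system: the knowledge-base entries, the search function, and the returned fields are all invented for the example.

```python
# A minimal sketch of the general "knowledge panel" pattern described above --
# not Google's implementation. The knowledge base, its entries, and the
# function names here are hypothetical illustrations.

# Human-curated knowledge base: entity name -> structured summary.
KNOWLEDGE_BASE = {
    "marie curie": {
        "type": "person",
        "summary": "Physicist and chemist, pioneer of radioactivity research.",
        "source": "curated from encyclopedic databases",
    },
}

def search(query: str, ranked_links: list[str]) -> dict:
    """Return a curated summary panel when one exists; otherwise plain links."""
    entry = KNOWLEDGE_BASE.get(query.strip().lower())
    if entry is not None:
        # The engine "goes and fetches" the distilled, human-edited facts.
        return {"panel": entry, "links": ranked_links}
    # No curated entry: fall back to ordinary link results.
    return {"links": ranked_links}

print(search("Marie Curie", ["https://example.com/curie-bio"]))
```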

"There has been a shift in our thinking," said Scott Huffman, an engineering director in charge of search quality at Google. "A part of our resources are now more human curated."

Other human helpers, known as evaluators or raters, help Google refine its search algorithm, a powerhouse of automation, fielding 100 billion queries a month. "Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement," Mr. Huffman said.

Katherine Young, 23, is a Google rater - a contract worker and a college student in Macon, Georgia. In a typical task, she is shown an ambiguous search query, presented with two sets of Google search results, and asked to rate their relevance, accuracy and quality.

Of her judgments, Ms. Young said, "You try to put yourself in the shoes of the person who typed in the query."
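
The workflow Ms. Young describes is, in essence, a side-by-side human evaluation. Below is a toy Python sketch of how such ratings might be aggregated to judge whether a proposed algorithm change is an improvement; the scoring scale, the threshold and the function name are assumptions for illustration, not Google's actual procedure.

```python
# A toy sketch of side-by-side evaluation: raters score results from the
# current algorithm ("control") and a proposed change ("experiment"), and
# the aggregate verdict tells engineers whether the change helps.
# The scores and threshold below are invented for illustration.

def judge_change(ratings: list[tuple[int, int]], min_gain: float = 0.1) -> bool:
    """Each tuple is (control_score, experiment_score) from one rater,
    on any consistent scale (say, 1-5). Returns True if the experiment
    wins by at least `min_gain` on average."""
    if not ratings:
        return False
    avg_control = sum(c for c, _ in ratings) / len(ratings)
    avg_experiment = sum(e for _, e in ratings) / len(ratings)
    return avg_experiment - avg_control >= min_gain

# Three raters each score both result sets for the same ambiguous query.
ratings = [(3, 4), (2, 4), (3, 3)]
print(judge_change(ratings))  # True: the suggested change looks like an improvement
```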

Ben Taylor, 25, is a product manager at FindTheBest, a start-up in Santa Barbara, California, that compares topics and products, from universities to nursing homes, smartphones to dog breeds. Much of its information is prepared in templates and tagged with code a computer can read. The process has become more automated, with Mr. Taylor and others essentially giving "go fetch" commands that the computer algorithm obeys.
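
The template-and-fetch idea can be sketched in a few lines of Python. Everything below is hypothetical: the Listing fields and the fetch helper stand in for whatever schema FindTheBest actually uses.

```python
# A hedged sketch of the template idea described above: humans define a
# structured, machine-readable template once, and the program "goes and
# fetches" comparable records automatically. The fields and data are
# hypothetical, not FindTheBest's actual schema.

from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    category: str
    rating: float      # human-curated quality score
    price_usd: float

LISTINGS = [
    Listing("Acme U", "universities", 4.2, 12000.0),
    Listing("Zenith U", "universities", 3.8, 9000.0),
]

def fetch(category: str, sort_field: str) -> list[Listing]:
    """The 'go fetch' command: filter by category, sort by a tagged field."""
    items = [x for x in LISTINGS if x.category == category]
    return sorted(items, key=lambda x: getattr(x, sort_field), reverse=True)

for item in fetch("universities", "rating"):
    print(item.name, item.rating)
```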

The algorithms are getting better. But they cannot do it alone.

"You need judgment, and to be able to intuitively recognize the smaller sets of data that are most important," Mr. Taylor said. "To do that, you need some level of human involvement."

The New York Times