Baidu gets ex-Google scientist Ng in race for AI research
Updated: 2014-06-09 11:58
By Chris Davis in New York (China Daily USA)
Chinese search giant Baidu - often called the Google of China - has doubled down in the Artificial Intelligence (AI) race. Not only has it hired away Andrew Ng, one of Google's top scientists in the field, but it is also opening a new research center in Google's Silicon Valley backyard.
Baidu will put $300 million into the project, and Ng and his long-time collaborator Adam Coates are at work building a new staff there. "I feel excited about the work and I'm honored Adam has joined me," Ng said in a telephone interview with China Daily.
Ng's specialty is an esoteric field that goes by several abstract names: machine learning, deep learning, unsupervised learning, autonomous artificial intelligence. All of them basically mean getting computers to teach themselves without being explicitly programmed. The holy grail of the field is an algorithm that mimics how the brain works.
"Most of us use algorithms dozens of times a day without realizing it," Ng said. "Every time you check your email and your spam folder saves you from going through hundreds of spam emails, every time your cell phone camera auto-focuses on your friend's face, that's machine learning."
Deep learning is the technology that specifically takes inspiration from how the brain's neural network operates, the goal being to "build software that learns from data," Ng said. "And in the past few years, I think deep learning has created substantial economic value.
"At web search engines like Baidu it is allowing us to search more web pages and serve better ads to users. It's a technology that is taking the machine learning world by storm, and it's something we plan to build on here in our research," Ng said.
Ng first got interested in AI as a 16-year-old intern for a professor at the National University of Singapore, where he helped implement a neural network. "Since then," he said, "I just thought what more meaningful thing could there be for me to work on than to make computers smarter so that they can help people more."
As a professor at Stanford University, Ng's access to computers limited his research to relatively small neural networks. So he looked around Silicon Valley.
"It turned out Google had a lot of computers," he said. "So I started a project to build a much larger deep learning facility, 100 times bigger than what academia previously had been able to do."
That project, started in 2011, was dubbed "Google Brain" and one of the team's early successes came when it connected 16,000 computer processors into a "neural network" model of a brain - at the time the largest of its kind in the world.
"Imagine, if you will, it's like a little simulated baby brain and it wakes up not knowing anything and what we decided to do was make it watch YouTube for a week and after a week we would probe it to try and figure out what it had learned," Ng said.
The team expected the network to begin to recognize the most common image on YouTube - the human face - which it did, but to their complete surprise, the machine taught itself to also recognize cats.
"The remarkable thing about this was that no one had ever told it what a cat is," Ng said. "It had discovered the concept of a cat by itself."
Deep learning is broadly broken into two categories, Ng explained. The first is learning from "tagged data": presorted, labeled units of information, such as 50,000 pictures of cars. The state-of-the-art software for soaking up huge amounts of tagged data, Ng said, is pretty good.
"This is what has driven the performance improvement in speech recognition, in web search and in identifying the most relevant ads," Ng said, noting that Baidu has a large operation in Beijing that has been doing groundbreaking work in learning from tagged data for years.
"Those investments have paid off well over the years and we'll certainly continue to invest in that," he said.
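The "tagged data" approach Ng describes is what practitioners call supervised learning: every example comes with a label, and the software learns a mapping from inputs to labels. A minimal sketch of the idea, using an invented toy dataset and a simple nearest-centroid rule (not any system Baidu actually uses):

```python
# Toy illustration of learning from "tagged data" (supervised learning).
# Each example arrives with a label; the model learns to map features
# to labels. The data and feature values here are invented.

def train_centroids(examples):
    """examples: list of (features, label) pairs.
    Returns a dict mapping each label to the mean feature vector."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest in squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Tagged data: (feature vector, label) pairs, e.g. crude image statistics.
tagged = [([0.9, 0.1], "car"), ([0.8, 0.2], "car"),
          ([0.1, 0.9], "cat"), ([0.2, 0.8], "cat")]
model = train_centroids(tagged)
print(predict(model, [0.85, 0.15]))  # closest to the "car" examples
```

Real systems for speech recognition or ad ranking use far richer models, but the shape of the task is the same: labeled examples in, a predictor out.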
Ng says the focus of the new Sunnyvale center will be more on learning from "untagged data", which is modeled after the way humans actually learn. "Instead of having a parent point out every object to you every moment of the day, most of what you learn is from going out and seeing and experiencing the world for yourself," Ng said.
"It is closer to how we believe humans and animals learn, and so it has more of a potential for larger breakthroughs in AI," he said.
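Learning from "untagged data" is what practitioners call unsupervised learning: no labels are given, and the algorithm finds structure on its own, much as the Google Brain network discovered "cat" without being told. One classic unsupervised method, sketched here with invented data and fixed starting centers (not Ng's actual method), is k-means clustering:

```python
# Toy illustration of learning from "untagged data" (unsupervised learning).
# No labels anywhere: the algorithm groups similar points by itself.
# Minimal k-means sketch; the data and initial centers are invented.

def kmeans(points, centers, iters=10):
    """Cluster `points` around len(centers) centers; returns final centers."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            best = min(range(len(centers)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(p, centers[i])))
            groups[best].append(p)
        # Update step: move each center to the mean of its group.
        centers = [
            [sum(xs) / len(g) for xs in zip(*g)] if g else c
            for g, c in zip(groups, centers)
        ]
    return centers

# Untagged data: two obvious clumps of points, but no labels attached.
points = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
print(kmeans(points, centers=[[0.0, 0.0], [1.0, 1.0]]))
```

The clusters the algorithm recovers were never named by anyone, which is the essential difference from the tagged-data case.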
(China Daily USA 06/09/2014 page1)