How Google is teaching computers to see is an article from Gigaom about Google’s effort to get computers to “see”:
Google is attempting to teach computers to recognize human faces without telling the computing algorithms which faces are human.
It’s using zillions of still images from Google to have computers learn through categorizing what they “see.” Here’s an excerpt from a Google research paper on what they’re doing, followed by an observation about it in the Gigaom article:
this would suggest that it is at least in principle possible that a baby learns to group faces into one class because it has seen many of them and not because it is guided by supervision or rewards.
Understanding the origins of language and how people learn to classify objects is something people are still trying to work out, so Google may be onto something…
It appears that Google is using “inductive learning” in this process, an approach I use extensively in my classes and have written about a lot in my books. Teaching “inductively” generally means providing students with a number of examples from which they can identify a pattern and form a concept or rule. Teaching “deductively” means first providing the rule or concept and then having students practice applying it.
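For readers curious what the machine-learning version of this idea looks like, here is my own simplified sketch (a one-dimensional k-means clustering, not Google’s actual method): the program is given unlabeled examples and groups them by similarity on its own, with no rule or category labels supplied up front — just as inductive teaching gives students examples before any rule.

```python
def kmeans_1d(values, k, iterations=20):
    """Group unlabeled numbers into k clusters (a minimal k-means sketch)."""
    # Start with k spread-out guesses for the cluster centers.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iterations):
        # Assign each example to its nearest center...
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # ...then move each center to the average of its group.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Unlabeled "examples": two natural groups the algorithm is never told about.
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
groups = kmeans_1d(data, k=2)
print(sorted(len(g) for g in groups))  # two groups of three emerge on their own
```

The point of the sketch is only the analogy: no one tells the program “these are group A and those are group B”; the categories emerge from the examples themselves.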
In a recent article I wrote for ASCD Educational Leadership, you can read about one example, including how I use a “data set” (there’s an example of one in that article, too).
Google has also shown in a video how Google Translate uses inductive learning; see The Best Sites For Learning About Google Translate.
This kind of learning has also received a great deal of support from researchers.
(Coincidentally, minutes after I published this post I found another new article about inductive learning and language.)