Aman Tiwari, a computer scientist at Carnegie Mellon University, has created an artificial intelligence that predicts a person's income from views of their neighborhood and home. The AI, known as Penny, was trained on median incomes for given neighborhoods paired with satellite imagery of those neighborhoods provided by DigitalGlobe, and learned which visual features are meaningful for predicting residents' incomes. Robbie Gonzalez writes at WIRED:
Penny provides a glimpse at how AI and machine learning make sense of a city. “It’s not for deciding whether to put a hedgerow in your yard, it’s to help us understand how machines make sense of our world,” says Jordan Winkler, the product manager for DigitalGlobe, the company that provided the imagery Penny uses. But he says Penny is mostly about getting people to think about how AI and machine learning actually work—or don’t.
Penny doesn't always behave as expected, and exposing those limitations is part of the project's point. Gonzalez continues:
Often, Penny performs intuitively. Plop a freeway or parking lot onto the Upper East Side and the AI predicts lower median income. Add some brownstones and parks to East New York and suddenly median incomes rise.
But every once in a while, Penny surprises you. Dropping the Plaza Hotel into Harlem makes Penny even more sure that it’s a low-income area. Adding trees doesn’t help, either. Scenarios in which the AI defies intuition highlight both the power and the limitations of any system based on machine learning. “We don’t know whether it knows something that we haven’t noticed, or if it’s just plain wrong,” Tiwari says.
So which is it? Hard to say. “Sometimes an AI does amazing things, or locks onto some very intelligent solution to a problem, but that solution is inscrutable to us, so we don’t understand why it’s behaving in counterintuitive ways,” says Jeff Clune, a University of Wyoming computer scientist who studies the opaque inner workings of neural networks. “But it’s simultaneously true that these networks don’t know as much as we think they know, and they often fail in bizarre or baffling ways—which is to say they make predictions that are wildly inaccurate when it’s obvious they shouldn’t be doing so.”
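The setup described above, imagery paired with census incomes, with the model left to discover which visual features matter, is a standard supervised-regression problem, and the "plop a park onto the map" experiments are counterfactual probes of the trained model. A minimal NumPy sketch of both ideas (the real Penny uses a deep neural network on DigitalGlobe imagery; the features, synthetic data, and function names here are invented purely for illustration):

```python
import numpy as np

def tile_features(tile):
    """Hand-crafted stand-in for learned features: mean R, G, B
    intensities of a satellite tile (H x W x 3, values in [0, 1])."""
    return tile.reshape(-1, 3).mean(axis=0)

rng = np.random.default_rng(0)

# Synthetic training set: 200 tiles with known median incomes.
# Greener tiles are assigned higher incomes here, purely so the
# toy model has a pattern to find.
tiles = rng.random((200, 8, 8, 3))
greenness = tiles[:, :, :, 1].mean(axis=(1, 2))
incomes = 30_000 + 80_000 * greenness + rng.normal(0, 2_000, 200)

# Fit income ~ w . features + b by least squares.
X = np.array([tile_features(t) for t in tiles])
X = np.hstack([X, np.ones((len(X), 1))])  # bias column
w, *_ = np.linalg.lstsq(X, incomes, rcond=None)

def predict_income(tile):
    return float(np.append(tile_features(tile), 1.0) @ w)

# Counterfactual probe, in the spirit of dropping parks into East
# New York: take a tile, "add trees" by raising its green channel,
# and compare predictions.
tile = np.full((8, 8, 3), 0.4)          # drab, built-up block
edited = tile.copy()
edited[:, :, 1] = 0.9                   # the same block, greened
print(predict_income(tile), predict_income(edited))
```

With this toy model the greened tile always scores higher, because the training data was built that way; Penny's counterintuitive answers arise precisely because a real network's learned features are not this legible.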
Read more here.
E.J. Smith