I have a friend who’s blind. She considers herself very fortunate to live in a time when modern technology means that blindness isn’t the utterly debilitating curse it has historically been. She’s a self-described “accessible technology geek,” and occasionally she shares some of the things she finds with me.
She recently told me about an iOS app she found called TapTapSee. The premise sounds very simple: you take a picture, or pick one from the photo library on your phone or tablet, and it identifies what the picture shows. But of course, if you know anything about programming, you know that’s not “a simple task” by any means!
She said it works amazingly well. It can handle simple tasks, like telling her what color her furniture is, as well as more complicated ones. She took a picture of her daughter, and it accurately described both her and the clothes she was wearing. She showed it some food from the fridge, and it told her both the type of food and the brand name, even for generic store-brand food. That’s pretty impressive!
My friend says it takes about a minute to get results back, which tells me the program is probably uploading the picture (or some sort of metadata about what it contains) to a remote server somewhere and waiting for an answer. I have no idea how much of that time is on-the-wire latency, but the server actually doing the heavy lifting is still coming up with an answer pretty quickly.
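To be clear, how TapTapSee actually works isn’t public, so this is pure speculation made concrete: a minimal sketch of the upload-and-wait pattern I’m describing, where the client posts the image bytes and the round-trip time is dominated by server-side analysis plus network latency. The `FakeRecognizer` server, the `/describe` endpoint, and the canned answer are all invented for illustration; a stand-in `time.sleep` plays the part of the recognition work.

```python
import threading
import time
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a remote recognition service: it consumes the
# uploaded image bytes, "analyzes" them for a simulated delay, and returns
# a canned description.
class FakeRecognizer(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        self.rfile.read(length)      # consume the uploaded image bytes
        time.sleep(0.1)              # simulated server-side analysis time
        body = b"a red armchair"     # canned description for the demo
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # keep the demo output quiet
        pass

# Run the fake server on an ephemeral local port in a background thread.
server = HTTPServer(("127.0.0.1", 0), FakeRecognizer)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: upload the "photo" and time the whole round trip.
start = time.monotonic()
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/describe", body=b"...fake image bytes...")
description = conn.getresponse().read().decode()
elapsed = time.monotonic() - start
conn.close()
server.shutdown()

print(description)   # the description the "server" sent back
print(elapsed)       # total time: network overhead plus simulated analysis
```

The point of the sketch is just that, from the phone’s side, almost all of the wait is spent on the other end of the wire, which matches the roughly one-minute turnaround my friend sees.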
Of course, I’m not an accessible technology geek; I’m more of a computer-science-and-programming geek. So I looked at this (no pun intended) from a slightly different perspective: Today, in 2013, a computer program exists that can accurately recognize objects, given input in the form of image data! And that’s not all. Today, in 2013, a program exists that can analyze questions and answers in natural language, and do a better job at it than the most accomplished human experts in its field, as IBM’s Watson did on Jeopardy! A program exists that can do a reasonably good job of pretending it understands spoken commands, and respond to them in an intuitive way.
Programs exist that allow robots, of both humanoid (though still rather small) and more animal-like varieties, to move around and keep their balance. And some companies are starting to build full-scale robots that are remarkably human-like in appearance.
This isn’t science fiction anymore; this is stuff that’s going on today. And so the geek in me wonders, with all these pieces already available… how long until someone starts putting them together and invents Mr. Data? It’s starting to look like it might actually happen in our lifetimes!