The first time Mark Edwards used Aipoly Vision, he cried. Edwards, 56 and legally blind since birth, had signed up as an early tester for the smartphone app that claims to help visually impaired people “see” the world around them. “When it immediately told me what was surrounding me, I was completely overcome with tears of joy,” says Edwards. “That doesn’t happen very often to a middle-aged man.”
Other early users of the app have called it “game changing” and on par with self-driving cars for its potential to transform the lives of blind people. Born out of the Singularity University in California—an institution set up in 2008 at NASA Research Park to produce “exponential technologies to address humanity’s grand challenges”—Aipoly Vision combines recent advances in artificial intelligence with the standard technology found in an iPhone. The neural networks and deep learning algorithms that power it may be complex, but how it works is simple: Users point their phone at any day-to-day object and the app speaks out what it is seeing.
“When we were first coming up with the idea we spoke to 88 blind people and asked what they wanted,” says Marita Cheng, one of the app’s creators. “We found that their biggest frustration was having to ask what things are.” Before Aipoly Vision, a visually impaired person travelling solo had only one option besides asking for help: to rely on apps that use volunteers to identify things via video calls. But that can be time-consuming. “The thing that’s special about our app is that it’s all done in real time,” Cheng says. “Current methods require an Internet connection and can take anywhere up to two minutes. Our system can recognize three objects per second.” Bringing in a third party—like those volunteer apps do—also isn’t ideal for those who want privacy. But with Aipoly, says Cheng, “no one knows what you’re looking at.”
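Aipoly has not published its internals, but the loop Cheng describes—classify camera frames on-device at roughly three per second and speak each new label aloud—can be sketched in outline. Everything below is illustrative: the classifier and speech functions are stubs standing in for a real neural network and a real text-to-speech engine, and none of the names reflect Aipoly’s actual code.

```python
import time

def classify(frame):
    """Stub classifier. In a real app this would run a neural network
    on the device; here a lookup table stands in for inference."""
    labels = {"frame_a": "traffic light", "frame_b": "mailbox", "frame_c": "red can"}
    return labels.get(frame, "unknown object")

def speak(text):
    """Stub text-to-speech. A real app would hand the label to the
    phone's speech synthesizer."""
    print(text)

def recognition_loop(frames, rate_hz=3.0):
    """Classify frames at roughly `rate_hz` per second, announcing a
    label only when it changes so the user isn't told the same thing
    repeatedly. Returns the list of labels that were spoken."""
    interval = 1.0 / rate_hz
    last_label = None
    announced = []
    for frame in frames:
        label = classify(frame)
        if label != last_label:
            speak(label)
            announced.append(label)
            last_label = label
        time.sleep(interval)  # pace the loop; in practice inference time dominates
    return announced

recognition_loop(["frame_a", "frame_a", "frame_b", "frame_c"])
```

Because everything runs locally, no frame ever leaves the phone—which is the privacy property Cheng highlights, and why no network connection is needed.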
The version of the app that Edwards is testing can identify around 1,000 objects and any color. An update expected by the end of the month will recognize around 5,000 objects. In its current form it can also distinguish between brands and describe what’s in an image, whether printed or displayed on a screen. What’s more, users can “teach” the app about objects it doesn’t recognize—benefiting both themselves (later on) and other users.
Text recognition is another struggle for blind people, though this has already been largely solved through an app called KNFB Reader. Developed through a partnership between the National Federation of the Blind (NFB) and Ray Kurzweil (one of the founders of the Singularity University), the KNFB Reader app allows users to listen to an audio readback of printed text. While some have criticized its relatively high cost ($99), the KNFB Reader has proved invaluable in helping blind and visually impaired people carry out tasks like reading menus in restaurants or understanding printed instructions. Aipoly Vision aims to take this one step further. And for free.
Chris Danielsen, a spokesperson for the NFB who has been blind since birth, tells Newsweek that object recognition usually isn’t a problem for blind people if they can touch the object. “Where this technology comes in handy is when you don’t know what color something is or when there are items inside a container,” he says. “It can also be useful when you’re in an unfamiliar area, as it could be used to recognize local landmarks, or in an education setting to describe diagrams in textbooks or images without captions.”
While Aipoly’s AI is not yet sophisticated enough to describe the details of a diagram, it can understand certain images and communicate them to the user. When shown a cartoon of Superman, for example, it recognizes that the figure is indeed the superhero from the planet Krypton, but it couldn’t tell you what exactly Superman is doing. And this isn’t its only limitation. The app touts its ability to differentiate between brand logos, which is very handy when a user is faced with cans and packets of food or drink that feel identical. Unfortunately, when shown a can of Coca-Cola in tests, the app misidentified it as Coors beer, a mistake that could prove dangerous in a real-world setting.
Despite the limitations—and no money spent on advertising or marketing—the app has already been downloaded 35,000 times from Apple’s App Store. The limited funding received through competition grants has instead gone toward improving the technology and developing new functionality, such as facial recognition, which the team hopes to add in future versions of the app.
Something that has come as a surprise to the app’s creators is its popularity in Japan, where there have been over 8,000 downloads since the start of 2016. It turns out that people in Japan—people without visual impairments—are using it to learn English. “It speaks with a proper British accent so people know how to pronounce in English,” says Simon Edwardsson, co-founder of Aipoly. “That’s something we’re going to explore more in the future.”
The app’s creators are also working on ways to give it a more holistic view of the world, so that it can explain not just what an object is but also how it relates to its surroundings. For Edwards, even in its current form Aipoly Vision has become an essential part of his daily life, allowing him to explore the world in a way he never could before. “Walking in town I use it to tell me what’s ahead, be it a traffic light, a sign or a mailbox,” he says. “That’s life changing. I know that sounds grandiose but it really is.”