Google Finances New Technology for Blind People


Ever wondered what being blind feels like? As children we used to pretend we were blind, trying to make our way around the house without opening our eyes to see if we could reach the next room without peeping, feeling our way along the furniture and doorways. After a while we would take the game outdoors, making our way around the lawn and driveway with eyes shut. Fear would eventually sink in (life outdoors seemed more dangerous than bumping into furniture) and we would open our eyes, defeated, but happy that the world returned to normal with the lifting of our eyelids.

Aside from games, I once temporarily lost my sight after ingesting the wrong medication. For days all I could see were mirages and shadows swimming before my eyes, and I thought I would never see again. Eventually my eyes recovered and my world turned back to normal. For many people, of course, it is simply not that easy; unless you're Jesus, curing blindness is no small feat.

Computer and mobile technology is beginning to change all that, and perhaps sooner rather than later, helping the blind will no longer be a matter for the divine alone. The latest computer vision and mobile technology could help blind people ‘see’. A team of computer scientists at the Lincoln Centre for Autonomous Systems at the University of Lincoln (UK) is developing adaptive mobile technology that could enable blind and visually impaired people to ‘see’ through their smartphones or tablets. The colour and depth sensors inside these devices will be used for 3D mapping and localisation, navigation and object recognition. Google, which recently launched Project Tango, is funding the computer vision and machine learning specialists through a Google Faculty Research Award; they aim to embed the smart vision system into mobile and computer devices. These devices would help people with eyesight problems find their way through unfamiliar territory using vibrations, sounds or the spoken word.
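For readers curious how such a sense-and-feedback loop might look in practice, here is a minimal sketch in Python. Everything in it is hypothetical: the Obstacle record, the one-metre threshold and the cue format are invented for illustration and do not represent the Lincoln team's or Project Tango's actual software.

from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    label: str          # what was recognised, e.g. "door", "chair"
    distance_m: float   # estimated distance from the depth sensor, in metres

def feedback(obstacles: List[Obstacle]) -> List[str]:
    """Turn one frame's detections into cues, nearest obstacle first."""
    cues = []
    for ob in sorted(obstacles, key=lambda o: o.distance_m):
        if ob.distance_m < 1.0:
            cues.append(f"VIBRATE: {ob.label} very close")     # urgent, tactile cue
        else:
            cues.append(f"SAY: {ob.label}, about {ob.distance_m:.0f} metres ahead")
    return cues

if __name__ == "__main__":
    # Pretend the colour/depth detector returned these two objects.
    detections = [Obstacle("chair", 0.8), Obstacle("door", 3.2)]
    for cue in feedback(detections):
        print(cue)

The point of the sketch is the division of labour the article describes: recognition happens on the sensor data, and the user only receives the result as vibration or speech.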

“This project will build on our previous research to create an interface that can be used to help people with visual impairments. There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are reusability and acceptability. If people were able to use technology embedded in devices such as Smart-phones, it would not require them to wear extra equipment which could make them feel self-conscious,” said Dr Nicola Bellotto of Lincoln’s School of Computer Science.

Dr Bellotto, an expert in machine perception and human-centred robotics, leads the project and its research team. She is joined by Dr Oscar Martinez Mozos, a specialist in machine learning and quality-of-life technology, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception. “There are also existing smart-phone apps that are able to, for example, recognize an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognize particular features of their environment.”

How will this new technology work for those who are blind or visually impaired? As the user moves around a room, the device's camera and its colour and depth sensors will capture the environment, detecting and recognising objects. It will also pick up other visual cues and data, such as where the doors, windows, tables and chairs are located. From these, the device will identify the type of room the user is in, memorising the person's progress (and their ability to use and navigate with the device) and storing the exact positions of objects within the indoor environment.
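The 'memorising a room' idea can be pictured as a simple store of rooms, objects and positions that is recalled on each visit. The sketch below is an invented illustration of that bookkeeping, not the project's real data structure; the RoomMemory class, its coordinates and the example rooms are all assumptions.

from collections import defaultdict

class RoomMemory:
    """Keeps per-room records of recognised objects and counts visits."""

    def __init__(self):
        self.rooms = defaultdict(dict)   # room name -> {object label: (x, y) position}
        self.visits = defaultdict(int)   # room name -> how often the user has been there

    def observe(self, room: str, label: str, position: tuple):
        """Record (or update) where an object sits within a room."""
        self.rooms[room][label] = position

    def enter(self, room: str) -> dict:
        """On entering a room, count the visit and return the known landmarks."""
        self.visits[room] += 1
        return self.rooms[room]

memory = RoomMemory()
memory.observe("kitchen", "table", (2.0, 1.5))
memory.observe("kitchen", "door", (0.0, 3.0))
print(memory.enter("kitchen"))  # {'table': (2.0, 1.5), 'door': (0.0, 3.0)}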

These new devices will have the kind of intelligence some robots already have. Such robots store memories (in the form of data) and apply them to their environment and circumstances, much as a human does, every time they re-enter the same environment. A person returning to a familiar place remembers what to enjoy, use, avoid or be careful of; a robot, likewise, behaves differently from the time before, adapting as it learns more about its surroundings and their history.

The new technology will work similarly: as both the person and the device find their way around a room day after day, the device may give less information or fewer warnings than before. In short, the device will adapt to the user's ability to navigate a given environment and adjust the amount of information it provides to match the user's needs. It might become the blind person's new best friend and, as Dr Bellotto pointed out earlier, could make life easier, sparing the user aids such as walking sticks and guide dogs, which can make a person feel self-conscious, while still preventing them from bumping into the furniture.
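How might a device decide to say less as a room becomes familiar? One plausible illustration, with entirely invented visit thresholds and a crude "in the way" test, is sketched below; the article does not describe the real system's adaptation logic in this detail.

def verbosity(visits: int) -> str:
    """Map how often a room has been visited to a feedback level (thresholds invented)."""
    if visits < 3:
        return "full"      # new room: describe every recognised object
    if visits < 10:
        return "warnings"  # somewhat familiar: only obstacles in the user's path
    return "minimal"       # very familiar: only new or moved objects

def announce(objects: dict, visits: int) -> list:
    """Return the spoken cues appropriate to the user's familiarity with the room."""
    level = verbosity(visits)
    if level == "full":
        return [f"{label} at {pos}" for label, pos in objects.items()]
    if level == "warnings":
        return [f"careful: {label}" for label, pos in objects.items()
                if pos[0] < 1.0]   # crude stand-in for "directly ahead"
    return []  # familiar room: stay quiet unless something changes

known = {"table": (0.8, 1.5), "door": (0.0, 3.0)}
print(announce(known, visits=1))   # new room: describe everything
print(announce(known, visits=12))  # familiar room: say nothing

The design choice the sketch captures is the one the article attributes to the project: the device and the user learn an environment together, so the stream of feedback shrinks as the user's own memory takes over.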

