Thursday, February 20, 2014
Google Announces 'Project Tango' 3D Smartphone Platform
Google on Thursday announced a new research project aimed at bringing 3D technology to smartphones, for potential
applications such as indoor mapping and gaming.
According to the Project Tango team, the goal is to give mobile
devices a human-scale understanding of space and motion.
"We are physical beings that live in a 3D world. Yet our
mobile devices assume that the physical world ends at the
boundaries of the screen," said project leader Johnny Lee.
"What if you could capture the dimensions of your home simply
by walking around with your phone before you went furniture
shopping?" Google said on its Project Tango web page.
"What if directions to a new location didn't stop at the
street address? What if you never again found yourself lost in
a new building? What if the visually impaired could navigate
unassisted in unfamiliar indoor places? What if you could
search for a product and see where the exact shelf is located
in a super-store?"
Over the past year, the Project Tango team has been working
with universities, research labs, and industrial partners
spanning nine countries around the world to harvest research
from the last decade of work in robotics and computer vision,
concentrating that technology into a unique mobile phone.
Now, Google is ready to put early prototypes into the hands of
developers and let them write new applications.
Google's first prototype is a 5-inch phone containing customized
hardware and software designed to track the full 3D motion of
the device while simultaneously creating a map of the
environment. Its sensors allow the phone to make over a
quarter million 3D measurements every second, updating its
position and orientation in real time and combining that data
into a single 3D model of the space around you.
It runs Android and includes development APIs that provide
position, orientation, and depth data to standard Android
applications written in Java or C/C++, as well as to the Unity
Game Engine.
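To make the idea concrete: the announcement describes depth measurements being combined with real-time position and orientation into a single 3D model. Project Tango's actual APIs were not public at the time, so the following is purely an illustrative toy sketch (all names are hypothetical, not Tango's): each point measured in the device's own frame is rotated by the device's heading and translated by its position, so successive measurements land in one shared world frame.

```java
// Illustrative sketch only -- not the Project Tango API, whose details
// were not public at the time of this announcement. It shows the core
// idea of pose fusion: depth points measured relative to the device are
// mapped into a fixed world frame using the device's tracked pose.
public class PoseFusionSketch {
    // Transform a point from the device frame into the world frame,
    // given the device's world position (x, y, z) and heading (yaw,
    // in radians, about the vertical z axis).
    static double[] toWorld(double[] devicePoint,
                            double[] devicePos, double yaw) {
        double c = Math.cos(yaw), s = Math.sin(yaw);
        // Rotate about the vertical axis, then translate by position.
        double wx = c * devicePoint[0] - s * devicePoint[1] + devicePos[0];
        double wy = s * devicePoint[0] + c * devicePoint[1] + devicePos[1];
        double wz = devicePoint[2] + devicePos[2];
        return new double[] { wx, wy, wz };
    }

    public static void main(String[] args) {
        // A point seen 2 m straight ahead by a device that has walked
        // 1 m along x and turned 90 degrees left: the measurement still
        // maps to a single consistent world location.
        double[] p = toWorld(new double[] { 2, 0, 0 },
                             new double[] { 1, 0, 0 }, Math.PI / 2);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

A real system would track full 3D rotation and fuse many such measurements per second into one model; this sketch only shows the single-measurement transform that makes that accumulation possible.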
The technology could be used for "playing hide-and-seek in
your house with your favorite game character, or transforming
the hallways into a tree-lined path."
Partners in the project include California-based Movidius,
which makes vision-processor technology for mobile and
portable devices and will provide the processor platform.
Movidius said in a statement the goal was "to mirror human
vision with a newfound level of depth, clarity and realism on
mobile and portable connected devices."
"Google has paved the future direction for smart mobile vision
systems and we're excited to be working with a company that
shares our vision to usher in the next wave of applications
that fundamentally alter how a mobile device is used to
experience the world around us," said Remi El-Ouazzane, chief
executive of Movidius.