Google to Shut Down Project Tango in Favor of ARCore
by Anton Shilov on December 18, 2017 8:00 AM EST
Google announced on Friday that it would cease support for its Tango computer vision and augmented reality initiative on March 1, 2018. The company urges Tango developers to migrate to the more broadly available ARCore framework, which does not need specialized hardware and can therefore run on mainstream smartphones.
Google kicked off Project Tango in early 2014 with the goal of giving mobile devices human-like computer vision, primarily for the purposes of augmented reality. Tango uses custom hardware (an RGB camera, a motion tracking camera, IR depth sensors, accelerometers, gyroscopes, etc.) to capture the device's surroundings, and then fairly powerful compute hardware (such as the Movidius Myriad 1 vision processor plus additional microcontrollers for sensor hub and timestamp functionality) to process this data and determine the device's position within a room. Given Tango's hardware requirements and their cost, the initiative was slow to gain traction outside of Google. Early last year Intel built a prototype of a Tango-supporting smartphone featuring its RealSense camera and an Atom SoC, and Qualcomm then demonstrated its Snapdragon-based Tango-supporting concept. Eventually, ASUS and Lenovo released Tango-supporting smartphones for consumers, but this is as far as Tango ever got.
By contrast, ARCore does not need specialized hardware to support a significant part of Tango's functionality (at least when it comes to consumer applications) and promises to work on ordinary, reasonably powerful Android-based smartphones. It therefore makes sense for both AR software developers and Google to focus on ARCore, because it promises to be available to hundreds of millions of users worldwide.
Google says that it has taken everything it learned from Project Tango to build ARCore, so the effort was certainly not wasted. Meanwhile, Tango's purported human-scale understanding of space and motion could be applied to Google's autonomous vehicle platforms, which already use custom hardware and for which the cost of sensors and SoCs is generally not a problem. Moreover, the Visual Positioning System of standalone Daydream AR/VR headsets reportedly has its roots in Tango. So while Project Tango is dead, its elements are going to live on here and there.
- Project Tango Demoed with Qualcomm at SIGGRAPH 2016
- Intel and Google Equip Smartphones with 3D Cameras and Computer Vision
- Google and Qualcomm Partner To Make A Project Tango Smartphone
- Google Announces Project Tango Tablet Dev Kit with Tegra K1 and 3D Capture/Tracking
Sources: Google (via AndroidPolice)
Manch - Tuesday, December 19, 2017 - link
Regular cameras don't work the same as our eyes. And as the op said, standard vision cannot FULLY substitute depth info, which he is very correct about. The tech was pricey, but no more than a lot of other features added to phones, and it would have eventually come down. I think it sucks they're abandoning it, because using regular dual cameras just doesn't work as well, so the apps won't work as well, which makes nobody want it.
mode_13h - Tuesday, December 19, 2017 - link
Tango devices used BOTH stereo cameras AND an active depth sensor (either structured light or time-of-flight). The reason being that accurate stereo requires a good amount of texture and illumination, while active depth sensors are limited by range & ambient lighting.
As someone who's written Tango apps and has two iterations of Tango hardware, I assure you that even two-sensor smart phones aren't a satisfactory replacement for those cases where you really care about scene geometry.
Single-camera SLAM systems work adequately for a subset of use cases, such as dealing with non-moving objects, finding the ground plane, and coarse obstacle detection. While that's good enough for a significant subset of AR apps, there's still a significant number of INTERESTING things you can do with the addition of a depth sensor and a second camera.
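To make the "finding the ground plane" case concrete, here is a minimal editorial sketch (not Tango's or ARCore's actual API) of how a ground plane can be fit to a depth-derived point cloud with RANSAC; the function name and parameters are illustrative:

```python
import numpy as np

def fit_ground_plane(points, iters=200, tol=0.02, seed=0):
    """RANSAC plane fit over an (N, 3) point cloud.

    Returns (n, d) for the plane n.x + d = 0 that has the most
    points within `tol` meters of it.
    """
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, None, -1
    for _ in range(iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # collinear sample, no unique plane
            continue
        n = n / norm
        d = -n @ p0
        # Count inliers: points within tol of the candidate plane.
        count = int(np.count_nonzero(np.abs(points @ n + d) < tol))
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```

Real systems refine the RANSAC estimate with a least-squares fit over the inliers and track the plane frame-to-frame, but the core idea is the same.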
nerd1 - Wednesday, December 20, 2017 - link
You need well-calibrated and synchronized cameras, enough features, and a long baseline to get any usable depth data. You can get some depth data from a monocular camera + IMU, but that method requires a lot of processing and is much noisier than a simple RGBD camera.
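The baseline point above can be quantified with the standard rectified-stereo relation (an editorial sketch, not from the thread; the numbers are illustrative):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, disparity_err_px=1.0):
    """First-order depth uncertainty: dZ ~ (Z**2 / (f * B)) * dd.

    Error grows with the square of distance and shrinks with baseline,
    which is why a phone's short baseline gives noisy far-field depth.
    """
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

# Phone-like geometry: f = 1000 px, baseline = 1 cm.
z = stereo_depth(1000.0, 0.01, 5.0)   # 2.0 m at 5 px of disparity
e = depth_error(1000.0, 0.01, z)      # ~0.4 m per pixel of disparity noise
```

With a mere 1 cm baseline, a single pixel of disparity noise at 2 m already swamps centimeter-scale geometry, which is consistent with the complaint about phone-sized stereo rigs.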
mode_13h - Wednesday, December 20, 2017 - link
It's funny how you talk about "RGBD" cameras like depth is just another wavelength. I'm basically with you, but every non-stereo depth sensing technology has its own share of limitations.
Depth and stereo are complementary. And rather than simple stereo, what I'd really like is a plenoptic camera.
BillBear - Tuesday, December 19, 2017 - link
Apple got more developer buy-in before ARKit came out of beta than Google got long after Tango's release, because Apple's version of the tech was going to end up in the hands of users.
Of course Google followed suit. They were getting bad press and are dedicated to the fast follower strategy.
Dug - Tuesday, December 19, 2017 - link
Google shutting down a project? How many companies have they bought, and then just let all development drop? Not saying this shouldn't be dropped, but they just keep doing this and ruining things for other people.