Java API Depth Perception Tutorial

Configuration

In order to use depth perception, your TangoConfig must have KEY_BOOLEAN_DEPTH set to true. In the default TangoConfig, KEY_BOOLEAN_DEPTH is set to false.


 
try {
    // Start from the current service configuration and enable depth.
    mConfig = mTango.getConfig(TangoConfig.CONFIG_TYPE_CURRENT);
    mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
} catch (TangoErrorException e) {
    // handle exception
}

Define the callback

The caller is responsible for allocating memory, which will be released after the callback function has finished.


 
private void setTangoListeners() {
    final ArrayList<TangoCoordinateFramePair> framePairs =
            new ArrayList<TangoCoordinateFramePair>();
    framePairs.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE));
    // Listen for new Tango data
    mTango.connectListener(framePairs, new OnTangoUpdateListener() {
        @Override
        public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
            // 3 floats per point, 4 bytes per float
            byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
            // Read the depth data from the shared parcel file descriptor
            FileInputStream fileStream = new FileInputStream(
                    xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
            try {
                fileStream.read(buffer,
                        xyzIj.xyzParcelFileDescriptorOffset, buffer.length);
                fileStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            // Do not process the buffer inside the callback because
            // you will not receive any new data while it processes
        }

        @Override
        public void onPoseAvailable(final TangoPoseData pose) {
            // Process pose data from device with respect to start of service
        }

        @Override
        public void onTangoEvent(final TangoEvent event) {
            // This callback also has to be here
        }
    });
}

Define the onXYZijAvailable() callback. Do not do any expensive processing on the data within the callback; you will not receive new data until the callback returns.
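A common way to follow this advice, shown here only as a minimal sketch (the processPointCloud helper is hypothetical, not part of the Tango API), is to copy the data inside the callback and hand the copy to another thread:

@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    // Copy the packed floats out of the callback's buffer first;
    // the shared data is only guaranteed valid until the callback returns.
    final FloatBuffer pointsCopy = FloatBuffer.allocate(xyzIj.xyzCount * 3);
    pointsCopy.put(xyzIj.xyz);
    pointsCopy.rewind();

    // Return from the callback quickly; process on a background thread.
    new Thread(new Runnable() {
        @Override
        public void run() {
            processPointCloud(pointsCopy); // hypothetical helper
        }
    }).start();
}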



Depth Perception

How it works

Depth Perception gives an application the ability to understand the distance to objects in the real world. To implement Depth Perception, manufacturers of Tango devices can choose among common depth technologies, including Structured Light, Time of Flight, and Stereo. Structured Light and Time of Flight require the use of an infrared (IR) projector and IR sensor; Stereo does not.

Depth Perception can be useful in a number of ways:

  • A game might want to detect when the user is approaching a wall or other object in the area and have that be part of the gameplay.

  • By combining Depth Perception with Motion Tracking, the device can measure distances between points in an area that aren't in the same frame (see the sketch after this list).

  • Depth data can be associated with the color image data to look up the color for a point cloud for texturing or meshing.
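As a rough illustration of the second bullet, the sketch below assumes both points have already been transformed into a common base frame such as START_OF_SERVICE (using the pose for each point cloud's timestamp); once that is done, the measurement reduces to a Euclidean distance. The helper is hypothetical.

// Hypothetical helper: pointA and pointB must already be expressed in
// the same base frame (e.g. START_OF_SERVICE) before measuring.
static float distanceBetween(float[] pointA, float[] pointB) {
    float dx = pointA[0] - pointB[0];
    float dy = pointA[1] - pointB[1];
    float dz = pointA[2] - pointB[2];
    return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
}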

Usability tips

Current devices are designed to work best indoors at moderate distances (0.5 to 4 meters). This configuration gives good depth at a distance while balancing power requirements for IR illumination and depth processing. It may not be ideal for close-range object scanning or gesture detection.

For a device that relies on viewing IR light using its camera, there are some situations where accurate Depth Perception is difficult. Areas lit with light sources high in IR like sunlight or incandescent bulbs, or objects that do not reflect IR light, cannot be scanned well.

Point clouds

The Tango APIs provide a function to get depth data in the form of a point cloud. This format gives (x, y, z) coordinates for as many points in the scene as are possible to calculate. Each dimension is a floating point value recording the position of each point in meters in the coordinate frame of the depth-sensing camera.
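As a minimal sketch of how these packed triplets are consumed in the Java API (assuming a TangoXyzIjData instance named xyzIj; all other names are illustrative), the loop below finds the distance to the nearest point:

FloatBuffer xyz = xyzIj.xyz;  // packed as x0, y0, z0, x1, y1, z1, ...
float nearest = Float.MAX_VALUE;
for (int i = 0; i < xyzIj.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    float z = xyz.get(i + 2);
    // Distance from the depth camera origin, in meters
    float d = (float) Math.sqrt(x * x + y * y + z * z);
    nearest = Math.min(nearest, d);
}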

For more information, see the TangoPointCloud struct API reference.


TangoXyzIjData Structure

Description:
This data structure holds information about a Tango point cloud, including the timestamp and how many points are contained within the current point cloud.

Note that Blueprint users are restricted in how they may use this data; we suggest either using the "Get Single Point" function to access individual array points or, if iteration over all depth points is required, using C++.

Fields:

    • Timestamp [Float]: The time, in seconds since the Tango service started, at which this structure was generated.
    • Count [Integer]: The number of points (FVectors) contained within the latest point cloud.
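These two Blueprint fields correspond to the timestamp and xyzCount members of TangoXyzIjData in the Java API; a minimal logging sketch (the tag string is arbitrary):

@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    // timestamp: seconds since the Tango service started
    // xyzCount:  number of (x, y, z) points in this cloud
    Log.d("Depth", "t=" + xyzIj.timestamp + " points=" + xyzIj.xyzCount);
}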

How do I match xyzIj sets to pose data?

I am trying to work with the Google Project Tango tablet. I am currently sending data from my tablet to Epic's Unreal Engine 4 and trying to visualize it properly. However, the orientation of the cloud data consistently fails to line up with the orientation of the Tango's pose data.

Currently I am taking the pose from the same frame as the xyzIj data, but the resulting point clouds never line up quite right. I tried manually rotating and correcting the cloud data so that it matched the pose data, but the rotation that fixes one set of xyzIj data does not correct the remaining sets.

I am currently under the assumption that the poses I am using to match these sets are in error, but I am unsure how to find the correct one. I also find that using getPoseAtTime tends to produce similarly flawed results, but that may have something to do with the fact that I don't quite understand its usage.

Is there a solution to this issue, outside of hand-correcting each data set, that I have failed to realize, or does everyone deal with similar issues?
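One commonly suggested direction, sketched below on the assumption that the Java API is in use (mTango and xyzIj stand in for the caller's Tango instance and the current point cloud), is to query getPoseAtTime() with the point cloud's own timestamp instead of reusing the most recent onPoseAvailable() pose. Note that the fixed device-to-depth-camera extrinsics still have to be applied on top of this pose for the clouds to line up exactly.

// Fetch the pose that corresponds to this exact point cloud.
TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
        TangoPoseData.COORDINATE_FRAME_DEVICE);
TangoPoseData cloudPose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
// cloudPose.translation and cloudPose.rotation describe the device at
// the moment the depth data was captured.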

Which part of the xyzIj data is the point cloud?

I want to graphically show the point cloud data to the user, and I'm wondering which part of the xyzIj data holds the actual coordinates of the point cloud. I assume it's XyzIj.xyz, but on the developers' website this field is described as a "packed coordinate triplet". When I print it out, it's just a sequence of separate numbers; does that mean it prints an x value, a y value, a z value, then the next x value, and so on, or am I understanding it wrong?

mPointCount = xyzIj.xyzCount;
// The packed coordinates of the point cloud
FloatBuffer newPointCloudPoints = xyzIj.xyz;
// Printing out all the values (3 floats per point, so xyzCount * 3 entries;
// don't stop at a zero value, since 0.0f is a valid coordinate)
for (int i = 0; i < xyzIj.xyzCount * 3; i++) {
    Log.i(TAG, "point cloud " + i + " " + newPointCloudPoints.get(i));
}

Answer:

Each "point" has three entries in the buffer.

The first index of the float buffer is x, second is y, third is z. The sequence repeats for every point.

FloatBuffer xyz = xyzIjData.xyz;
// xyzCount counts points; the buffer holds 3 floats per point,
// so iterate over xyzCount * 3 entries in steps of 3
for (int i = 0; i < xyzIjData.xyzCount * 3; i += 3) {
    float x = xyz.get(i);
    float y = xyz.get(i + 1);
    float z = xyz.get(i + 2);
}

Project Tango: Depthmap Transformation from XYZij data

I'm currently trying to filter the depth information using OpenCV. For that reason I need to transform Project Tango's XYZij depth information into an image-like depth map (like the output of the Microsoft Kinect). Unfortunately the official APIs lack the ij part of XYZij. That's why I'm trying to project the XYZ part using the camera intrinsics projection, which is explained in the official C API documentation. My current approach looks like this:

float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);

float k1 = static_cast<float>(ccIntrinsics.distortion[0]);
float k2 = static_cast<float>(ccIntrinsics.distortion[1]);
float k3 = static_cast<float>(ccIntrinsics.distortion[2]);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    float ru = sqrt((pow(X, 2) + pow(Y, 2)) / pow(Z, 2));
    float rd = ru + k1 * pow(ru, 3) + k2 * pow(ru, 5) + k3 * pow(ru, 7);

    int x = X / Z * fx * rd / ru + cx;
    int y = Y / Z * fy * rd / ru + cy;  // note: Y / Z here, not X / Z

    // drawing into OpenCV Mat in red
    depth.at<cv::Vec3b>(x, y)[0] = 240;
}

The resulting depth map (shown as an image in the original post) seems to come out as a linear representation... Has anyone already done something similar? Are the XYZ points already correctly positioned for this projection?

Answer:

I have actually found a solution: just skip the distortion calculation, as is done in the rgb-depth-sync-example. My code now looks like this:

float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);

int width = static_cast<int>(ccIntrinsics.width);
int height = static_cast<int>(ccIntrinsics.height);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    // xyz is an array of float[3] per point, so index by k directly
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    int x = static_cast<int>(fx * (X / Z) + cx);
    int y = static_cast<int>(fy * (Y / Z) + cy);

    // Map depth in millimeters (4.5 m range) onto an 8-bit gray value
    uint8_t depth_value = UCHAR_MAX - ((Z * 1000) * UCHAR_MAX / 4500);

    cv::Point point(y % height, x % width);
    line(depth, point, point,
         cv::Scalar(depth_value, depth_value, depth_value), 4.5);
}

And the working OpenCV result (shown as an image in the original post) looks as expected.
