Java API Depth Perception Tutorial

Configuration

In order to use depth perception, your TangoConfig must have KEY_BOOLEAN_DEPTH set to true. In the default TangoConfig, KEY_BOOLEAN_DEPTH is set to false.

try {
    // Start from the current configuration and enable depth.
    mConfig = mTango.getConfig(TangoConfig.CONFIG_TYPE_CURRENT);
    mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
} catch (TangoErrorException e) {
    // Handle the exception; the config could not be read or modified.
}
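After setting the flag, connect to the service with the modified config so depth callbacks can start flowing. A minimal sketch of that step, assuming mTango has already been created (connect() and both exception types are standard Tango Java API):

try {
    // Connect using the depth-enabled configuration.
    mTango.connect(mConfig);
} catch (TangoOutOfDateException e) {
    // Tango Core on the device is too old for this API version.
} catch (TangoErrorException e) {
    // Connection failed; depth data will not be delivered.
}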

Define the callback

The caller is responsible for allocating memory, which will be released after the callback function has finished.

private void setTangoListeners() {
    final ArrayList<TangoCoordinateFramePair> framePairs = new ArrayList<TangoCoordinateFramePair>();
    framePairs.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE));

    // Listen for new Tango data
    mTango.connectListener(framePairs, new OnTangoUpdateListener() {
        @Override
        public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
            // Buffer sized for xyzCount points * 3 floats per point * 4 bytes per float
            byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
            // Read the packed point data from the parcel file descriptor
            FileInputStream fileStream = new FileInputStream(
                    xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
            try {
                fileStream.read(buffer,
                        xyzIj.xyzParcelFileDescriptorOffset, buffer.length);
                fileStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            // Do not process the buffer inside the callback, because
            // you will not receive any new data while it processes.
        }

        @Override
        public void onPoseAvailable(final TangoPoseData pose) {
            // Process pose data from device with respect to start of service
        }

        @Override
        public void onTangoEvent(final TangoEvent event) {
            // This callback also has to be here
        }
    });
}

Define the onXyzIjAvailable() callback. Do not do any expensive processing on the data within the callback; you will not receive new data until the callback returns.

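One way to honor that rule is to copy the buffer inside the callback and process the copy on a background thread. A minimal sketch, assuming xyzIj.xyz is populated (newer Tango cores fill this FloatBuffer directly; otherwise read via the file descriptor as shown above), an ExecutorService field named mExecutor, and a hypothetical processPointCloud() helper:

@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    // Copy the packed (x, y, z) floats so the callback can return quickly.
    final float[] points = new float[xyzIj.xyzCount * 3];
    xyzIj.xyz.rewind();
    xyzIj.xyz.get(points);

    // Process the copy off the callback thread; processPointCloud()
    // stands in for whatever work the app needs to do.
    mExecutor.execute(new Runnable() {
        @Override
        public void run() {
            processPointCloud(points);
        }
    });
}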


Depth Perception

How it works

Depth Perception gives an application the ability to understand the distance to objects in the real world. To implement Depth Perception, manufacturers of Tango devices can choose among common depth technologies, including Structured Light, Time of Flight, and Stereo. Structured Light and Time of Flight require the use of an infrared (IR) projector and IR sensor; Stereo does not.

Depth Perception can be useful in a number of ways:

  • A game might want to detect when the user is approaching a wall or other object in the area and have that be part of the gameplay.

  • By combining Depth Perception with Motion Tracking, the device can measure distances between points in an area that aren't in the same frame.

  • Depth data can be associated with the color image data to look up the color for a point cloud for texturing or meshing.

Usability tips

Current devices are designed to work best indoors at moderate distances (0.5 to 4 meters). This configuration gives good depth at a distance while balancing power requirements for IR illumination and depth processing. It may not be ideal for close-range object scanning or gesture detection.

For a device that relies on viewing IR light using its camera, there are some situations where accurate Depth Perception is difficult. Areas lit with light sources high in IR like sunlight or incandescent bulbs, or objects that do not reflect IR light, cannot be scanned well.

Point clouds

The Tango APIs provide a function to get depth data in the form of a point cloud. This format gives (x, y, z) coordinates for as many points in the scene as are possible to calculate. Each dimension is a floating point value recording the position of each point in meters in the coordinate frame of the depth-sensing camera.

For more information, see the TangoPointCloud struct API reference.
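To make the packed layout concrete, here is a minimal Java sketch (using the TangoXyzIjData type from the tutorial above; the xyzIj variable and its populated xyz FloatBuffer are assumptions) that finds the distance to the nearest point in a cloud:

// Each point is three consecutive floats (x, y, z), in meters,
// in the coordinate frame of the depth-sensing camera.
FloatBuffer xyz = xyzIj.xyz;
double nearestMeters = Double.MAX_VALUE;
for (int i = 0; i < xyzIj.xyzCount; i++) {
    float x = xyz.get(3 * i);
    float y = xyz.get(3 * i + 1);
    float z = xyz.get(3 * i + 2);
    nearestMeters = Math.min(nearestMeters, Math.sqrt(x * x + y * y + z * z));
}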


TangoXyzIjData Structure

Description:
This data structure holds information about a Tango point cloud, including the timestamp and how many points are contained within the current point cloud.

Note that Blueprint users are restricted in how they may use this data; we suggest either using the "Get Single Point" function to access individual array points, or, if iteration over all depth points is required, using C++.

Fields:

    • Timestamp [Float]: The time, in seconds since the Tango service started, at which this structure was generated.
    • Count [Integer]: The number of points (FVectors) contained within the latest point cloud.

How do I match xyzIj sets to pose data?

I am trying to work with the Google Project Tango tablet. I am currently sending data from my tablet to Epic's Unreal Engine 4 and trying to visualize it properly. However, the orientation of the cloud data consistently fails to line up with the orientation of the Tango's pose data.

Currently I am taking the pose from the same frame as the xyzIj data, but the resulting point clouds never line up quite right. I tried manually rotating and correcting the cloud data so that it matched the pose data, but the rotation that corrects one set of xyzIj data does not correct the remaining sets.

I am currently under the assumption that the poses I am using to match these sets are in error, but I am unsure how to find the correct one. I also find that using getPoseAtTime tends to produce similarly flawed results, but that may have something to do with the fact that I don't quite understand its usage.

Is there a solution to this issue, short of hand-correcting each data set, that I have failed to realize, or does everyone deal with similar issues?
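One thing to check (a sketch under assumptions, not the thread's accepted answer) is whether the pose is queried at the point cloud's own timestamp rather than reused from the latest pose callback. Note also that the points are expressed in the depth camera's frame, so the fixed device-to-depth-camera extrinsics must be composed in as well. Assuming mTango is a connected Tango instance and xyzIj is the cloud in question:

// Ask for the device pose at exactly the depth frame's timestamp,
// instead of reusing whichever pose callback arrived most recently.
TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
        TangoPoseData.COORDINATE_FRAME_DEVICE);
TangoPoseData cloudPose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
if (cloudPose.statusCode == TangoPoseData.POSE_VALID) {
    // Compose cloudPose with the fixed device-to-depth-camera
    // extrinsics, then transform each (x, y, z) point with the result
    // so every cloud lands in the start-of-service frame.
}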

Which part of the xyzIj data is the point cloud?

I want to graphically show the point cloud data to the user, and I'm wondering which part of the xyzIj data holds the actual coordinates of the point cloud. I assume it's XyzIj.xyz, but on the developers' website this field is described as a "packed coordinate triplet". When I print it out, it's just separate numbers; does that mean it prints an x value, then a y value, then a z value, then x again, and so on, or am I understanding it wrong?

mPointCount = xyzIj.xyzCount;
// Is this the coordinate points of the point cloud?
FloatBuffer newPointCloudPoints = xyzIj.xyz;
int i = 0;
// Printing out all the points
while (newPointCloudPoints.get(i) != 0) {
    Log.i(TAG, "point cloud " + i + " " + newPointCloudPoints.get(i));
    i++;
}

Answer:

Each "point" has three entries in the buffer.

The first index of the float buffer is x, second is y, third is z. The sequence repeats for every point.

FloatBuffer xyz = xyzIjData.xyz;
for (int i = 0; i < xyzIjData.xyzCount; i++) {
    // Points are packed as consecutive (x, y, z) triplets.
    float x = xyz.get(3 * i);
    float y = xyz.get(3 * i + 1);
    float z = xyz.get(3 * i + 2);
}

Project Tango: Depthmap Transformation from XYZij data

I'm currently trying to filter the depth information using OpenCV. For that reason I need to transform Project Tango's XYZij depth information into an image-like depth map (like the output of a Microsoft Kinect). Unfortunately, the official APIs lack the ij part of XYZij. That's why I'm trying to project the XYZ part using the camera intrinsics, which is explained in the official C API documentation. My current approach looks like this:

float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);

float k1 = static_cast<float>(ccIntrinsics.distortion[0]);
float k2 = static_cast<float>(ccIntrinsics.distortion[1]);
float k3 = static_cast<float>(ccIntrinsics.distortion[2]);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    // Polynomial radial distortion model from the camera intrinsics
    float ru = sqrt((pow(X, 2) + pow(Y, 2)) / pow(Z, 2));
    float rd = ru + k1 * pow(ru, 3) + k2 * pow(ru, 5) + k3 * pow(ru, 7);

    int x = X / Z * fx * rd / ru + cx;
    int y = Y / Z * fy * rd / ru + cy;

    // Drawing into an OpenCV Mat in red
    depth.at<cv::Vec3b>(x, y)[0] = 240;
}

The resulting depth map can be seen in the lower right corner, but it seems that this calculation results in a linear representation... Has anyone already done something similar? Are the XYZ points already positioned correctly for this projection?

Answer:

I have actually found a solution. I just skipped the distortion calculation, as is done in the rgb-depth-sync-example. My code now looks like this:

float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);

int width = static_cast<int>(ccIntrinsics.width);
int height = static_cast<int>(ccIntrinsics.height);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    // Simple pinhole projection, without distortion correction
    int x = static_cast<int>(fx * (X / Z) + cx);
    int y = static_cast<int>(fy * (Y / Z) + cy);

    // Map depth in millimeters onto an 8-bit gray value (nearer = brighter)
    uint8_t depth_value = UCHAR_MAX - ((Z * 1000) * UCHAR_MAX / 4500);

    cv::Point point(y % height, x % width);
    line(depth, point, point,
         cv::Scalar(depth_value, depth_value, depth_value), 4.5);
}

And the working OpenCV result looks as expected.
