Demos on the Jetson TX2 (original)
1. oceanFFT sample (fast Fourier transform ocean simulation)
The CUDA samples directory is copied to the home directory on the device by JetPack. The built binaries are in the following directory:
/home/ubuntu/NVIDIA_CUDA-<version>_Samples/bin/armv7l/linux/release/gnueabihf/
Here, <version> depends on the CUDA version you have installed.
Run the samples from the command line or by double-clicking them in the file browser. For example, when you run the oceanFFT sample, the following screen is displayed.
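The oceanFFT sample animates an ocean height field by running 2D FFTs with cuFFT every frame. The snippet below is only a minimal sketch of the cuFFT calls that kind of simulation relies on; the mesh size N and the spectrum buffer are placeholders of my own, not the sample's real code.

// Minimal cuFFT sketch (assumed mesh size N = 512; not the sample's actual code)
#include <cufft.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const int N = 512;                      // simulation grid size, an assumption
    cufftComplex *d_spectrum;               // frequency-domain ocean spectrum
    cudaMalloc(&d_spectrum, sizeof(cufftComplex) * N * N);

    cufftHandle plan;
    cufftPlan2d(&plan, N, N, CUFFT_C2C);    // 2D complex-to-complex plan

    // An inverse FFT turns the spectrum into a spatial height field, in place
    cufftExecC2C(plan, d_spectrum, d_spectrum, CUFFT_INVERSE);
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_spectrum);
    printf("2D inverse FFT done\n");
    return 0;
}

Build with something like nvcc ocean_fft_sketch.cu -lcufft; the file name is only an example.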

2. Vehicle detection with bounding boxes sample
This runs the Tegra multimedia API backend sample: it decodes an H.264 clip and uses a TensorRT-accelerated GoogleNet model to draw boxes around detected cars.
nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/backend$ ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
  --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
  --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
  --trt-forcefp32 0 --trt-proc-interval 1 -fps 10

3. GEMM (general matrix multiplication) test
nvidia@tegra-ubuntu:/usr/local/cuda/samples/7_CUDALibraries/batchCUBLAS$ ./batchCUBLAS -m1024 -n1024 -k1024
batchCUBLAS Starting...
GPU Device 0: "NVIDIA Tegra X2" with compute capability 6.2
==== Running single kernels ====
Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0xbf800000, -1) beta= (0x40000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.00372291 sec GFLOPS=576.83
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0x0000000000000000, 0) beta= (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.10940003 sec GFLOPS=19.6296
@@@@ dgemm test OK

==== Running N=10 without streams ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0xbf800000, -1) beta= (0x00000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03462315 sec GFLOPS=620.245
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0xbff0000000000000, -1) beta= (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09212208 sec GFLOPS=19.6634
@@@@ dgemm test OK

==== Running N=10 with streams ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0x40000000, 2) beta= (0x40000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03504515 sec GFLOPS=612.776
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0xbff0000000000000, -1) beta= (0x0000000000000000, 0)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09177494 sec GFLOPS=19.6697
@@@@ dgemm test OK

==== Running N=10 batched ====

Testing sgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0x3f800000, 1) beta= (0xbf800000, -1)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 0.03766394 sec GFLOPS=570.17
@@@@ sgemm test OK

Testing dgemm
#### args: ta=0 tb=0 m=1024 n=1024 k=1024 alpha = (0xbff0000000000000, -1) beta= (0x4000000000000000, 2)
#### args: lda=1024 ldb=1024 ldc=1024
^^^^ elapsed = 1.09389901 sec GFLOPS=19.6315
@@@@ dgemm test OK

Test Summary
0 error(s)
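What batchCUBLAS is essentially doing is timing cuBLAS GEMM calls and converting the elapsed time into GFLOPS (2*m*n*k floating-point operations per multiplication). Below is a minimal sketch of that idea with a single cublasSgemm call; the matrix sizes match the run above, but the timing setup is my own, not the sample's code.

// Minimal sketch: time one SGEMM and report GFLOPS (not batchCUBLAS itself)
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const int m = 1024, n = 1024, k = 1024;   // same sizes as the test above
    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(float) * m * k);
    cudaMalloc(&dB, sizeof(float) * k * n);
    cudaMalloc(&dC, sizeof(float) * m * n);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, dA, m, dB, k, &beta, dC, m);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gflops = 2.0 * m * n * k / (ms * 1e-3) / 1e9;  // 2*m*n*k flops per GEMM
    printf("elapsed = %f s, GFLOPS = %.2f\n", ms * 1e-3, gflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

Build with nvcc gemm_sketch.cu -lcublas; the file name is only an example.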
4. Memory bandwidth test
nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/bandwidthTest$ ./bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: NVIDIA Tegra X2
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 20215.8
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 20182.2
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 35742.8
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
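bandwidthTest gets these numbers by copying a large buffer between pinned host memory and device memory and dividing the bytes moved by the elapsed time. The sketch below shows only the host-to-device direction, with the 32 MB transfer size taken from the output above; it is my own simplified code, not the sample's.

// Minimal sketch: host-to-device bandwidth with pinned memory
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t bytes = 33554432;            // 32 MB, as in the output above
    void *h_buf, *d_buf;
    cudaMallocHost(&h_buf, bytes);            // pinned (page-locked) host memory
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host to Device: %.1f MB/s\n", (bytes / (1024.0 * 1024.0)) / (ms * 1e-3));

    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}

The real sample averages many iterations, so a single copy like this will give a noisier number.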
5. Device query
nvidia@tegra-ubuntu:~/work/TensorRT/tmp/usr/src/tensorrt$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ls
deviceQuery deviceQuery.cpp deviceQuery.o Makefile NsightEclipse.xml readme.txt
nvidia@tegra-ubuntu:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA Tegra X2"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.2
Total amount of global memory: 7851 MBytes (8232062976 bytes)
( 2) Multiprocessors, (128) CUDA Cores/MP: 256 CUDA Cores
GPU Max Clock rate: 1301 MHz (1.30 GHz)
Memory Clock rate: 1600 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 524288 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: Yes
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = NVIDIA Tegra X2
Result = PASS
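Most of what deviceQuery prints comes from a single call to cudaGetDeviceProperties. If you only need a few of these fields in your own program, a minimal sketch looks like this (the fields printed are my own selection):

// Minimal sketch: query a few of the fields deviceQuery prints
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Detected %d CUDA capable device(s)\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: \"%s\"\n", dev, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors: %d\n", prop.multiProcessorCount);
        printf("  Global memory: %zu bytes\n", prop.totalGlobalMem);
        printf("  Max threads per block: %d\n", prop.maxThreadsPerBlock);
    }
    return 0;
}

Build with nvcc device_props.cu -o device_props; the file name is only an example.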
6. Testing larger projects
For details, see https://developer.nvidia.com/embedded/jetpack
There are additional sample projects available there.