The Jetson Nano Developer Kit is a small AI computer for makers, learners, and developers.



An inference framework (jetson-inference) for deploying models to embedded devices.



Four Steps to Deep Learning

https://github.com/dusty-nv/jetson-inference#system-setup

Hello AI World

Environment Setup

Image Classification

Core class: imageNet (https://github.com/dusty-nv/jetson-inference/blob/master/imageNet.h)

imageNet takes an image as input and outputs a probability for each class.



There are two sample programs in the compiled build/aarch64/bin directory. The walkthrough below uses the my-recognition example, whose full source is:

/*
 * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

// include imageNet header for image recognition
#include <jetson-inference/imageNet.h>

// include loadImage header for loading images
#include <jetson-utils/loadImage.h>


// main entry point
int main( int argc, char** argv )
{
    // a command line argument containing the image filename is expected,
    // so make sure we have at least 2 args (the first arg is the program)
    if( argc < 2 )
    {
        printf("my-recognition: expected image filename as argument\n");
        printf("example usage: ./my-recognition my_image.jpg\n");
        return 0;
    }

    // retrieve the image filename from the array of command line args
    const char* imgFilename = argv[1];

    // these variables will be used to store the image data and dimensions
    // the image data will be stored in shared CPU/GPU memory, so there are
    // pointers for the CPU and GPU (both reference the same physical memory)
    float* imgCPU    = NULL;    // CPU pointer to floating-point RGBA image data
    float* imgCUDA   = NULL;    // GPU pointer to floating-point RGBA image data
    int    imgWidth  = 0;       // width of the image (in pixels)
    int    imgHeight = 0;       // height of the image (in pixels)

    // load the image from disk as float4 RGBA (32 bits per channel, 128 bits per pixel)
    if( !loadImageRGBA(imgFilename, (float4**)&imgCPU, (float4**)&imgCUDA, &imgWidth, &imgHeight) )
    {
        printf("failed to load image '%s'\n", imgFilename);
        return 0;
    }

    // load the GoogleNet image recognition network with TensorRT
    // you can use imageNet::ALEXNET to load AlexNet model instead
    imageNet* net = imageNet::Create(imageNet::GOOGLENET);

    // check to make sure that the network model loaded properly
    if( !net )
    {
        printf("failed to load image recognition network\n");
        return 0;
    }

    // this variable will store the confidence of the classification (between 0 and 1)
    float confidence = 0.0;

    // classify the image with TensorRT on the GPU (hence we use the CUDA pointer)
    // this will return the index of the object class that the image was recognized as (or -1 on error)
    const int classIndex = net->Classify(imgCUDA, imgWidth, imgHeight, &confidence);

    // make sure a valid classification result was returned
    if( classIndex >= 0 )
    {
        // retrieve the name/description of the object class index
        const char* classDescription = net->GetClassDesc(classIndex);

        // print out the classification results
        printf("image is recognized as '%s' (class #%i) with %f%% confidence\n",
               classDescription, classIndex, confidence * 100.0f);
    }
    else
    {
        // if Classify() returned < 0, an error occurred
        printf("failed to classify image\n");
    }

    // free the network's resources before shutting down
    delete net;

    // this is the end of the example!
    return 0;
}

  • Loading the image: loadImageRGBA()

    The loaded image will be stored in shared memory that's mapped to both the CPU and GPU. There are two pointers available for access in the CPU and GPU address spaces, but there is really only one copy of the image in memory. Both the CPU and GPU pointers resolve to the same physical memory, without needing to perform memory copies (i.e. cudaMemcpy()).
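
    To make the zero-copy idea concrete, here is a minimal standalone CUDA sketch of how such CPU/GPU mapped memory can be allocated. This is only an illustration of the concept, not the actual jetson-utils implementation; loadImageRGBA() handles all of this internally.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // enable mapped (zero-copy) pinned allocations; required on some platforms
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const size_t size = 640 * 480 * 4 * sizeof(float);   // float4 RGBA image

    // allocate host memory that is page-locked and mapped into the GPU address space
    float* cpuPtr = NULL;
    if( cudaHostAlloc((void**)&cpuPtr, size, cudaHostAllocMapped) != cudaSuccess )
    {
        printf("cudaHostAlloc failed\n");
        return 1;
    }

    // get the device-side pointer that aliases the same physical memory
    float* gpuPtr = NULL;
    cudaHostGetDevicePointer((void**)&gpuPtr, cpuPtr, 0);

    // cpuPtr and gpuPtr now refer to the same buffer, so no cudaMemcpy() is needed
    printf("CPU pointer %p / GPU pointer %p\n", (void*)cpuPtr, (void*)gpuPtr);

    cudaFreeHost(cpuPtr);
    return 0;
}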

  • Loading the neural network model

    imageNet::Create()

    GOOGLENET is a pre-trained model whose training dataset is ImageNet (the dataset, not the imageNet class). It covers 1000 classes, including animals, plants, common household objects, and so on. A sketch of picking a different built-in model at run time follows the snippet below.

// load the GoogleNet image recognition network with TensorRT
// you can use imageNet::ALEXNET to load AlexNet model instead
imageNet* net = imageNet::Create(imageNet::GOOGLENET);

// check to make sure that the network model loaded properly
if( !net )
{
printf("failed to load image recognition network\n");
return 0;
}
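
    As mentioned in the comment above, AlexNet can be loaded instead of GoogleNet. A minimal sketch of choosing the model from a command-line argument is shown below; it assumes the enum type is named imageNet::NetworkType and that GetNumClasses() is available, as declared in the imageNet.h header linked above.

#include <jetson-inference/imageNet.h>
#include <cstring>
#include <cstdio>

int main( int argc, char** argv )
{
    // pick the built-in model from an optional command-line argument (sketch)
    imageNet::NetworkType type = imageNet::GOOGLENET;

    if( argc > 1 && strcmp(argv[1], "alexnet") == 0 )
        type = imageNet::ALEXNET;

    imageNet* net = imageNet::Create(type);

    if( !net )
    {
        printf("failed to load image recognition network\n");
        return 1;
    }

    printf("loaded network with %u classes\n", net->GetNumClasses());

    delete net;
    return 0;
}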

  • Classifying the image

    Classify() returns the index of the recognized class.

// this variable will store the confidence of the classification (between 0 and 1)
float confidence = 0.0;

// classify the image with TensorRT on the GPU (hence we use the CUDA pointer)
// this will return the index of the object class that the image was recognized as (or -1 on error)
const int classIndex = net->Classify(imgCUDA, imgWidth, imgHeight, &confidence);

  • Interpreting the results

// make sure a valid classification result was returned
if( classIndex >= 0 )
{
// retrieve the name/description of the object class index
const char* classDescription = net->GetClassDesc(classIndex);

// print out the classification results
printf("image is recognized as '%s' (class #%i) with %f%% confidence\n",
classDescription, classIndex, confidence * 100.0f);
}
else
{
// if Classify() returned < 0, an error occurred
printf("failed to classify image\n");
}

These descriptions of the 1000 classes are parsed from ilsvrc12_synset_words.txt when the network gets loaded (this file was previously downloaded when the jetson-inference repo was built).
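
For intuition, a label file in this style can be parsed with a few lines of standard C++. This is only an illustrative sketch (the actual parsing happens inside imageNet when the network loads), and it assumes each line holds a synset ID followed by the human-readable description, e.g. "n01440764 tench, Tinca tinca".

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // illustrative sketch: read class descriptions from a synset-style label file
    std::ifstream file("ilsvrc12_synset_words.txt");
    std::vector<std::string> classDescriptions;
    std::string line;

    while( std::getline(file, line) )
    {
        // drop the leading synset ID (everything up to the first space)
        const size_t space = line.find(' ');
        classDescriptions.push_back(space == std::string::npos ? line : line.substr(space + 1));
    }

    std::cout << "loaded " << classDescriptions.size() << " class descriptions" << std::endl;
    return 0;
}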



  • Exiting

    Release the network's resources before the program exits.
// free the network's resources before shutting down
delete net;

// this is the end of the example!
return 0;
}

CMake file

# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)

# declare my-recognition project
project(my-recognition)

# import jetson-inference and jetson-utils packages.
# note that if you didn't do "sudo make install"
# while building jetson-inference, this will error.
find_package(jetson-utils)
find_package(jetson-inference)

# CUDA and Qt4 are required
find_package(CUDA)
find_package(Qt4)

# setup Qt4 for build
include(${QT_USE_FILE})
add_definitions(${QT_DEFINITIONS})

# compile the my-recognition program
cuda_add_executable(my-recognition my-recognition.cpp)

# link my-recognition to jetson-inference library
target_link_libraries(my-recognition jetson-inference)

Nothing special to note here; the main dependencies are:

  • find_package(jetson-utils)
  • find_package(jetson-inference)
  • target_link_libraries(my-recognition jetson-inference)

Real-time image recognition

The code above recognizes a still image loaded from disk; this section points to the demo that classifies frames captured live from the camera. A rough sketch of its main loop follows the item below.

  • imagenet-camera
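
The shape of imagenet-camera's main loop is roughly the following. This is a hedged sketch: captureFrameRGBA() is a hypothetical stand-in for the camera capture and RGBA conversion step (in the real sample that is handled by the gstCamera class from jetson-utils, whose exact API varies between releases), while the imageNet calls are the same ones used above.

#include <jetson-inference/imageNet.h>
#include <cstdio>

// Hypothetical stand-in for camera capture + RGBA conversion. In the real
// imagenet-camera sample this is handled by jetson-utils' gstCamera class;
// replace this stub with actual capture code.
static bool captureFrameRGBA( float** imgRGBA, int* width, int* height )
{
    return false;   // stub only
}

int main( int argc, char** argv )
{
    // load the recognition network once and reuse it for every frame
    imageNet* net = imageNet::Create(imageNet::GOOGLENET);

    if( !net )
        return 1;

    while( true )   // sketch: loop until capture stops or the user quits
    {
        float* imgRGBA = NULL;
        int    width   = 0;
        int    height  = 0;

        // grab the next camera frame as float RGBA in shared CPU/GPU memory
        if( !captureFrameRGBA(&imgRGBA, &width, &height) )
            break;

        // classify the frame exactly like the still-image example above
        float confidence = 0.0f;
        const int classIndex = net->Classify(imgRGBA, width, height, &confidence);

        if( classIndex >= 0 )
            printf("%.2f%% %s\n", confidence * 100.0f, net->GetClassDesc(classIndex));
    }

    delete net;
    return 0;
}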
