[Video Development] Hardware decoding interfaces supported by FFmpeg
To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.
To test decoding, use the following command:
ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark
Note: after enabling DXVA 2.0 hardware decoding in VLC, CPU usage drops noticeably.
An example of FFmpeg-based DXVA H.264 hardware decoding (essentially extracted from the VLC source code), which however does not seem to have any effect:
http://download.csdn.NET/download/xin_hua_3/7324839
Checking the GPU load with GPU-Z confirms that the GPU is indeed idle.
===============================
FFmpeg provides a subsystem for hardware acceleration.
Hardware acceleration allows the use of specific devices (usually a graphics card or another dedicated device) to perform multimedia processing, so that demanding computation runs on dedicated hardware while the CPU is freed from it. Typically, hardware acceleration enables specific hardware devices (usually the GPU) to perform operations related to decoding and encoding video streams, or to filtering video.
When using the ffmpeg command-line tool, HW-assisted decoding is enabled through the -hwaccel option, which selects a specific decoder. Each decoder may have specific limitations (for example, an H.264 decoder may only support the baseline profile). HW-assisted encoding is enabled through the use of a specific encoder (for example nvenc_h264).
HW-assisted filtering is only supported in a few filters; in that case you enable the OpenCL code through a filter option.
There are several hardware acceleration standards and APIs, some of which are supported to some extent by FFmpeg.
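As a rough illustration of these two patterns (the file names are placeholders, and whether a given accelerator is usable depends on your platform and build):
ffmpeg -hwaccel vdpau -i input.mkv -f null - -benchmark
ffmpeg -i input.mkv -c:v nvenc_h264 output.mp4
The first command decodes with a hardware accelerator and discards the output (useful for benchmarking); the second selects a hardware encoder by name.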
Platforms overview
API availability
| | Linux Intel | Linux NVIDIA | Windows Intel | Windows NVIDIA | OS X | Android | iOS | Raspberry Pi |
|---|---|---|---|---|---|---|---|---|
| CUDA | N | Y | N | Y | Y | N | N | N |
| Direct3D 11 | N | N | Y | Y | N | N | N | N |
| DXVA2 | N | N | Y | Y | N | N | N | N |
| MediaCodec | N | N | N | N | N | Y | N | N |
| MMAL | N | N | N | N | N | N | N | Y |
| NVENC | N | Y | N | Y | N | N | N | N |
| OpenCL | Y | Y | Y | Y | Y | N | N | N |
| Quick Sync | Y | N | Y | N | N | N | N | N |
| VA-API | Y | Y* | N | N | N | N | N | N |
| VDA† | N | N | N | N | Y | N | N | N |
| VDPAU | N | Y | N | N | N | N | N | N |
| VideoToolbox | N | N | N | N | Y | N | Y | N |
| XvMC | Y | Y | N | N | N | N | N | N |
* Semi-maintained.
† Deprecated by upstream.
FFmpeg implementations
| | AVHWAccel | Decoder | Encoder | CLI | Filtering | AVHWFramesContext |
|---|---|---|---|---|---|---|
| CUDA | N | N | N | N/A | Y* | Y |
| Direct3D 11 | Y | N | N/A | N | N | N |
| DXVA2 | Y | N | N/A | Y | N | N |
| MediaCodec | N | Y | N | N/A | N/A | N |
| MMAL | Y | Y | N/A | N | N/A | N |
| NVENC | N/A | N/A | Y | N/A | N/A | N |
| OpenCL | N/A | N/A | N/A | N/A | Y | N |
| Quick Sync | Y | Y | Y | Y | N | N |
| VA-API | Y | N | Y | Y | Y | Y |
| VDA | Y | Y | N/A | Y | N/A | N |
| VDPAU | Y | N† | N/A | Y | N | Y |
| VideoToolbox | Y | N | Y | Y | N | N |
| XvMC | Y | N† | N/A | N | N/A | N |
N/A: This feature is not directly supported by the API, or is not currently implementable.
* Work in progress. Where "Y" is indicated, the infrastructure is in place but no filters have been implemented yet.
† Actually available, but deprecated and should not be used.
VDPAU
Video Decode and Presentation API for Unix. Developed by NVIDIA for Unix/Linux systems.
To enable this you typically need the libvdpau development package in your distribution, and a compatible graphics card.
Note that VDPAU cannot be used to decode frames in memory: the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and the decoded image can then be accessed using the VDPAU API. This is not done automatically by FFmpeg, but must be done at the application level (see for example the ffmpeg_vdpau.c file used by ffmpeg.c).
Also note that with this API it is not possible to move the decoded frame back to RAM, for example in case you need to encode the decoded frame again (e.g. when transcoding on a server).
Several decoders are currently supported through VDPAU in libavcodec, in particular MPEG video, VC-1, H.264 and MPEG-4.
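For reference only, here is a minimal sketch of how an application can ask a decoder to use VDPAU through the newer AVHWDeviceContext API found in recent FFmpeg releases; this is not the actual ffmpeg_vdpau.c code, and names such as attach_vdpau are illustrative:
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
                                        const enum AVPixelFormat *fmts)
{
    const enum AVPixelFormat *p;
    for (p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_VDPAU)      /* prefer the VDPAU surface format */
            return *p;
    return fmts[0];                      /* otherwise fall back to software */
}

int attach_vdpau(AVCodecContext *dec_ctx)
{
    AVBufferRef *hw_dev = NULL;
    /* NULL device string = default X11 display; returns < 0 on failure. */
    int ret = av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VDPAU,
                                     NULL, NULL, 0);
    if (ret < 0)
        return ret;
    dec_ctx->hw_device_ctx = av_buffer_ref(hw_dev);
    dec_ctx->get_format    = get_hw_format;
    av_buffer_unref(&hw_dev);
    return 0;
}
With this setup the decoded AVFrames reference VDPAU video surfaces rather than buffers in system memory, which is why the presentation step also has to go through the VDPAU API as described above.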
XvMC
XVideo Motion Compensation. This is an extension of the X video extension (Xv) for the X Window System (and thus, again, only available on Unix/Linux).
Official specification is available here: http://www.xfree86.org/~mvojkovi/XvMC_API.txt
VA-API
Video Acceleration API (VA-API) is a non-proprietary and royalty-free open-source software library ("libVA") and API specification, initially developed by Intel but usable with devices from other vendors as well. Linux only: https://en.wikipedia.org/wiki/Video_Acceleration_API
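A quick command-line test of VA-API decoding might look as follows (the DRM render node path is a common default and may differ on your system):
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i INPUT -f null - -benchmark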
DXVA2
DirectX Video Acceleration API, developed by Microsoft (supported on Windows and the Xbox 360).
Link to MSDN documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/cc307941%28v=vs.85%29.aspx
Several decoders are currently supported, in particular H.264, MPEG-2, VC-1 and WMV3.
DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header. For MinGW this can be done by downloading the header maintained by VLC:
http://download.videolan.org/pub/contrib/dxva2api.h
and installing it in the include path (for example in /usr/include/).
For MinGW-w64, dxva2api.h is provided by default. One way to install MinGW-w64 is through a pacman repository; it can be installed using one of the following two commands, depending on the architecture:
pacman -S mingw-w64-i686-gcc
pacman -S mingw-w64-x86_64-gcc
To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.
To test decoding, use the following command:
ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark
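When integrating DXVA2 decoding in your own application rather than through the ffmpeg tool, the decoded frames come out with a hardware pixel format; in recent FFmpeg releases, where the decoder is set up through an AVHWDeviceContext, they can be copied back to system memory with av_hwframe_transfer_data(). A minimal sketch with illustrative names:
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Copy a decoded hardware (e.g. DXVA2) frame back to system memory.
 * hw_frame is assumed to come from avcodec_receive_frame() on a
 * DXVA2-accelerated decoder. Returns a new software frame or NULL. */
static AVFrame *download_hw_frame(const AVFrame *hw_frame)
{
    AVFrame *sw_frame = av_frame_alloc();
    if (!sw_frame)
        return NULL;
    /* av_hwframe_transfer_data() chooses a suitable software pixel format. */
    if (av_hwframe_transfer_data(sw_frame, hw_frame, 0) < 0) {
        fprintf(stderr, "Failed to download frame from the GPU\n");
        av_frame_free(&sw_frame);
        return NULL;
    }
    return sw_frame;
}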
VDA
Video Decode Acceleration framework, only supported on OS X. H.264 decoding is available in FFmpeg/libavcodec.
Developers documentation: https://developer.apple.com/library/mac/technotes/tn2267/_index.html
NVENC
NVENC is an API developed by NVIDIA which enables the use of NVIDIA GPU cards to perform H.264 and HEVC encoding. FFmpeg supports NVENC through the nvenc_h264 and nvenc_hevc encoders.
In order to enable it in FFmpeg you need:
- A supported GPU
- Supported drivers
- Locally installed nvEncodeAPI.h header files from the NVENC SDK
- ffmpeg configured with --enable-nvenc
Visit the NVIDIA Video Codec SDK page to download the SDK and to read more about the supported GPUs and supported drivers.
Usage example:
ffmpeg -i input -c:v nvenc_h264 -profile high444p -pixel_format yuv444p -preset default output.mp4
You can see available presets, other options, and encoder info with ffmpeg -h encoder=nvenc_h264 or ffmpeg -h encoder=nvenc_hevc.
Note: if you get the "No NVENC capable devices found" error, make sure you are encoding to a supported pixel format. See the encoder info as shown above.
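Another illustrative example, encoding HEVC at a fixed bitrate (the parameters are placeholders, and whether nvenc_hevc is available depends on your GPU and driver):
ffmpeg -i input -c:v nvenc_hevc -preset default -b:v 5M output.mp4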
Intel QSV
Intel QSV (Quick Sync Video) is a technology which allows decoding and encoding using recent Intel CPUs with an integrated GPU. Note that the GPU needs to be compatible with both QSV and OpenCL; some older QSV-enabled GPUs are not compatible with OpenCL. See: http://www.intel.com/content/www/us/en/architecture-and-technology/quick-sync-video/quick-sync-video-general.html https://software.intel.com/en-us/articles/intel-sdk-for-opencl-applications-2013-release-notes
To enable QSV support, you need the Intel Media SDK, which is integrated in the Intel Media Server Studio: https://software.intel.com/en-us/intel-media-server-studio
The Intel Media Server Studio is available for both Linux and Windows, and contains the libva and libdrm libraries, the libmfx dispatcher library and the Intel drivers. libmfx is the library which selects the codec depending on the system capabilities, falling back to a software implementation if the hardware-accelerated codec is not available.
FFmpeg QSV support relies on libmfx, but the library provided by Intel does not come with pkg-config files and a proper installer. Thus the easiest way to install the library is to use the libmfx version packaged by lu_zero here: https://github.com/lu-zero/mfx_dispatch
Requirements on Windows: install the Intel Media SDK packaged in the Intel Media Server Studio, which comes with a graphical installer, and a MinGW compilation environment (for example provided by MSYS2 with a corresponding Mingw-w64 package). Then you need to build libmfx and install it in a path recognized by pkg-config. For example, if you install in /usr/local then you need to update the $PKG_CONFIG_PATH environment variable to make it point to /usr/local/lib/pkgconfig.
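For example, building and installing lu_zero's mfx_dispatch might look roughly like this (assuming its autotools build; adjust the installation prefix to your environment):
git clone https://github.com/lu-zero/mfx_dispatch.git
cd mfx_dispatch
autoreconf -fiv
./configure --prefix=/usr/local
make && make install
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH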
Requirements on Linux: you need either to rely on the Intel Media Server Studio for Linux, or to use a recent enough supported system with the libva and libdrm libraries, the libva Intel driver, and the libmfx library packaged by lu_zero. Note: if you use the Intel Media Server Studio generic installation script, it may overwrite your system libraries and break the system.
Check the following website for updated information about the Intel Graphics stack on the various Linux platforms: https://01.org/linuxgraphics
To enable QSV support in the FFmpeg build, configure with --enable-libmfx.
Support for decoding and encoding is integrated in FFmpeg through several codecs identified by the _qsv suffix. In particular, it currently supports MPEG-2 video, VC-1 (decoding only), H.264 and H.265.
For example to encode to H.264 using h264_qsv, you can use the command:
ffmpeg -i INPUT -c:v h264_qsv -preset:v faster out.qsv.mp4
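QSV decoding is requested by naming the decoder explicitly before the input, for example (analogous to the DXVA2 benchmark above):
ffmpeg -c:v h264_qsv -i INPUT -f null - -benchmark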
OpenCL
Official website: https://www.khronos.org/opencl/
Currently OpenCL is only used in filtering (the deshake and unsharp filters). In order to use the OpenCL code you need to enable the build with --enable-opencl.
An API to use OpenCL from FFmpeg is provided in libavutil/opencl.h. No decoding/encoding is currently supported.
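As an illustration, the unsharp filter in FFmpeg releases of this era exposes an opencl option (it only has an effect in a build configured with --enable-opencl):
ffmpeg -i INPUT -vf unsharp=opencl=1 OUTPUT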
External resources
- http://multimedia.cx/eggs/mac-hwaccel-video/
- http://thread.gmane.org/gmane.comp.video.ffmpeg.libav.user/11691
- http://stackoverflow.com/questions/23289157/how-to-use-hardware-acceleration-with-ffmpeg
- https://gitorious.org/hwdecode-demos/
h264_mmal
h264_qsv ==> does this one correspond to VA-API?
h264_vda
h264_vdpau ==> VDPAU
June 27th, 2016, FFmpeg 3.1 "Laplace"
FFmpeg 3.1 "Laplace", a new major release, is now available! Some of the highlights:
- DXVA2-accelerated HEVC Main10 decoding