New FFmpeg versions (October 2016 and later) support hardware decoding
FFmpeg provides a subsystem for hardware acceleration.
Hardware acceleration allows the use of specific devices (usually a graphics card or other dedicated hardware) to perform multimedia processing. This makes it possible to offload demanding computations to dedicated hardware, freeing the CPU from them. Typically, hardware acceleration enables specific hardware devices (usually the GPU) to perform operations related to decoding and encoding video streams, or to filtering video.
When using the ffmpeg tool, HW-assisted decoding is enabled through the -hwaccel option, which enables a specific decoder. Each decoder may have specific limitations (for example, an H.264 decoder may only support the baseline profile). HW-assisted encoding is enabled through the use of a specific encoder (for example h264_nvenc). HW-assisted filtering is only supported in a few filters, and in that case you enable the OpenCL code through a filter option.
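As a minimal sketch (the file names are placeholders, and the vdpau hwaccel / h264_nvenc encoder are just examples for an NVIDIA setup; pick whatever matches your platform from the tables below):
ffmpeg -hwaccel vdpau -i input.mp4 -f null -
ffmpeg -i input.mp4 -c:v h264_nvenc output.mp4
The first command decodes with HW assistance and discards the output; the second encodes with a HW-assisted encoder while decoding in software.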
There are several hardware acceleration standards and APIs, some of which are supported to some extent by FFmpeg.
Platforms overview
API availability
| API | Linux Intel | Linux NVIDIA | Windows Intel | Windows NVIDIA | OS X | Android | iOS | Raspberry Pi |
|---|---|---|---|---|---|---|---|---|
| CUDA | N | Y | N | Y | Y | N | N | N |
| Direct3D 11 | N | N | Y | Y | N | N | N | N |
| DXVA2 | N | N | Y | Y | N | N | N | N |
| MediaCodec | N | N | N | N | N | Y | N | N |
| MMAL | N | N | N | N | N | N | N | Y |
| NVENC | N | Y | N | Y | N | N | N | N |
| OpenCL | Y | Y | Y | Y | Y | N | N | N |
| Quick Sync | Y | N | Y | N | N | N | N | N |
| VA-API | Y | Y* | N | N | N | N | N | N |
| VDA† | N | N | N | N | Y | N | N | N |
| VDPAU | N | Y | N | N | N | N | N | N |
| VideoToolbox | N | N | N | N | Y | N | Y | N |
| XvMC | Y | Y | N | N | N | N | N | N |
* Semi-maintained.
† Deprecated by upstream.
FFmpeg implementations
| API | AVHWAccel | Decoder | Encoder | CLI | Filtering | AVHWFramesContext |
|---|---|---|---|---|---|---|
| CUDA ¹ | Y | Y | N ² | Y | Y | Y |
| Direct3D 11 | Y | N | N/A | N | N | N |
| DXVA2 | Y | N | N/A | Y | N | Y |
| MediaCodec | Y | Y | N | N/A | N/A | N |
| MMAL | Y | Y | N/A | N | N/A | N |
| NVENC | N/A | N ³ | Y | Y | N/A | N |
| OpenCL | N/A | N/A | N/A | N/A | Y | N |
| Quick Sync | Y | Y | Y | Y | N | N * |
| VA-API | Y | N | Y | Y | Y | Y |
| VDA | Y | Y | N/A | Y | N/A | N |
| VDPAU | Y | N † | N/A | Y | N | Y |
| VideoToolbox | Y | N | Y | Y | N | N |
| XvMC | Y | N † | N/A | N | N/A | N |
N/A This feature is not directly supported by the API, or is not currently implementable.
* Work in progress. If "Y" is indicated, infrastructure is in place but no filters have been implemented yet.
† Actually yes, but it is deprecated for technical reasons and should not be used.
¹ Also known as "CUDA Video Decoding API", "CUVID", or "NvDecode".
² See NVENC.
³ See CUDA.
VDPAU
Video Decode and Presentation API for Unix. Developed by NVIDIA for Unix/Linux systems. To enable this you typically need the libvdpau development package from your distribution, and a compatible graphics card.
Note that VDPAU cannot be used to decode frames in system memory: the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and the decoded image can then be accessed using the VDPAU API. This is not done automatically by FFmpeg, but must be done at the application level (see for example the ffmpeg_vdpau.c file used by ffmpeg.c). Also, note that with this API it is not possible to move the decoded frame back to RAM, for example in case you need to re-encode the decoded frame (e.g. when doing transcoding on a server).
Several decoders are currently supported through VDPAU in libavcodec, in particular H.264, MPEG-1/2/4, and VC-1.
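With the ffmpeg tool, VDPAU-assisted decoding can be exercised roughly as follows (a sketch only; INPUT is a placeholder and the null muxer simply discards the decoded frames):
ffmpeg -hwaccel vdpau -i INPUT -f null - -benchmark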
XvMC
XVideo Motion Compensation. This is an extension of the X video extension (Xv) for the X Window System (and is thus again only available on Unix/Linux).
Official specification is available here: http://www.xfree86.org/~mvojkovi/XvMC_API.txt
VA-API
Video Acceleration API (VA-API) is a non-proprietary and royalty-free open source software library ("libVA") and API specification, initially developed by Intel but usable in combination with hardware from other vendors as well. Linux only: https://en.wikipedia.org/wiki/Video_Acceleration_API
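A minimal decoding test through the ffmpeg tool might look like the following (a sketch only; INPUT is a placeholder, and the DRM render node /dev/dri/renderD128 passed via -vaapi_device is an assumption that depends on your system):
ffmpeg -vaapi_device /dev/dri/renderD128 -hwaccel vaapi -i INPUT -f null - -benchmark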
DXVA2
DirectX Video Acceleration API, developed by Microsoft (supported on Windows and the Xbox 360).
Link to MSDN documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/cc307941%28v=vs.85%29.aspx
Several decoders are currently supported, in particular H.264, MPEG-2, VC-1 and WMV3.
DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header. For MinGW this can be done by downloading the header maintained by VLC:
http://download.videolan.org/pub/contrib/dxva2api.h
and installing it in the include path (for example in /usr/include/).
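For example, under a MinGW/MSYS shell this could be done roughly as follows (a sketch; the /usr/include/ destination is an assumption and depends on your toolchain layout):
wget http://download.videolan.org/pub/contrib/dxva2api.h
cp dxva2api.h /usr/include/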
For MinGW-w64, dxva2api.h is provided by default. One way to install MinGW-w64 is through a pacman repository; it can be installed using one of the two following commands, depending on the architecture:
pacman -S mingw-w64-i686-gcc
pacman -S mingw-w64-x86_64-gcc
To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.
To test decoding, use the following command:
ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark
VDA
Video Decoding API, only supported on OS X. H.264 decoding is available in FFmpeg/libavcodec.
Developers documentation: https://developer.apple.com/library/mac/technotes/tn2267/_index.html
NVENC
NVENC is an API developed by NVIDIA which enables the use of NVIDIA GPU cards to perform H.264 and HEVC encoding. FFmpeg supports NVENC through the h264_nvenc and hevc_nvenc encoders. In order to enable it in FFmpeg you need:
- A supported GPU
- Supported drivers
- ffmpeg configured without --disable-nvenc
Visit NVIDIA Video Codec SDK to download the SDK and to read more about the supported GPUs and supported drivers.
Usage example:
ffmpeg -i input -c:v h264_nvenc -profile:v high444p -pix_fmt yuv444p -preset default output.mp4
You can see available presets, other options, and encoder info with ffmpeg -h encoder=h264_nvenc or ffmpeg -h encoder=hevc_nvenc.
Note: If you get the No NVENC capable devices found error make sure you're encoding to a supported pixel format. See encoder info as shown above.
CUDA/CUVID/NvDecode
CUVID, which Nvidia now also calls NVDEC, can be used for decoding on Windows and Linux. In combination with NVENC it offers full hardware transcoding.
CUVID offers decoders for H.264, HEVC, MJPEG, MPEG-1/2/4, VP8/9 and VC-1. Codec support varies by hardware; the full set of codecs is only available on Pascal hardware, which adds VP9 and 10-bit support.
While decoding 10-bit video is supported, it is currently not possible to do full hardware transcoding of it (see the partial hardware example below).
Sample decode using CUVID; in this case the cuvid decoder copies the frames to system memory:
ffmpeg -c:v h264_cuvid -i input output.mkv
Full hardware transcode with CUVID and NVENC:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input -c:v h264_nvenc -preset slow output.mkv
Partial hardware transcode, with frames passed through system memory (this is necessary for transcoding 10-bit content):
ffmpeg -c:v h264_cuvid -i input -c:v h264_nvenc -preset slow output.mkv
If ffmpeg was compiled with support for libnpp, it can be used to insert a GPU based scaler into the chain:
ffmpeg -hwaccel_device 0 -hwaccel cuvid -c:v h264_cuvid -i input -vf scale_npp=-1:720 -c:v h264_nvenc -preset slow output.mkv
The -hwaccel_device option can be used to specify the GPU to be used by the cuvid hwaccel in ffmpeg.
Intel QSV
Intel QSV (Quick Sync Video) is a technology which allows decoding and encoding using the integrated GPU on recent Intel CPUs. Note that the GPU needs to be compatible with both QSV and OpenCL; some (older) QSV-enabled GPUs aren't compatible with OpenCL. See: http://www.intel.com/content/www/us/en/architecture-and-technology/quick-sync-video/quick-sync-video-general.html https://software.intel.com/en-us/articles/intel-sdk-for-opencl-applications-2013-release-notes
To enable QSV support, you need the Intel Media SDK integrated in the Intel Media Server Studio: https://software.intel.com/en-us/intel-media-server-studio
The Intel Media Server Studio is available for both Linux and Windows, and contains the libva and libdrm libraries, the libmfx dispatcher library and the Intel drivers. libmfx is the library which selects the codec depending on the system capabilities, falling back to a software implementation if the hardware-accelerated codec is not available.
FFmpeg QSV support relies on libmfx, but the library provided by Intel does not come with pkg-config files or a proper installer. Thus the easiest way to install the library is to use the libmfx version packaged by lu_zero here: https://github.com/lu-zero/mfx_dispatch
Requirements on Windows: install the Intel Media SDK packaged in the Intel Media Server Studio, which comes with a graphical installer, and a MinGW compilation environment (for example provided by MSYS2 with a corresponding Mingw-w64 package). Then you need to build libmfx and install it in a path recognized by pkg-config. For example, if you install it in /usr/local then you need to update the $PKG_CONFIG_PATH environment variable to make it point to /usr/local/lib/pkgconfig.
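A rough build sequence for the lu_zero dispatcher might look like this (a hedged sketch; the autotools-based build and the /usr/local prefix are assumptions, check the repository's README for the authoritative steps):
git clone https://github.com/lu-zero/mfx_dispatch
cd mfx_dispatch
autoreconf -fiv
./configure --prefix=/usr/local
make && make install
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH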
Requirements on Linux: you either need to rely on the Intel Media Server Studio for Linux, or use a recent enough supported system with the libva and libdrm libraries, the libva Intel drivers, and the libmfx library packaged by lu_zero. Note: in case you use the Intel Media Server Studio generic installation script, it may overwrite your system libraries and break the system.
Check the following website for updated information about the Intel Graphics stack on the various Linux platforms: https://01.org/linuxgraphics
To enable QSV support in the FFmpeg build, configure with --enable-libmfx.
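For reference, the configure invocation might look like this (a sketch; assuming libmfx was installed under /usr/local as above, and omitting any other options your build needs):
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure --enable-libmfx
make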
Support for decoding and encoding is integrated in FFmpeg through several codecs identified by the _qsv suffix. In particular, it currently supports MPEG-2 video, VC-1 (decoding only), H.264 and H.265.
For example to encode to H.264 using h264_qsv, you can use the command:
ffmpeg -i INPUT -c:v h264_qsv -preset:v faster out.qsv.mp4
If you have a Kaby Lake CPU, you can encode with HEVC using hevc_qsv:
ffmpeg -i INPUT -c:v hevc_qsv -load_plugin hevc_hw -preset:v faster out.qsv.mp4
OpenCL
Official website:
https://www.khronos.org/opencl/
Currently OpenCL is only used in filtering (the deshake and unsharp filters). In order to use OpenCL code you need to enable the build with --enable-opencl. An API for using OpenCL from FFmpeg is provided in libavutil/opencl.h. No decoding/encoding is supported yet.
For --enable-opencl to work you basically need to install your graphics card drivers and the vendor's OpenCL SDK, then use its .lib files and headers.
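As an illustration, enabling the OpenCL code path of the unsharp filter might look like this (a hedged sketch; INPUT is a placeholder, and the opencl filter option is only honored in builds configured with --enable-opencl):
ffmpeg -i INPUT -vf unsharp=opencl=1 output.mp4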
AMD VCE
AMD VCE is exposed through VA-API on Linux. For Windows there have been port attempts, but nothing official yet.
External resources
- http://multimedia.cx/eggs/mac-hwaccel-video/
- http://thread.gmane.org/gmane.comp.video.ffmpeg.libav.user/11691
- http://stackoverflow.com/questions/23289157/how-to-use-hardware-acceleration-with-ffmpeg
- https://gitorious.org/hwdecode-demos/
http://trac.ffmpeg.org/wiki/HWAccelIntro