Designing IP-Based Video Conferencing Systems: Dealing with Lip Synchronization
Reposted from: http://www.ciscopress.com/articles/article.asp?p=705533&seqNum=6
Correlating Timebases Using RTCP
The RTCP protocol specifies the use of RTCP packets to provide information that allows the sender to map the RTP domain of each stream into a common reference timebase on the sender, called Network Time Protocol (NTP) time. NTP time is also referred to as wall clock time because it is the common timebase used for all media transmitted by a sending endpoint. NTP time is simply a clock measured in seconds.
RTCP uses a separate wall clock because the sender may synchronize any combination of media streams, and therefore it might be inconvenient to favor any one stream as the reference timebase. For instance, a sender might transmit three video streams, all of which must be synchronized, but with no accompanying audio stream. In practice, most video conferencing endpoints send a single audio and video stream and often reuse the audio sample clock to derive the NTP wall clock. However, this generalized discussion assumes that the wall clock is separate from the capture clocks.
NTP
The wall clock, which provides the master reference for the streams on the sender endpoint, is in units of NTP time. However, it is important to bear in mind what NTP time is and what NTP time is not:
- NTP time as defined in the RTP specification is nothing more than a data format consisting of a 64-bit double word: The top 32 bits represent seconds, and the bottom 32 bits represent fractions of a second. The NTP time stamp can therefore represent time values to an accuracy of ± 0.1 nanoseconds (ns). (A code sketch of this format appears after this list.)
- The most widespread misconception related to the RTCP protocol is that it requires the use of an NTP time server to generate the NTP clock of the sender. An NTP time server provides a service over the network that allows clients to synchronize their clocks to the time server. The NTP protocol represents time as the number of seconds that have elapsed since January 1, 1900. However, NTP time as defined in the RTP spec does not require the use of an NTP time server. It is possible for RTP implementations to use an NTP time server to provide a reference timebase, but this usage is not necessary and is out of scope of the RTP specification. Indeed, most video conferencing implementations do not use an NTP time server as the source of the NTP wall clock.
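To make the layout concrete, here is a minimal sketch (not from the original article; the function names are hypothetical) that packs and unpacks a seconds value in the 64-bit NTP time stamp format:

```python
import struct

def to_ntp_timestamp(seconds: float) -> bytes:
    """Pack a seconds value into the 64-bit NTP format: the top 32 bits
    hold whole seconds, the bottom 32 bits hold fractions of a second."""
    whole = int(seconds)
    frac = int((seconds - whole) * (1 << 32))  # fraction in units of 2^-32 s
    return struct.pack("!II", whole, frac)

def from_ntp_timestamp(data: bytes) -> float:
    whole, frac = struct.unpack("!II", data)
    return whole + frac / (1 << 32)

# The fractional field has a resolution of 2^-32 s, about 0.23 ns.
assert from_ntp_timestamp(to_ntp_timestamp(1.5)) == 1.5
```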
NOTE
In the RTP/RTCP protocol, the "NTP time" does not need to come from an NTP time server; the sender can generate it directly from any reference clock. Often, the sender reuses the audio capture clock as the basis for the NTP time.
Forming RTCP Packets
Each RTP stream has an associated RTCP packet stream, and the sender transmits an RTCP packet once every few seconds, according to a formula given in RFC 3550. As a result, RTCP packets consume a small amount of bandwidth compared to the RTP media stream.
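As a rough illustration of why this overhead is small, the following sketch simplifies the RFC 3550 interval rules (it keeps the 5 percent bandwidth share, the 5-second minimum, and the randomization, but omits the sender/receiver split and the reconsideration algorithm):

```python
import random

def rtcp_interval(session_bw_bps: float, members: int,
                  avg_rtcp_size_bytes: float = 128.0) -> float:
    """Simplified RFC 3550 report interval: RTCP traffic is capped at
    5% of the session bandwidth, shared among all members, with a
    5-second minimum and randomization to avoid synchronized reports."""
    rtcp_bw = 0.05 * session_bw_bps / 8          # RTCP share, bytes/s
    interval = max(5.0, members * avg_rtcp_size_bytes / rtcp_bw)
    return interval * random.uniform(0.5, 1.5)

# A two-party call on a 768-kbps session reports on average about every
# 5 seconds, i.e., a few hundred bytes of RTCP against ~480 KB of media.
print(rtcp_interval(768_000, members=2))
```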
For each RTP stream, the sender issues RTCP packets at regular intervals, and those packets contain a pair of time stamps: an NTP time stamp, and the corresponding RTP time stamp associated with that RTP stream. This pair of time stamps communicates the relationship between the NTP time and RTP time for each media stream. The sender calculates the relationship between its NTP timebase and the RTP media stream by observing the value of the RTP media capture clock and the NTP wall clock in real time. The clocks have both an offset and a scale relationship, according to the following equation:
RTP/(RTP sample rate) = (NTP + offset) x scale
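As an illustration, the sender can estimate offset and scale by reading both clocks at two instants and solving the equation above for the two unknowns. A minimal sketch, assuming the paired clock reads at each instant are simultaneous (names are hypothetical):

```python
def fit_offset_scale(ntp1: float, rtp1: int, ntp2: float, rtp2: int,
                     sample_rate: int) -> tuple[float, float]:
    """Solve RTP/sample_rate = (NTP + offset) * scale from two
    simultaneous observations of the wall clock and the capture clock."""
    scale = ((rtp2 - rtp1) / sample_rate) / (ntp2 - ntp1)
    offset = rtp1 / (sample_rate * scale) - ntp1
    return offset, scale

# An 8 kHz audio capture clock observed twice, 10 s apart on the wall clock:
offset, scale = fit_offset_scale(100.0, 0, 110.0, 80_000, 8_000)
print(offset, scale)  # -> -100.0 1.0 (the clocks tick at the same rate)
```

A production sender would average many observation pairs to smooth out scheduling jitter in the clock reads.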
After determining this relationship by calculating the offset and scale values, the sender creates the RTCP packet in two steps:
- The sender first selects an NTP time stamp for the RTCP packet. The sender must calculate this time stamp carefully, because the time stamp must correspond to the real-time value of the NTP clock when the RTCP packet appears on the network. In other words, the sender must predict the precise time at which the RTCP packet will appear on the network and then use the corresponding NTP clock time as the value that will appear inside the RTCP packet. To perform this calculation, the sender must anticipate the network interface delay.
- After the sender determines the NTP time stamp for the RTCP packet, the sender calculates the corresponding RTP time stamp from the preceding relationship as follows:
RTP = ((NTP + offset) x scale) x sample_rate
The sender can now transmit the RTCP packet with the proper NTP and RTP time stamps.
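Putting the two steps together, the sender's report generation reduces to a few lines. In this sketch the names are hypothetical, and interface_delay stands for the anticipated network interface delay from step 1:

```python
def make_rtcp_timestamp_pair(now_ntp: float, interface_delay: float,
                             offset: float, scale: float,
                             sample_rate: int) -> tuple[float, int]:
    """Step 1: predict the NTP time at which the packet will appear on
    the network. Step 2: map that NTP time into the stream's RTP domain
    using RTP = ((NTP + offset) * scale) * sample_rate."""
    ntp_ts = now_ntp + interface_delay
    rtp_ts = int(((ntp_ts + offset) * scale) * sample_rate)
    return ntp_ts, rtp_ts

# Continuing the example above, for a packet leaving 2 ms from now:
print(make_rtcp_timestamp_pair(115.0, 0.002, -100.0, 1.0, 8_000))
# -> (115.002, 120016)
```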
Determining the values of offset and scale is nontrivial because the sender must figure out the NTP and RTP time stamps at the moment the capture sensor (microphone or camera) captures the data. For instance, to determine the exact point in time when the capture device samples the audio, the sender might need to take into account delays in the capture hardware. Typically, the audio capture device makes a new packet of audio data available to the main processor and then triggers an interrupt to allow the processor to retrieve the packet. When it processes this interrupt, the sender must calculate the NTP time of the first sample in each audio packet, that is, the moment when the sample entered the microphone. One method of calculating this time is to observe the NTP wall clock and then subtract the predicted latency through the audio capture hardware. However, a better way to map the captured samples to NTP time is for the capture device to provide two features:
- A way for the sender to read the device clock of the capture device in real time, and therefore correlate the capture device clock to NTP wall clock time.
- A way for the sender to correlate samples in the captured data to the capture device clock. The capture device can provide this functionality by adding its own capture device time stamp to each chunk of audio data.
From these two features, the sender can correlate audio samples to NTP wall clock time. The sender can then establish the relationship between NTP time and RTP time stamps by assigning RTP time stamps to the data.
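A sketch of how the sender might use those two features, assuming both clocks count seconds and drift is negligible over the measurement window (a real implementation would re-measure and filter the offset):

```python
import time

class CaptureClockMapper:
    """Correlates capture-device time stamps with NTP wall clock time,
    given (a) a readable device clock and (b) a device time stamp on
    each captured chunk."""

    def __init__(self, read_device_clock, read_ntp_clock):
        # Feature 1: read both clocks back to back to learn their offset.
        self.device_to_ntp = read_ntp_clock() - read_device_clock()

    def chunk_capture_ntp(self, chunk_device_ts: float) -> float:
        # Feature 2: each chunk carries a device time stamp, so the NTP
        # capture time of its first sample is one offset away.
        return chunk_device_ts + self.device_to_ntp

# Stand-in clocks for illustration only:
mapper = CaptureClockMapper(time.monotonic, time.time)
```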
The same principles apply to the video capture device. The sender must correlate a frame of video to the NTP time at which the camera CCD imager captures each field. The sender establishes the RTP/NTP mapping for the video stream by assigning RTP values to the video frames.
NOTE
The Microsoft DirectX streaming technology used for capture devices defines source filters, which are capture drivers that generate packets of captured data, along with time stamps. Hardware vendors write source filters for their capture hardware. Applications that use these source filters rely entirely on the source filters to provide data with accurate time stamps. If a source filter provides output streams for audio and video, it is critical that the source filter use kernel-level routines to ensure that the time stamps on the packets accurately reflect the time at which the hardware samples the media.
Using RTCP for Media Synchronization
The method of synchronizing audio and video is to consider the audio stream the master and to delay the video as necessary to achieve lip sync. However, this scheme has one wrinkle: If video arrives later than audio, the audio stream, not the video stream, must be delayed. In this case, audio is still considered the master; however, the receiver must first add latency to the audio jitter buffer to make the audio "the most delayed stream" and to ensure that synchronization can be achieved by delaying video, not audio.
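In code, the wrinkle amounts to padding the audio jitter buffer whenever video is the more-delayed stream; a minimal sketch (hypothetical names, latencies in seconds):

```python
def extra_audio_jitter_delay(audio_latency: float,
                             video_latency: float) -> float:
    """Latency to add to the audio jitter buffer so that audio becomes
    the most-delayed stream; sync is then achieved by delaying video."""
    return max(0.0, video_latency - audio_latency)
```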
In addition, the receiver must determine a relationship between the local audio device timebase ATB and the local video device timebase VTB on the receiver by calculating an offset AtoV:
VTB = ATB/(audio sample rate) + AtoV
This equation converts the local audio device timebase ATB into units of seconds by dividing the audio device time stamp by the audio sample rate. The receiver determines the offset AtoV by simultaneously observing Vtime, the value of the real-time video device clock, and Atime, the value of the real-time audio device clock. Then
AtoV = Vtime - Atime/(audio sample rate)
Now that the receiver knows AtoV, it can establish the final mapping for synchronization.
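For example, with an audio device clock assumed to run at 8 kHz (counting samples) and a video device clock counting seconds, the receiver's calculation reduces to the following sketch (names are hypothetical):

```python
AUDIO_SAMPLE_RATE = 8_000  # assumed audio device clock rate, in Hz

def compute_atov(vtime: float, atime: float) -> float:
    """AtoV = Vtime - Atime/(audio sample rate), from simultaneous
    reads of the video and audio device clocks."""
    return vtime - atime / AUDIO_SAMPLE_RATE

def atb_to_vtb(atb: float, atov: float) -> float:
    """VTB = ATB/(audio sample rate) + AtoV."""
    return atb / AUDIO_SAMPLE_RATE + atov

# Clocks read simultaneously: video clock at 42.0 s, audio clock at
# 160 000 samples (20.0 s), so AtoV = 22.0 s.
atov = compute_atov(42.0, 160_000)
print(atb_to_vtb(168_000, atov))  # audio time 21.0 s -> video time 43.0 s
```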
NOTE
The Microsoft DirectX streaming technology used for playout devices defines render filters, which are essentially playback drivers that accept packets of data with time stamps that are relative to a global system time. The render filters play the media at the time indicated on the time stamp. Hardware vendors write filters for their playout hardware. Applications that use these render filters rely entirely on the render filters to play data accurately, based on the time stamps. A DirectX streaming render filter provides input connections in the form of input pins. If a render filter provides input pins for audio and video, it is critical that the render filter use kernel-level procedures to ensure that the time at which the hardware displays the media is accurately reflected by the time stamp on the packet.
To establish this mapping, two criteria must be met:
- At least one RTP packet must arrive from each stream.
- The receiver must receive at least one RTCP packet for each stream, to associate each RTP timebase with the common NTP timebase of the sender.
For this method, the audio is the master stream, and the video is the slave stream. The general approach is for the receiver to maintain buffer-level management for the audio stream and to adapt the playout of the video stream by transforming the video RTP time stamp to a video device time stamp that properly slaves to the audio stream.
When a video frame arrives at the receiver with an RTP time stamp RTPv, the receiver maps the RTP time stamp RTPv to the video device time stamp VTB using four steps, as illustrated in Figure 7-14.
Figure 7-14 Audio and Video Synchronization
This sequence of steps maps the RTP video time stamp into the audio RTP timebase and then back into the video device timebase. The receiver follows these steps in order:
- Map the video RTP time stamp RTPv into the sender NTP time domain, using the mapping established by the RTP/NTP time stamp pairs in the video RTCP packets.
- From this NTP time stamp, calculate the corresponding audio RTP time stamp from the sender using the mapping established by the RTP/NTP time stamp pairs in the audio RTCP packets. At this point, the video RTP time stamp is mapped into the audio RTP timebase.
- From this audio RTP time stamp, calculate the corresponding time stamp in the audio device timebase by using the offset Krl, which the receiver establishes when it performs buffer-level management for the audio stream. The result is a time stamp in the audio device timebase ATB.
- From ATB, calculate the corresponding time stamp in the video device timebase VTB using the offset AtoV.
The receiver now ensures that the video frame with RTP time stamp RTPv will play on the video presentation device at the calculated local video device timebase VTB.
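The four steps can be collapsed into one function. The sketch below takes the two RTCP-derived mappings as callables and assumes, for illustration only, that Krl is an additive offset in audio sample units (all names are hypothetical):

```python
def rtpv_to_vtb(rtpv: int,
                video_rtp_to_ntp,     # from the video RTCP time stamp pairs
                ntp_to_audio_rtp,     # from the audio RTCP time stamp pairs
                krl: float,           # audio RTP -> audio device offset
                atov: float,
                audio_sample_rate: int = 8_000) -> float:
    """Map a video RTP time stamp to a video device time stamp."""
    ntp = video_rtp_to_ntp(rtpv)            # step 1: video RTP -> sender NTP
    rtpa = ntp_to_audio_rtp(ntp)            # step 2: sender NTP -> audio RTP
    atb = rtpa + krl                        # step 3: audio RTP -> ATB via Krl
    return atb / audio_sample_rate + atov   # step 4: ATB -> VTB via AtoV
```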
Lip Sync Policy
The receiver may decide not to attempt to achieve lip sync for synchronized audio and video streams in certain circumstances, even if lip sync is possible. There are two scenarios in which this situation might occur:
- Excessive audio delay—If the receiver must delay audio to establish lip sync, the receiver might instead choose the lower audio latency of unsynchronized streams, because lower end-to-end audio latency provides the best real-time interaction. The receiver can make this determination after it establishes buffer management for both the audio and video streams. If the audio stream is the most-delayed stream, the receiver can opt to delay the video stream to achieve lip sync; if the video stream is the most-delayed stream, however, the receiver might opt to avoid delaying audio and forgo lip sync.
- Excessive video delay—If the receiver must delay video by a significant duration to achieve lip sync, on the order of a second or more, the receiver might need to store a large amount of video bitstream in a delay buffer. For high bit rate video streams, the amount of memory required to store this video data might exceed the available memory in the receiver. In this case, the receiver may opt to set an upper limit on the maximum delay of the video stream to accommodate the limited memory or forego video delay altogether.
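A receiver might encode this policy roughly as follows; the threshold and names are illustrative, not from the article:

```python
MAX_VIDEO_DELAY = 1.0  # illustrative memory-driven cap, in seconds

def lip_sync_plan(audio_latency: float, video_latency: float):
    """Decide whether, and by how much, to delay video for lip sync."""
    if video_latency > audio_latency:
        # Sync would require delaying audio, hurting real-time
        # interaction: run the streams unsynchronized instead.
        return ("unsynchronized", 0.0)
    video_delay = audio_latency - video_latency
    if video_delay > MAX_VIDEO_DELAY:
        # The delay buffer would exceed available memory: cap it.
        return ("delay_video_capped", MAX_VIDEO_DELAY)
    return ("delay_video", video_delay)
```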