Live555 Streaming from a live source
https://www.mail-archive.com/live-devel@lists.live555.com/msg05506.html
-----ask--------------------------------
Hi,
We are trying to stream from a live source with Live555. We implemented our own DeviceSource class, and in this class we implement doGetNextFrame in the following (logical) way. We removed all the unnecessary implementation details so you can see the idea.

If no frame is available, we do the following:

    nextTask() = envir().taskScheduler().scheduleDelayedTask(30000, (TaskFunc*)nextTime, this);

If a frame is available, we do the following:

    if (fFrameSize < fMaxSize)
    {
        memcpy(fTo, Buffer_getUserPtr(hEncBuf), fFrameSize); // copy the frame to Live555
        nextTask() = envir().taskScheduler().scheduleDelayedTask(0, (TaskFunc*)FramedSource::afterGetting, this);
    }
    else
    {
        // What should we do here? (We do not understand how to handle this case.)
    }

As you can see, we would like to feed Live555 frame by frame from the live source. However, after some calls to doGetNextFrame, fMaxSize is smaller than fFrameSize and the application is in a deadlock state. We do not understand what we should do in order to eliminate this state. We could give part of a frame to Live555, but then we would no longer be feeding the Live555 library frame by frame. (We could build a byte buffer between the live source and Live555, but we are not sure it is the right way.)
Please let us know the preferred way of handling this issue. Thanks,
Sagi
-----ans--------------------------------
The test "if (fFrameSize < fMaxSize)" should use "<=", not "<".
Also, I hope you are setting "fFrameSize" properly before you get to this "if" statement.
You can probably replace the last "scheduleDelayedTask()" statement with:
    FramedSource::afterGetting(this);
which is more efficient (and will avoid infinite recursion, because you're reading from a live source).
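Combining these points, the frame-available path copies at most fMaxSize bytes and reports any overflow via fNumTruncatedBytes before calling FramedSource::afterGetting(this). The size bookkeeping can be sketched as a standalone helper (DeliveryPlan and planDelivery are illustrative names, not part of the live555 API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Mirrors the bookkeeping a DeviceSource's doGetNextFrame() must do when
// a frame arrives: never copy more than maxSize (fMaxSize) bytes, and
// report the overflow (fNumTruncatedBytes) so the downstream object knows
// data was dropped, instead of stalling in a deadlock.
struct DeliveryPlan {
    std::size_t bytesToCopy;       // how many bytes to memcpy into fTo
    std::size_t numTruncatedBytes; // value to store in fNumTruncatedBytes
};

DeliveryPlan planDelivery(std::size_t frameSize, std::size_t maxSize) {
    DeliveryPlan p;
    p.bytesToCopy = std::min(frameSize, maxSize);
    p.numTruncatedBytes = frameSize - p.bytesToCopy;
    return p;
}
```

With a "*DiscreteFramer" object downstream (as the rest of the thread works out) and a downstream buffer large enough for one frame, numTruncatedBytes stays zero and whole frames are delivered every time.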
-----ask--------------------------------
Hi Ross,
We are setting fFrameSize to the size of the frame before the posted code. I am familiar with fNumTruncatedBytes, but as you say the data will be dropped, and we do not want that to happen.
I am not sure I understand your last statement: "make sure that your downstream object always has enough buffer space to avoid truncation - i.e., so that fMaxSize is always >= fFrameSize". How can I ensure this? The Live555 library requests exactly 150,000 bytes. We give it frame by frame, and on the last frame it is not the exact number, so we are in the situation of fMaxSize < fFrameSize.
If I understand you correctly, we have two options:
1. Feed Live555 frame by frame and, on the last frame, truncate the frame and lose the data.
2. Keep an internal buffer inside our DeviceSource in order to give Live555 parts of a frame on the last frame. This means that Live555 will handle the recognition of frames, and in this scenario I do not understand what fPresentationTime should be, because we are sending only part of a frame to Live555 and on the next call we will send the following part of the frame.
What is the preferred way of action? Thanks,
Sagi
-----ans--------------------------------
This is true only for the "StreamParser" class, which you should *not* be using, because you are delivering discrete frames - rather than a byte stream - to your downstream object. In particular, you should be using a "*DiscreteFramer" object downstream, and not a "*Framer".
What objects (classes) do you have 'downstream' from your input device, and what type of data (i.e., what codec) is your "DeviceSource" object trying to deliver? (This may help identify the problem.)
-----ask--------------------------------
Hi Ross,
Ok, we used the StreamParser class and this probably caused the problem we have.
This is our device class:
    class CapDeviceSource: public FramedSource {
We are trying to stream MPEG4 (later on we will move to H.264).
What is the best class to derive from instead of FramedSource in order to use a DiscreteFramer downstream object?
If I understood you correctly it is MPEG4VideoStreamDiscreteFramer, and we should implement the function doGetNextFrame, but looking at the code we thought it best to implement the function afterGettingFrame1, yet it is not virtual, so probably we are missing something.
Thanks,
Sagi
-----ans--------------------------------
Provided that your source object delivers one frame at a time, you should be able to feed it directly into a "MPEG4VideoStreamDiscreteFramer", with no modifications.
No, there's nothing more for you to implement; just use "MPEG4VideoStreamDiscreteFramer" as is. (For H.264, however, it'll be a bit more complicated; you will need to implement your own subclass of "H264VideoStreamFramer" for that.)
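Concretely, the wiring Ross describes might look like this (a sketch only, assuming the standard live555 headers; "CapDeviceSource::createNew" is a hypothetical factory standing in for however the asker's device class is constructed):

```cpp
#include "liveMedia.hh"

// Sketch: feed a discrete-frame device source straight into live555's
// MPEG4VideoStreamDiscreteFramer. This assumes CapDeviceSource delivers
// exactly one complete MPEG-4 frame per doGetNextFrame() call.
FramedSource* createVideoSource(UsageEnvironment& env) {
    CapDeviceSource* dev = CapDeviceSource::createNew(env); // hypothetical factory
    // The discrete framer does no byte-stream parsing (no StreamParser),
    // so it never demands a fixed 150,000-byte read from the source.
    return MPEG4VideoStreamDiscreteFramer::createNew(env, dev);
}
```

(For H.264 the analogous downstream object would be a custom subclass of "H264VideoStreamFramer" that accepts discrete NAL units, as the later answers explain.)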
-----ask--------------------------------
Hi Ross,
Thanks for the hint, we understood our problem. We used MPEG4VideoStreamFramer instead of MPEG4VideoStreamDiscreteFramer. We changed this and now it looks much better.
Again, thank you very much for your great support and library.
For the next stage we would like to use the H.264 codec, so I think we should write our own H264VideoStreamDiscreteFramer, is it correct? Thanks,
Sagi
-----ans--------------------------------
Yes, you need to write your own subclass of "H264VideoStreamFramer"; see http://www.live555.com/liveMedia/faq.html#h264-streaming
-----ask--------------------------------
Hi Ross,
We are checking audio stream support with Live555 and we would like to know if we can stream the following codecs through the library: AAC-LC and/or AAC-HE.
Thanks,
Sagi
-----ans--------------------------------
Yes, you can do so using a "MPEG4GenericRTPSink", created with appropriate parameters to specify AAC audio. (Note, for example, how "ADTSAudioFileServerMediaSubsession" streams AAC audio that comes from an ADTS-format file.)
-----ask--------------------------------
Hi Ross,
We have implemented a stream for AAC audio and it works great; we also implemented a stream for H.264 and it also works great. We would like to combine these two streams under one name.
Currently, we have one stream called h264Video and another stream called aacAudio (different streams, separate DESCRIBEs). We would like to have one stream called audioVideo which configures two setups, one for the video and one for the audio.
Can you please let us know the best way to implement it?
Thanks,
Sagi
-----ask--------------------------------
Hi Ross,
We successfully combined the two streams into one stream and it works great. The audio and video are on the same URL address. As far as we can tell the audio and video are synchronized, but we are not sure whether we need to handle this in some way (other than setting the presentation time) or whether it is all handled in your library. The only thing we are currently doing is to update the presentation time for the audio and for the video. We appreciate your input on this matter.
Thanks,
Sagi
-----ans--------------------------------
Good. As you figured out, you can do this just by creating a single "ServerMediaSession" object, and adding two separate "ServerMediaSubsessions" to it.
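That arrangement can be sketched as follows (a sketch, not the asker's actual code; "MyH264Subsession" and "MyAACSubsession" are placeholder names for whatever "OnDemandServerMediaSubsession" subclasses wrap the two live sources):

```cpp
// Sketch: one RTSP stream name ("audioVideo") carrying both media.
ServerMediaSession* sms = ServerMediaSession::createNew(
    env, "audioVideo", "audioVideo", "combined audio/video session");
sms->addSubsession(MyH264Subsession::createNew(env)); // video subsession
sms->addSubsession(MyAACSubsession::createNew(env));  // audio subsession
rtspServer->addServerMediaSession(sms);
// A DESCRIBE of rtsp://server/audioVideo now returns an SDP with two
// media descriptions; the client SETUPs and PLAYs each one.
```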
Yes, if the presentation times of the two streams are in sync, and aligned with 'wall clock' time (i.e., the time that you'd get by calling "gettimeofday()"), and you are using RTCP (which is implemented by default in "OnDemandServerMediaSubsession"), then you will see A/V synchronization in standards-compliant clients.
-----ask--------------------------------
How is the presentation time of the two streams synchronised?
I have to synchronise an MPEG-4 ES and a WAVE file. I am able to send the two streams together by creating a single "ServerMediaSession" and adding two separate "ServerMediaSubsession"s, but they are not synchronised.
In the case of MPEG-4 ES video, gettimeofday() gets called when the constructor of MPEGVideoStreamFramer is called, and in the case of WAVE, in WAVAudioFileSource::doGetNextFrame(). I think this is why the video and audio are not getting synchronised. In this case, how should I synchronise the audio and video?
Regards,
Nisha
-----ans--------------------------------
> How is the presentation time of the two streams synchronised?
Please read the FAQ!
You *must* set accurate "fPresentationTime" values for each frame of each of your sources. These values - and only these values - are what are used for synchronization. If the "fPresentationTime" values are not accurate - and synchronized - at the server, then they cannot possibly become synchronized at a client.
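In practice that means each source stamps fPresentationTime from the wall clock: the first frame from gettimeofday(), and subsequent frames by adding the frame duration and normalizing the microsecond field. The normalization step can be sketched standalone ("advancePresentationTime" is an illustrative helper, not a live555 call):

```cpp
#include <sys/time.h>
#include <cassert>

// Advance a wall-clock presentation time by one frame's duration,
// normalizing the microsecond field. For 25 fps video the step is
// 40000 us; for 1024-sample AAC frames at 48 kHz it is about 21333 us.
struct timeval advancePresentationTime(struct timeval pt, unsigned frameDurationUs) {
    pt.tv_usec += frameDurationUs;
    pt.tv_sec += pt.tv_usec / 1000000;
    pt.tv_usec %= 1000000;
    return pt;
}
```

Because both the audio and video sources then derive their stamps from the same gettimeofday() clock, the presentation times are synchronized at the server, and RTCP lets standards-compliant clients recover the alignment.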