http://stackoverflow.com/questions/27279161/using-live555-to-stream-live-video-from-an-ip-camera-connected-to-an-h264-encode

I am using a custom Texas Instruments OMAP-L138 based board that consists of an ARM9 SoC and a DSP processor, connected to a camera. I am capturing the live video stream, sending it to the DSP for H264 encoding, and receiving the encoded data back over uPP in packets of 8192 bytes. I want to use the testH264VideoStreamer program supplied by Live555 to stream the H264-encoded video over RTSP. The code I have modified is shown below:

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <stdio.h>
#include <unistd.h> // for read()
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

UsageEnvironment* env;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;

//------------------------------------------------------------------------------
/* Open the uPP device file descriptor */
int stream = open("/dev/upp", O_RDONLY);

/* Static buffer of 8192 bytes (one uPP packet) that keeps its contents between invocations */
static uint8_t buf[8192];

//------------------------------------------------------------------------------
// play() acts as the forwarding mechanism that (re)starts streaming
//------------------------------------------------------------------------------
void play(); // forward declaration

//------------------------------------------------------------------------------
// MAIN FUNCTION / ENTRY POINT
//------------------------------------------------------------------------------
int main(int argc, char** argv)
{
// Begin by setting up our live555 usage environment:
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);

// Create 'groupsocks' for RTP and RTCP:
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
// Note: This is a multicast address. If you wish instead to stream
// using unicast, then you should use the "testOnDemandRTSPServer"
// test program - not this test program - as a model.

const unsigned short rtpPortNum = 18888;
const unsigned short rtcpPortNum = rtpPortNum + 1;
const unsigned char ttl = 255;

const Port rtpPort(rtpPortNum);
const Port rtcpPort(rtcpPortNum);

Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly(); // we're a SSM source
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly(); // we're a SSM source

// Create a 'H264 Video RTP' sink from the RTP 'groupsock':
OutPacketBuffer::maxSize = 100000; // enlarge this if encoded frames get truncated
videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96 /* dynamic RTP payload type */);

// Create (and start) a 'RTCP instance' for this RTP sink:
const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen + 1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0'; // just in case
RTCPInstance* rtcp
= RTCPInstance::createNew(*env, &rtcpGroupsock,
estimatedSessionBandwidth, CNAME,
videoSink, NULL /* we're a server */,
True /* we're a SSM source */);
// Note: This starts RTCP running automatically

/* Create the RTSP server: */
RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
if (rtspServer == NULL)
{
*env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
exit(1);
}
ServerMediaSession* sms
= ServerMediaSession::createNew(*env, "IPCAM @ TeReSol","UPP Buffer" ,
"Session streamed by \"testH264VideoStreamer\"",
True /*SSM*/);
sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
rtspServer->addServerMediaSession(sms);

char* url = rtspServer->rtspURL(sms);
*env << "Play this stream using the URL \"" << url << "\"\n";
delete[] url;

// Start the streaming:
*env << "Beginning streaming...\n";
play();

env->taskScheduler().doEventLoop(); // does not return

return 0; // only to prevent compiler warning
}

//----------------------------------------------------------------------------------
// afterPlaying() -> Defines what to do once a buffer is streamed
//----------------------------------------------------------------------------------
void afterPlaying(void* /*clientData*/)
{
*env << "...done reading from upp buffer\n";
//videoSink->stopPlaying();
//Medium::close(videoSource);
// Note that this also closes the input file that this source read from.

// Start playing once again to get the next block of data:
play();

/* We don't need to close the device as long as we keep reading from it.
   If we did want to close it, we would call close(stream); */
}

//----------------------------------------------------------------------------------------------
// play() Method -> Defines how to read and what to make of the input stream
//----------------------------------------------------------------------------------------------
void play()
{
/* Read sizeof(buf) bytes from the device file descriptor into buf */
read(stream, buf, sizeof buf);
printf("Reading from UPP into buffer\n");

/* Wrap the buffer as a 'byte-stream memory buffer source': */
ByteStreamMemoryBufferSource* buffSource
= ByteStreamMemoryBufferSource::createNew(*env, buf, sizeof buf,
False /* don't delete the buffer when the source is closed */);
if (buffSource == NULL)
{
*env << "Unable to read from \"Buffer\" as a byte-stream source\n";
exit(1);
}

FramedSource* videoES = buffSource;
// Create a framer for the Video Elementary Stream:
videoSource = H264VideoStreamFramer::createNew(*env, videoES, False);
// Finally, start playing:
*env << "Beginning to read from UPP...\n";
videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}

The problem is that although the code compiles successfully, I'm unable to get the desired output: VLC shows the RTSP stream as playing, but I can't see any video. I'd be grateful for any assistance in this matter. My description might be a little vague, but I'm happy to explain any part of it further.

1 Answer

Okay, so I figured out what needed to be done and am writing it up for the benefit of anyone who faces a similar issue. I needed to modify my testH264VideoStreamer.cpp and DeviceSource.cpp files so that the source reads data directly from the device (in my case the custom AM1808 board), stores it in a buffer, and streams it. The changes I made were:

testH264VideoStreamer.cpp

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

UsageEnvironment* env;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;

void play(); // forward declaration

//-------------------------------------------------------------------------
// Entry Point -> MAIN FUNCTION
//-------------------------------------------------------------------------
int main(int argc, char** argv)
{
// Begin by setting up our usage environment:
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);

// Create 'groupsocks' for RTP and RTCP:
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
// Note: This is a multicast address. If you wish instead to stream
// using unicast, then you should use the "testOnDemandRTSPServer"
// test program - not this test program - as a model.

const unsigned short rtpPortNum = 18888;
const unsigned short rtcpPortNum = rtpPortNum + 1;
const unsigned char ttl = 255;

const Port rtpPort(rtpPortNum);
const Port rtcpPort(rtcpPortNum);

Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly(); // we're a SSM source
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly(); // we're a SSM source

// Create a 'H264 Video RTP' sink from the RTP 'groupsock':
OutPacketBuffer::maxSize = 100000; // enlarge this if encoded frames get truncated
videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96 /* dynamic RTP payload type */);

// Create (and start) a 'RTCP instance' for this RTP sink:
const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen + 1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0'; // just in case
RTCPInstance* rtcp
= RTCPInstance::createNew(*env, &rtcpGroupsock,
estimatedSessionBandwidth, CNAME,
videoSink, NULL /* we're a server */,
True /* we're a SSM source */);
// Note: This starts RTCP running automatically

RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
if (rtspServer == NULL) {
*env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
exit(1);
}
ServerMediaSession* sms
= ServerMediaSession::createNew(*env, "ipcamera","UPP Buffer" ,
"Session streamed by \"testH264VideoStreamer\"",
True /*SSM*/);
sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
rtspServer->addServerMediaSession(sms);

char* url = rtspServer->rtspURL(sms);
*env << "Play this stream using the URL \"" << url << "\"\n";
delete[] url;

// Start the streaming:
*env << "Beginning streaming...\n";
play();

env->taskScheduler().doEventLoop(); // does not return

return 0; // only to prevent compiler warning
}
//----------------------------------------------------------------------
//AFTER PLAY FUNCTION CALLED HERE
//----------------------------------------------------------------------
void afterPlaying(void* /*clientData*/)
{
play();
}
//------------------------------------------------------------------------
//PLAY FUNCTION ()
//------------------------------------------------------------------------
void play()
{
// Open the device as the input source:
DeviceSource* devSource
= DeviceSource::createNew(*env);
if (devSource == NULL)
{
*env << "Unable to read from \"Buffer\" as a byte-stream source\n";
exit(1);
}

FramedSource* videoES = devSource;

// Create a framer for the Video Elementary Stream:
videoSource = H264VideoStreamFramer::createNew(*env, videoES, False);

// Finally, start playing:
*env << "Beginning to read from UPP...\n";
videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}

DeviceSource.cpp

#include "DeviceSource.hh"
#include <GroupsockHelper.hh> // for "gettimeofday()"
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

//static uint8_t *buf = (uint8_t*)malloc(102400);
static uint8_t buf[8192]; // one 8192-byte uPP packet
int upp_stream;
//static uint8_t *bufPtr = buf;

DeviceSource*
DeviceSource::createNew(UsageEnvironment& env)
{
return new DeviceSource(env);
}

EventTriggerId DeviceSource::eventTriggerId = 0;
unsigned DeviceSource::referenceCount = 0;

DeviceSource::DeviceSource(UsageEnvironment& env) : FramedSource(env)
{
if (referenceCount == 0)
{
// Open the uPP device the first time a DeviceSource is created:
upp_stream = open("/dev/upp", O_RDWR);
}
++referenceCount;

if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}

DeviceSource::~DeviceSource(void)
{
--referenceCount;
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
if (referenceCount == 0)
{
// Any global cleanup (e.g. closing the device) would go here.
}
}

int loop_count;

void DeviceSource::doGetNextFrame()
{
//for (loop_count=0; loop_count < 13; loop_count++)
//{
read(upp_stream, buf, sizeof(buf)); // read one 8192-byte uPP packet
//bufPtr+=8192;
//}
deliverFrame();
}

void DeviceSource::deliverFrame0(void* clientData)
{
((DeviceSource*)clientData)->deliverFrame();
}

void DeviceSource::deliverFrame()
{
if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet

u_int8_t* newFrameDataStart = (u_int8_t*) buf;
unsigned newFrameSize = sizeof(buf);

// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
} else {
fFrameSize = newFrameSize;
}
gettimeofday(&fPresentationTime, NULL);
memmove(fTo, newFrameDataStart, fFrameSize);
FramedSource::afterGetting(this);
}

After compiling the code with these modifications, I was able to receive the video stream in VLC player.
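One thing worth noting: in this version doGetNextFrame() performs a blocking read() inside the Live555 event loop. The stock DeviceSource template instead expects the data to arrive asynchronously and to be signalled with the event trigger. A minimal sketch of that variant (the reader thread, the activeSource global, and the pthread usage are illustrative additions, not part of the code above):

// Hedged sketch: a separate reader thread fills 'buf' and wakes the Live555
// event loop via triggerEvent(), so doGetNextFrame() never blocks.
// The thread would be started once, e.g. from the DeviceSource constructor
// with pthread_create(&tid, NULL, uppReaderThread, NULL).
#include <pthread.h>

static DeviceSource* activeSource = NULL; // hypothetical: set to 'this' in the constructor

static void* uppReaderThread(void*)
{
    for (;;) {
        // Blocking read happens outside the Live555 event loop:
        read(upp_stream, buf, sizeof(buf));
        if (activeSource != NULL) {
            // triggerEvent() is the one TaskScheduler call that is safe to make from another thread:
            activeSource->envir().taskScheduler().triggerEvent(
                DeviceSource::eventTriggerId, activeSource);
        }
    }
    return NULL;
}

// doGetNextFrame() then no longer needs to read anything itself:
void DeviceSource::doGetNextFrame()
{
    // Nothing to do here; deliverFrame() is invoked via deliverFrame0()
    // when the reader thread signals that a new packet is in 'buf'.
}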

Live555 to stream live video and audio in one RTSP stream

http://stackoverflow.com/questions/26082837/live555-to-stream-live-video-and-audio-in-one-rtsp-stream

I have been able to stream video on its own with live555, and audio on its own with live555.

But I want to have the video and audio playing in the same VLC session. My video is H264 encoded and my audio is AAC encoded. What do I need to do to pass these packets into a FramedSource?

What MediaSubsession/DeviceSource do I override, given that this is not a fixed file but live video/live audio?

Thanks in advance!

1 Answer

In order to stream video/H264 and audio/MPEG4-GENERIC in the same RTSP unicast session, you should do something like this:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh" int main()
{
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
BasicUsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
RTSPServer* rtspServer = RTSPServer::createNew(*env);
ServerMediaSession* sms = ServerMediaSession::createNew(*env);
sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, "test.264",false));
sms->addSubsession(ADTSAudioFileServerMediaSubsession::createNew(*env, "test.aac",false));
rtspServer->addServerMediaSession(sms);
}
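The file-based subsessions above cover the "one RTSP session" part; for live (non-file) input, the usual pattern is to derive your own subsession from OnDemandServerMediaSubsession and hand back your live FramedSource from it. A rough sketch for the video side (the class name, the bitrate figure, and the reuse of DeviceSource from the first answer are illustrative, not from the original post):

class LiveH264Subsession : public OnDemandServerMediaSubsession
{
public:
    static LiveH264Subsession* createNew(UsageEnvironment& env)
    {
        return new LiveH264Subsession(env);
    }

protected:
    LiveH264Subsession(UsageEnvironment& env)
        : OnDemandServerMediaSubsession(env, True /* reuseFirstSource */) {}

    // Called per client session: wrap the live source in an H264 framer
    virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                                unsigned& estBitrate)
    {
        estBitrate = 500; // kbps; a rough figure used for RTCP
        FramedSource* liveSource = DeviceSource::createNew(envir());
        return H264VideoStreamFramer::createNew(envir(), liveSource);
    }

    // Called to create the RTP sink that packetises the H264 NAL units
    virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                      unsigned char rtpPayloadTypeIfDynamic,
                                      FramedSource* /*inputSource*/)
    {
        return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
    }
};

An analogous subclass that returns your live AAC source and an audio RTPSink would cover the audio track; both subsessions are then added to the same ServerMediaSession, as in the file-based example above.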

For how to use triggerEvent, see these posts:

http://stackoverflow.com/questions/13863673/how-to-write-a-live555-framedsource-to-allow-me-to-stream-h-264-live

http://stackoverflow.com/questions/19427576/live555-x264-stream-live-source-based-on-testondemandrtspserver

OK, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it, so here it is.

You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I provide some of the source code for this below.

Essentially, feed your FramedSource into an H264VideoStreamDiscreteFramer, then feed that into an H264VideoRTPSink. Something like this:

scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);

framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer
= H264VideoStreamDiscreteFramer::createNew(*env, framedSource);

// initialise the RTP sink here; look at testH264VideoStreamer.cpp to find out how

videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
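The "RTP sink stuff" the comment refers to can be set up the same way as in testH264VideoStreamer.cpp near the top of this page; a condensed sketch using the stock test-program values:

// Condensed from testH264VideoStreamer.cpp above; 18888/255/96 are the stock defaults.
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);

const Port rtpPort(18888);
const Port rtcpPort(18888 + 1);
const unsigned char ttl = 255;

Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly();
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly();

OutPacketBuffer::maxSize = 100000;
videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);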

Now, in your main render loop, hand the backbuffer you've copied to system memory over to your FramedSource so it can be encoded, etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?
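In practice that hand-off is just a call to the AddToBuffer() method shown in the FramedSource listing below; something like this, where backBufferPixels is a placeholder for however you fetch the rendered RGB24 frame:

// Once per rendered frame (backBufferPixels is a hypothetical W x H RGB24 buffer):
framedSource->AddToBuffer(backBufferPixels, W * H * 3);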

My implementation is very much in a hacky state and has yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding (ouch), so I will have to look into that. But for all intents and purposes this Stack Overflow question is answered, because I was mostly after how to stream it. I hope this helps other people.

As for my FramedSource it looks a little something like this:

concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;

EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;

int W = 640; // capture width  (example values; match your source resolution)
int H = 480; // capture height

H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
{
return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}

H264FramedSource::H264FramedSource(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
: FramedSource(env),
fPreferredFrameSize(fMaxSize),
fPlayTimePerFrame(playTimePerFrame),
fLastPlayTime(0),
fCurIndex(0)
{
if (referenceCount == 0)
{
// Any one-time global initialisation would go here.
}
++referenceCount;

// x264 encoder setup (typical low-latency values; tune for your own use case):
x264_param_default_preset(&param, "veryfast", "zerolatency");
param.i_threads = 1;
param.i_width = W;
param.i_height = H;
param.i_fps_num = 30;
param.i_fps_den = 1;
// Intra refresh:
param.i_keyint_max = 30;
param.b_intra_refresh = 1;
// Rate control:
param.rc.i_rc_method = X264_RC_CRF;
param.rc.f_rf_constant = 25;
param.rc.f_rf_constant_max = 35;
param.i_sps_id = 7;
// For streaming:
param.b_repeat_headers = 1;
param.b_annexb = 1;
x264_param_apply_profile(&param, "baseline");

encoder = x264_encoder_open(&param);

pic_in.i_type = X264_TYPE_AUTO;
pic_in.i_qpplus1 = 0;
pic_in.img.i_csp = X264_CSP_I420;
pic_in.img.i_plane = 3;
x264_picture_alloc(&pic_in, X264_CSP_I420, W, H);

convertCtx = sws_getContext(W, H, PIX_FMT_RGB24, W, H, PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);

if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}

H264FramedSource::~H264FramedSource()
{
--referenceCount;
if (referenceCount == 0)
{
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}

void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
uint8_t* surfaceData = new uint8_t[surfaceSizeInBytes];
memcpy(surfaceData, buf, surfaceSizeInBytes);

// Convert the RGB24 surface to I420 for the encoder:
int srcstride = W * 3; // bytes per row of an RGB24 surface
sws_scale(convertCtx, &surfaceData, &srcstride, 0, H, pic_in.img.plane, pic_in.img.i_stride);

x264_nal_t* nals = NULL;
int i_nals = 0;
int frame_size = -1;
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

static bool finished = false;

if (frame_size >= 0)
{
static bool alreadydone = false;
if (!alreadydone)
{
// Emit SPS/PPS once up front:
x264_encoder_headers(encoder, &nals, &i_nals);
alreadydone = true;
}
for (int i = 0; i < i_nals; ++i)
{
m_queue.push(nals[i]);
}
}
delete [] surfaceData;
surfaceData = NULL;

// Wake up the Live555 event loop so the queued NAL units get delivered:
envir().taskScheduler().triggerEvent(eventTriggerId, this);
}

void H264FramedSource::doGetNextFrame()
{
deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData)
{
((H264FramedSource*)clientData)->deliverFrame();
}

void H264FramedSource::deliverFrame()
{
x264_nal_t nalToDeliver;

if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
// This is the first frame, so use the current time:
gettimeofday(&fPresentationTime, NULL);
} else {
// Increment by the play time of the previous data:
unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
fPresentationTime.tv_sec += uSeconds/1000000;
fPresentationTime.tv_usec = uSeconds%1000000;
}

// Remember the play time of this data:
fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
fDurationInMicroseconds = fLastPlayTime;
} else {
// We don't know a specific play time duration for this data,
// so just record the current time as being the 'presentation time':
gettimeofday(&fPresentationTime, NULL);
}

if (!m_queue.empty())
{
m_queue.wait_and_pop(nalToDeliver);

uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
unsigned newFrameSize = nalToDeliver.i_payload;

// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
}
else {
fFrameSize = newFrameSize;
}

memcpy(fTo, nalToDeliver.p_payload, fFrameSize);
FramedSource::afterGetting(this);
}
}
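One caveat worth checking if a client connects but shows nothing: H264VideoStreamDiscreteFramer expects each delivered NAL unit without the Annex B start code, while the encoder configuration above (b_annexb = 1) prefixes every NAL with one. A small, hedged adjustment you could make at the point where the NALs are queued is to skip that prefix:

// Hedged sketch: strip a leading 3- or 4-byte Annex B start code before queuing a NAL.
x264_nal_t nal = nals[i];
uint8_t* p = nal.p_payload;
if (nal.i_payload > 4 && p[0] == 0 && p[1] == 0) {
    int skip = (p[2] == 1) ? 3 : ((p[2] == 0 && p[3] == 1) ? 4 : 0);
    nal.p_payload += skip;
    nal.i_payload -= skip;
}
m_queue.push(nal);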
