The previous article in this series, "(二)Audio子系统之new AudioRecord()", covered how the Audio system creates the AudioRecord object and the input stream, and how the RecordThread is created. Next we continue with the implementation of startRecording() in AudioRecord.

  

Function prototype:

public void startRecording() throws IllegalStateException

Purpose:

    Starts recording.

Parameters:

    None

Return value:

    None

Throws:

    IllegalStateException if the AudioRecord has not been properly initialized.

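Before stepping into the framework, here is a minimal native-client sketch of the same path (hypothetical code: the constructor arguments rely on the default parameters declared in this tree's AudioRecord.h, and error handling is omitted). The Java startRecording() analyzed below ultimately lands in this native AudioRecord::start():

#include <media/AudioRecord.h>

using namespace android;

int main()
{
    // 44.1 kHz, 16-bit mono capture from the mic source
    sp<AudioRecord> rec = new AudioRecord(AUDIO_SOURCE_MIC, 44100,
            AUDIO_FORMAT_PCM_16_BIT, AUDIO_CHANNEL_IN_MONO);
    if (rec->initCheck() != NO_ERROR) return -1;

    rec->start();                             // the native start() analyzed below
    uint8_t buf[4096];
    ssize_t n = rec->read(buf, sizeof(buf));  // TRANSFER_SYNC: blocks until data arrives
    rec->stop();
    return 0;
}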
Now let's step into the framework and look at the concrete implementation.

frameworks/base/media/java/android/media/AudioRecord.java

public void startRecording()
throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("startRecording() called on an "
                + "uninitialized AudioRecord.");
    }

    // start recording
    synchronized(mRecordingStateLock) {
        if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) {
            handleFullVolumeRec(true);
            mRecordingState = RECORDSTATE_RECORDING;
        }
    }
}

First it checks whether initialization has completed; as shown in the previous article, mState is already STATE_INITIALIZED at this point. So we move on to the native_start() function.

frameworks/base/core/jni/android_media_AudioRecord.cpp

static jint
android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return (jint) AUDIO_JAVA_ERROR;
    }

    return nativeToJavaStatus(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}

Moving down into lpRecorder->start:

frameworks\av\media\libmedia\AudioRecord.cpp

status_t AudioRecord::start(AudioSystem::sync_event_t event, int triggerSession)
{
    AutoMutex lock(mLock);
    if (mActive) {
        return NO_ERROR;
    }

    // reset current position as seen by client to 0
    mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());
    // force refresh of remaining frames by processAudioBuffer() as last
    // read before stop could be partial.
    mRefreshRemaining = true;

    mNewPosition = mProxy->getPosition() + mUpdatePeriod;
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);

    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        ALOGV("mAudioRecord->start()");
        status = mAudioRecord->start(event, triggerSession);
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreRecord_l("start");
    }

    if (status != NO_ERROR) {
        ALOGE("start() status %d", status);
    } else {
        mActive = true;
        sp<AudioRecordThread> t = mAudioRecordThread;
        if (t != 0) {
            t->resume();
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }
    }

    return status;
}

This function does the following:

1. Reset the recording buffer's write position, as seen by the client, to 0; the layout of the recording buffer was covered in the first article;

2. Set mRefreshRemaining to true; as the comment says, it forces a refresh of the remaining frames. The role of this variable will stand out later, so no rush;

3. Load flags from mCblk->mFlags; here it is 0x0;

4. On this first pass we definitely take the mAudioRecord->start() path;

5. If start fails, restoreRecord_l() is called to rebuild the input stream path; that function was analyzed in the previous article;

6. Call resume() on the AudioRecordThread;

We'll focus on analyzing steps 4 and 6.

First, step 4 of AudioRecord.cpp::start(): mAudioRecord->start()

mAudioRecord is of type sp<IAudioRecord>, i.e. it is the Bp (proxy) end of a Binder interface, so we need to find the corresponding BnAudioRecord.
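As a reminder of what the Bp end does, here is a simplified sketch of the proxy side, modeled on the usual layout of IAudioRecord.cpp (the transaction code name START and the exact marshalling order are assumptions):

status_t BpAudioRecord::start(int event, int triggerSession)
{
    Parcel data, reply;
    data.writeInterfaceToken(IAudioRecord::getInterfaceDescriptor());
    data.writeInt32(event);
    data.writeInt32(triggerSession);
    // ships the call across Binder to the Bn side living in AudioFlinger
    status_t status = remote()->transact(START, data, &reply);
    if (status == NO_ERROR) {
        status = reply.readInt32();
    }
    return status;
}

The Bn end that receives this transaction is defined in AudioFlinger.h: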

frameworks\av\services\audioflinger\AudioFlinger.h

// server side of the client's IAudioRecord
class RecordHandle : public android::BnAudioRecord {
public:
    RecordHandle(const sp<RecordThread::RecordTrack>& recordTrack);
    virtual ~RecordHandle();
    virtual status_t start(int /*AudioSystem::sync_event_t*/ event, int triggerSession);
    virtual void stop();
    virtual status_t onTransact(
            uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);
private:
    const sp<RecordThread::RecordTrack> mRecordTrack; // for use from destructor
    void stop_nonvirtual();
};

So next we need to find where the RecordHandle class is implemented; note, by the way, that besides start() it also exposes a stop() method.

frameworks\av\services\audioflinger\Tracks.cpp

status_t AudioFlinger::RecordHandle::start(int /*AudioSystem::sync_event_t*/ event,
        int triggerSession) {
    return mRecordTrack->start((AudioSystem::sync_event_t)event, triggerSession);
}

In AudioFlinger.h we can see const sp<RecordThread::RecordTrack> mRecordTrack; it, too, is implemented in Tracks.cpp, so onward:

status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
                                                        int triggerSession)
{
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        RecordThread *recordThread = (RecordThread *)thread.get();
        return recordThread->start(this, event, triggerSession);
    } else {
        return BAD_VALUE;
    }
}

The thread here is the Thread object whose createRecordTrack_l() was called in AudioRecord.cpp::openRecord_l(). Digging one level deeper: thread->createRecordTrack_l() calls new RecordTrack(this, ...), and RecordTrack inherits from TrackBase, whose constructor runs TrackBase(ThreadBase *thread, ...) : RefBase(), mThread(thread), ... {} — this parent class is also implemented in Tracks.cpp. So mThread here is the RecordThread.
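A side note on the mThread.promote() call above: TrackBase holds the thread only as a weak pointer, and promote() is the standard RefBase idiom for temporarily upgrading it. A minimal sketch of the pattern (not the actual AOSP code):

wp<ThreadBase> mThread;                        // weak: does not keep the thread alive

void useThread() {
    sp<ThreadBase> thread = mThread.promote(); // non-NULL only if the object still exists
    if (thread != 0) {
        // safe to use: the sp<> holds a strong reference for the current scope
    }
}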

So we continue into RecordThread's start() method:

frameworks\av\services\audioflinger\Threads.cpp

status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
AudioSystem::sync_event_t event,
int triggerSession)
{
sp<ThreadBase> strongMe = this;
status_t status = NO_ERROR;

if (event == AudioSystem::SYNC_EVENT_NONE) {
recordTrack->clearSyncStartEvent();
} else if (event != AudioSystem::SYNC_EVENT_SAME) {
recordTrack->mSyncStartEvent = mAudioFlinger->createSyncEvent(event,
triggerSession,
recordTrack->sessionId(),
syncStartEventCallback,
recordTrack);
// Sync event can be cancelled by the trigger session if the track is not in a
// compatible state in which case we start record immediately
if (recordTrack->mSyncStartEvent->isCancelled()) {
recordTrack->clearSyncStartEvent();
} else {
// do not wait for the event for more than AudioSystem::kSyncRecordStartTimeOutMs
recordTrack->mFramesToDrop = -
((AudioSystem::kSyncRecordStartTimeOutMs * recordTrack->mSampleRate) / 1000);
}
}

{
// This section is a rendezvous between binder thread executing start() and RecordThread
AutoMutex lock(mLock);
if (mActiveTracks.indexOf(recordTrack) >= 0) {
if (recordTrack->mState == TrackBase::PAUSING) {
ALOGV("active record track PAUSING -> ACTIVE");
recordTrack->mState = TrackBase::ACTIVE;
} else {
ALOGV("active record track state %d", recordTrack->mState);
}
return status;
}

// TODO consider other ways of handling this, such as changing the state to STARTING and
// adding the track to mActiveTracks after returning from AudioSystem::startInput(),
// or using a separate command thread
recordTrack->mState = TrackBase::STARTING_1;
mActiveTracks.add(recordTrack);
mActiveTracksGen++;
status_t status = NO_ERROR;
if (recordTrack->isExternalTrack()) {
mLock.unlock();
status = AudioSystem::startInput(mId, (audio_session_t)recordTrack->sessionId());
mLock.lock();
// FIXME should verify that recordTrack is still in mActiveTracks
if (status != NO_ERROR) {//0
mActiveTracks.remove(recordTrack);
mActiveTracksGen++;
recordTrack->clearSyncStartEvent();
ALOGV("RecordThread::start error %d", status);
return status;
}
}
// Catch up with current buffer indices if thread is already running.
// This is what makes a new client discard all buffered data. If the track's mRsmpInFront
// was initialized to some value closer to the thread's mRsmpInFront, then the track could
// see previously buffered data before it called start(), but with greater risk of overrun. recordTrack->mRsmpInFront = mRsmpInRear;
recordTrack->mRsmpInUnrel = 0;
// FIXME why reset?
if (recordTrack->mResampler != NULL) {
recordTrack->mResampler->reset();
}
recordTrack->mState = TrackBase::STARTING_2;
// signal thread to start
mWaitWorkCV.broadcast();
if (mActiveTracks.indexOf(recordTrack) < 0) {
ALOGV("Record failed to start");
status = BAD_VALUE;
goto startError;
}
return status;
}

startError:
if (recordTrack->isExternalTrack()) {
AudioSystem::stopInput(mId, (audio_session_t)recordTrack->sessionId());
}
recordTrack->clearSyncStartEvent();
// FIXME I wonder why we do not reset the state here?
return status;
}

This function does the following:

1. Check the incoming event value; as AudioRecord.java shows, it is always SYNC_EVENT_NONE, so clearSyncStartEvent() is called;

2. Check whether the incoming recordTrack is already in the mActiveTracks collection. This is our first time through, so it is not there yet; if it were (i.e. recording had already been started for some reason), the code checks for the PAUSING state, updates it to ACTIVE, and returns directly;

3. Set recordTrack's state to STARTING_1 and add it to the mActiveTracks collection; an indexOf() lookup now would certainly find it;

4. Check whether recordTrack is an external track; isExternalTrack() is defined as follows:

bool isTimedTrack() const { return (mType == TYPE_TIMED); }
bool isOutputTrack() const { return (mType == TYPE_OUTPUT); }
bool isPatchTrack() const { return (mType == TYPE_PATCH); }
bool isExternalTrack() const { return !isOutputTrack() && !isPatchTrack(); }

Recall that when we created the RecordTrack, the mType passed in was TrackBase::TYPE_DEFAULT, so this recordTrack is an external track;

5. Since it is an external track, AudioSystem::startInput() is called to start capturing. The sessionId is the one that appeared in the previous article. As for mId: in AudioSystem::startInput() its type is audio_io_handle_t. In the previous article this io handle was obtained through AudioSystem::getInputForAttr(), after which checkRecordThread_l(input) returned a RecordThread object. Looking at the RecordThread class (class RecordThread : public ThreadBase) and then at its ThreadBase parent, whose constructor is implemented in Threads.cpp, we find that input is assigned to mId. In other words, the arguments passed to AudioSystem::startInput() are exactly the input stream we established earlier and the generated sessionId.

AudioFlinger::ThreadBase::ThreadBase(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
audio_devices_t outDevice, audio_devices_t inDevice, type_t type)
: Thread(false /*canCallJava*/),
mType(type),
mAudioFlinger(audioFlinger),
// mSampleRate, mFrameCount, mChannelMask, mChannelCount, mFrameSize, mFormat, mBufferSize
// are set by PlaybackThread::readOutputParameters_l() or
// RecordThread::readInputParameters_l()
//FIXME: mStandby should be true here. Is this some kind of hack?
mStandby(false), mOutDevice(outDevice), mInDevice(inDevice),
mAudioSource(AUDIO_SOURCE_DEFAULT), mId(id),
// mName will be set by concrete (non-virtual) subclass
mDeathRecipient(new PMDeathRecipient(this))
{
}

6. Catch up the track's mRsmpInFront and related buffer indices with mRsmpInRear (and reset the resampler if there is one); recording obviously hasn't started yet here, so mRsmpInRear is still 0;

7. Set recordTrack's state to STARTING_2, then call mWaitWorkCV.broadcast() to wake all waiting threads. Note — a small spoiler is unavoidable here: by this point, inside AudioSystem::startInput(), AudioFlinger::RecordThread is already up and running, so the broadcast actually has no effect on the RecordThread. Also note carefully that recordTrack->mState is updated to STARTING_2 here, while it was STARTING_1 when the track was added to mActiveTracks. This is an interesting detail (see the sketch after this list); let's bookmark it and reveal the answer when analyzing RecordThread;

8. Finally, check whether recordTrack is still in the mActiveTracks collection; if not, the start failed and we need stopInput() and friends;
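As a preview of the state hand-off flagged in step 7 (and confirmed later in RecordThread::threadLoop()), the track walks through these states:

// Record track state hand-off between the binder thread and RecordThread::threadLoop():
//   STARTING_1 : set in RecordThread::start(); track added to mActiveTracks while
//                AudioSystem::startInput() is still in flight -> threadLoop() just waits
//   STARTING_2 : set after startInput() succeeds; mWaitWorkCV.broadcast() is sent
//   ACTIVE     : threadLoop() sees STARTING_2, clears mStandby and starts reading data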

Next, we continue into AudioSystem::startInput():

frameworks\av\media\libmedia\AudioSystem.cpp

status_t AudioSystem::startInput(audio_io_handle_t input,
audio_session_t session)
{
const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
if (aps == 0) return PERMISSION_DENIED;
return aps->startInput(input, session);
}

This goes on to call AudioPolicyService's startInput():

frameworks\av\services\audiopolicy\AudioPolicyInterfaceImpl.cpp

status_t AudioPolicyService::startInput(audio_io_handle_t input,
audio_session_t session)
{
if (mAudioPolicyManager == NULL) {
return NO_INIT;
}
Mutex::Autolock _l(mLock);

return mAudioPolicyManager->startInput(input, session);
}

Which forwards once more:

frameworks\av\services\audiopolicy\AudioPolicyManager.cpp

status_t AudioPolicyManager::startInput(audio_io_handle_t input,
audio_session_t session)
{
ssize_t index = mInputs.indexOfKey(input);
if (index < 0) {
ALOGW("startInput() unknown input %d", input);
return BAD_VALUE;
}
sp<AudioInputDescriptor> inputDesc = mInputs.valueAt(index);

index = inputDesc->mSessions.indexOf(session);
if (index < 0) {
ALOGW("startInput() unknown session %d on input %d", session, input);
return BAD_VALUE;
}

// virtual input devices are compatible with other input devices
if (!isVirtualInputDevice(inputDesc->mDevice)) {
// for a non-virtual input device, check if there is another (non-virtual) active input
audio_io_handle_t activeInput = getActiveInput();
if (activeInput != 0 && activeInput != input) {
// If the already active input uses AUDIO_SOURCE_HOTWORD then it is closed,
// otherwise the active input continues and the new input cannot be started.
sp<AudioInputDescriptor> activeDesc = mInputs.valueFor(activeInput);
if (activeDesc->mInputSource == AUDIO_SOURCE_HOTWORD) {
ALOGW("startInput(%d) preempting low-priority input %d", input, activeInput);
stopInput(activeInput, activeDesc->mSessions.itemAt(0));
releaseInput(activeInput, activeDesc->mSessions.itemAt(0));
} else {
ALOGE("startInput(%d) failed: other input %d already started", input, activeInput);
return INVALID_OPERATION;
}
}
}

if (inputDesc->mRefCount == 0) {
if (activeInputsCount() == 0) {
SoundTrigger::setCaptureState(true);
}
setInputDevice(input, getNewInputDevice(input), true /* force */);

// automatically enable the remote submix output when input is started if not
// used by a policy mix of type MIX_TYPE_RECORDERS
// For remote submix (a virtual device), we open only one input per capture request.
if (audio_is_remote_submix_device(inputDesc->mDevice)) {
ALOGV("audio_is_remote_submix_device(inputDesc->mDevice)");
String8 address = String8("");
if (inputDesc->mPolicyMix == NULL) {
address = String8("0");
} else if (inputDesc->mPolicyMix->mMixType == MIX_TYPE_PLAYERS) {
address = inputDesc->mPolicyMix->mRegistrationId;
}
if (address != "") {
setDeviceConnectionStateInt(AUDIO_DEVICE_OUT_REMOTE_SUBMIX,
AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
address);
}
}
}

ALOGV("AudioPolicyManager::startInput() input source = %d", inputDesc->mInputSource);

inputDesc->mRefCount++;
return NO_ERROR;
}

This function does the following:

1. Locate the input in the mInputs collection and fetch its inputDesc;

2. Check whether the input device is a virtual device; if not, check whether another (non-virtual) input is already active. This is our first time here — there isn't one!

3. Since this is the first pass, SoundTrigger::setCaptureState(true) is called; that is related to voice recognition, so we won't go into it here;

4. Call setInputDevice(). The getNewInputDevice() helper returns the audio_devices_t device for this input; as in the previous article, it was obtained in AudioPolicyManager::getInputForAttr() via getDeviceAndMixForInputSource(), i.e. AUDIO_DEVICE_IN_BUILTIN_MIC, the built-in mic. At the end, the function also updates inputDesc->mDevice;

5. Check whether this is a remote_submix device and handle it accordingly;

6. Increment inputDesc's mRefCount;

Let's continue into setInputDevice():

status_t AudioPolicyManager::setInputDevice(audio_io_handle_t input,
audio_devices_t device,
bool force,
audio_patch_handle_t *patchHandle)
{
status_t status = NO_ERROR;

sp<AudioInputDescriptor> inputDesc = mInputs.valueFor(input);
if ((device != AUDIO_DEVICE_NONE) && ((device != inputDesc->mDevice) || force)) {
inputDesc->mDevice = device;

DeviceVector deviceList = mAvailableInputDevices.getDevicesFromType(device);
if (!deviceList.isEmpty()) {
struct audio_patch patch;
inputDesc->toAudioPortConfig(&patch.sinks[0]);
// AUDIO_SOURCE_HOTWORD is for internal use only:
// handled as AUDIO_SOURCE_VOICE_RECOGNITION by the audio HAL
if (patch.sinks[0].ext.mix.usecase.source == AUDIO_SOURCE_HOTWORD &&
!inputDesc->mIsSoundTrigger) {
patch.sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_VOICE_RECOGNITION;
}
patch.num_sinks = 1;
//only one input device for now
deviceList.itemAt(0)->toAudioPortConfig(&patch.sources[0]);
patch.num_sources = 1;
ssize_t index;
if (patchHandle && *patchHandle != AUDIO_PATCH_HANDLE_NONE) {
index = mAudioPatches.indexOfKey(*patchHandle);
} else {
index = mAudioPatches.indexOfKey(inputDesc->mPatchHandle);
}
sp< AudioPatch> patchDesc;
audio_patch_handle_t afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
if (index >= 0) {
patchDesc = mAudioPatches.valueAt(index);
afPatchHandle = patchDesc->mAfPatchHandle;
}

status_t status = mpClientInterface->createAudioPatch(&patch,
&afPatchHandle,
0);
if (status == NO_ERROR) {
if (index < 0) {
patchDesc = new AudioPatch((audio_patch_handle_t)nextUniqueId(),
&patch, mUidCached);
addAudioPatch(patchDesc->mHandle, patchDesc);
} else {
patchDesc->mPatch = patch;
}
patchDesc->mAfPatchHandle = afPatchHandle;
patchDesc->mUid = mUidCached;
if (patchHandle) {
*patchHandle = patchDesc->mHandle;
}
inputDesc->mPatchHandle = patchDesc->mHandle;
nextAudioPortGeneration();
mpClientInterface->onAudioPatchListUpdate();
}
}
}
return status;
}

This function does the following:

1. We already know device and inputDesc->mDevice are both AUDIO_DEVICE_IN_BUILTIN_MIC, but force is true;

2. Fetch all devices of this type from the mAvailableInputDevices collection; so far we have only added a single device to it;

3. Here let's look at struct audio_patch, defined in system\core\include\system\audio.h. The patch's sources and sinks get filled in here (see the sketch after this list). Note that mId (the audio_io_handle_t) is stored into the patch, along with the InputSource, sample_rate, channel_mask, format, hw_module and so on — almost everything goes in;

struct audio_patch {
audio_patch_handle_t id; /* patch unique ID */
unsigned int num_sources; /* number of sources in following array */
struct audio_port_config sources[AUDIO_PATCH_PORTS_MAX];
unsigned int num_sinks; /* number of sinks in following array */
struct audio_port_config sinks[AUDIO_PATCH_PORTS_MAX];
};

struct audio_port_config {
audio_port_handle_t id; /* port unique ID */
audio_port_role_t role; /* sink or source */
audio_port_type_t type; /* device, mix ... */
unsigned int config_mask; /* e.g AUDIO_PORT_CONFIG_ALL */
unsigned int sample_rate; /* sampling rate in Hz */
audio_channel_mask_t channel_mask; /* channel mask if applicable */
audio_format_t format; /* format if applicable */
struct audio_gain_config gain; /* gain to apply if applicable */
union {
struct audio_port_config_device_ext device; /* device specific info */
struct audio_port_config_mix_ext mix; /* mix specific info */
struct audio_port_config_session_ext session; /* session specific info */
} ext;
};
struct audio_port_config_device_ext {
audio_module_handle_t hw_module; /* module the device is attached to */
audio_devices_t type; /* device type (e.g AUDIO_DEVICE_OUT_SPEAKER) */
char address[AUDIO_DEVICE_MAX_ADDRESS_LEN]; /* device address. "" if N/A */
};
struct audio_port_config_mix_ext {
audio_module_handle_t hw_module; /* module the stream is attached to */
audio_io_handle_t handle; /* I/O handle of the input/output stream */
union {
//TODO: change use case for output streams: use strategy and mixer attributes
audio_stream_type_t stream;
audio_source_t source;
} usecase;
};

4. Call mpClientInterface->createAudioPatch() to create the audio path;

5. Update the patchDesc attributes;

6. If createAudioPatch() returned a status of NO_ERROR, call mpClientInterface->onAudioPatchListUpdate() to update the AudioPatch list;
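To make step 3 concrete, here is a hedged sketch of what the patch ends up holding in our built-in-mic case (not the actual AOSP code — the assignments are condensed from toAudioPortConfig(); module and input are assumed to come from the surrounding AudioPolicyManager state):

static void fill_mic_patch(struct audio_patch *patch,
                           audio_module_handle_t module,
                           audio_io_handle_t input)
{
    // source: the physical input device port (the built-in mic)
    patch->num_sources = 1;
    patch->sources[0].type = AUDIO_PORT_TYPE_DEVICE;
    patch->sources[0].ext.device.hw_module = module;
    patch->sources[0].ext.device.type = AUDIO_DEVICE_IN_BUILTIN_MIC;

    // sink: the mix port of the record stream, carrying the io handle and use case
    patch->num_sinks = 1;
    patch->sinks[0].type = AUDIO_PORT_TYPE_MIX;
    patch->sinks[0].ext.mix.hw_module = module;
    patch->sinks[0].ext.mix.handle = input;   // mId, i.e. the audio_io_handle_t
    patch->sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_MIC;
}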

We'll focus here on steps 4 and 6.

First, step 4 of AudioPolicyManager::setInputDevice() in AudioPolicyManager.cpp: creating the audio path.

frameworks\av\services\audiopolicy\AudioPolicyClientImpl.cpp

status_t AudioPolicyService::AudioPolicyClient::createAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle,
int delayMs)
{
return mAudioPolicyService->clientCreateAudioPatch(patch, handle, delayMs);
}

Going further down:

frameworks\av\services\audiopolicy\AudioPolicyService.cpp

status_t AudioPolicyService::clientCreateAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle,
int delayMs)
{
return mAudioCommandThread->createAudioPatchCommand(patch, handle, delayMs);
}

Still in the same file:

status_t AudioPolicyService::AudioCommandThread::createAudioPatchCommand(
const struct audio_patch *patch,
audio_patch_handle_t *handle,
int delayMs)
{
status_t status = NO_ERROR;

sp<AudioCommand> command = new AudioCommand();
command->mCommand = CREATE_AUDIO_PATCH;
CreateAudioPatchData *data = new CreateAudioPatchData();
data->mPatch = *patch;
data->mHandle = *handle;
command->mParam = data;
command->mWaitStatus = true;
ALOGV("AudioCommandThread() adding create patch delay %d", delayMs);
status = sendCommand(command, delayMs);
if (status == NO_ERROR) {
*handle = data->mHandle;
}
return status;
}

What follows just wraps the audio_patch into a command and appends it to the AudioCommands queue, so let's look directly at how threadLoop() processes it.
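Before reading the loop, note the rendezvous pattern sendCommand() implements: the calling binder thread queues the command, wakes the AudioCommandThread, and then blocks on the command's own condition variable until the worker has executed it. A condensed sketch of the idea (not the actual sendCommand(), which also does time-sorted insertion and wake-lock handling):

status_t sendCommandSketch(const sp<AudioCommand>& command)
{
    {
        Mutex::Autolock _l(mLock);
        mAudioCommands.add(command);     // enqueue for threadLoop()
        mWaitWorkCV.signal();            // wake the AudioCommandThread
    }
    Mutex::Autolock _l(command->mLock);
    while (command->mWaitStatus) {
        command->mCond.wait(command->mLock);  // threadLoop() signals after executing
    }
    return command->mStatus;             // e.g. the result of af->createAudioPatch()
}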

bool AudioPolicyService::AudioCommandThread::threadLoop()
{
nsecs_t waitTime = INT64_MAX;

mLock.lock();
while (!exitPending())
{
sp<AudioPolicyService> svc;
while (!mAudioCommands.isEmpty() && !exitPending()) {
nsecs_t curTime = systemTime();
// commands are sorted by increasing time stamp: execute them from index 0 and up
if (mAudioCommands[0]->mTime <= curTime) {
sp<AudioCommand> command = mAudioCommands[0];
mAudioCommands.removeAt(0);
mLastCommand = command;

switch (command->mCommand) {
case START_TONE: {
mLock.unlock();
ToneData *data = (ToneData *)command->mParam.get();
ALOGV("AudioCommandThread() processing start tone %d on stream %d",
data->mType, data->mStream);
delete mpToneGenerator;
mpToneGenerator = new ToneGenerator(data->mStream, 1.0);
mpToneGenerator->startTone(data->mType);
mLock.lock();
}break;
case STOP_TONE: {
mLock.unlock();
ALOGV("AudioCommandThread() processing stop tone");
if (mpToneGenerator != NULL) {
mpToneGenerator->stopTone();
delete mpToneGenerator;
mpToneGenerator = NULL;
}
mLock.lock();
}break;
case SET_VOLUME: {
VolumeData *data = (VolumeData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set volume stream %d, \
volume %f, output %d", data->mStream, data->mVolume, data->mIO);
command->mStatus = AudioSystem::setStreamVolume(data->mStream,
data->mVolume,
data->mIO);
}break;
case SET_PARAMETERS: {
ParametersData *data = (ParametersData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set parameters string %s, io %d",
data->mKeyValuePairs.string(), data->mIO);
command->mStatus = AudioSystem::setParameters(data->mIO, data->mKeyValuePairs);
}break;
case SET_VOICE_VOLUME: {
VoiceVolumeData *data = (VoiceVolumeData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set voice volume volume %f",
data->mVolume);
command->mStatus = AudioSystem::setVoiceVolume(data->mVolume);
}break;
case STOP_OUTPUT: {
StopOutputData *data = (StopOutputData *)command->mParam.get();
ALOGV("AudioCommandThread() processing stop output %d",
data->mIO);
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doStopOutput(data->mIO, data->mStream, data->mSession);
mLock.lock();
}break;
case RELEASE_OUTPUT: {
ReleaseOutputData *data = (ReleaseOutputData *)command->mParam.get();
ALOGV("AudioCommandThread() processing release output %d",
data->mIO);
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doReleaseOutput(data->mIO, data->mStream, data->mSession);
mLock.lock();
}break;
case CREATE_AUDIO_PATCH: {
CreateAudioPatchData *data = (CreateAudioPatchData *)command->mParam.get();
ALOGV("AudioCommandThread() processing create audio patch");
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
command->mStatus = PERMISSION_DENIED;
} else {
command->mStatus = af->createAudioPatch(&data->mPatch, &data->mHandle);
}
} break;
case RELEASE_AUDIO_PATCH: {
ReleaseAudioPatchData *data = (ReleaseAudioPatchData *)command->mParam.get();
ALOGV("AudioCommandThread() processing release audio patch");
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
command->mStatus = PERMISSION_DENIED;
} else {
command->mStatus = af->releaseAudioPatch(data->mHandle);
}
} break;
case UPDATE_AUDIOPORT_LIST: {
ALOGV("AudioCommandThread() processing update audio port list");
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doOnAudioPortListUpdate();
mLock.lock();
}break;
case UPDATE_AUDIOPATCH_LIST: {
ALOGV("AudioCommandThread() processing update audio patch list");
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doOnAudioPatchListUpdate();
mLock.lock();
}break;
case SET_AUDIOPORT_CONFIG: {
SetAudioPortConfigData *data = (SetAudioPortConfigData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set port config");
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
command->mStatus = PERMISSION_DENIED;
} else {
command->mStatus = af->setAudioPortConfig(&data->mConfig);
}
} break;
default:
ALOGW("AudioCommandThread() unknown command %d", command->mCommand);
}
{
Mutex::Autolock _l(command->mLock);
if (command->mWaitStatus) {
command->mWaitStatus = false;
command->mCond.signal();
}
}
waitTime = INT64_MAX;
} else {
waitTime = mAudioCommands[0]->mTime - curTime;
break;
}
}
// release mLock before releasing strong reference on the service as
// AudioPolicyService destructor calls AudioCommandThread::exit() which acquires mLock.
mLock.unlock();
svc.clear();
mLock.lock();
if (!exitPending() && mAudioCommands.isEmpty()) {
// release delayed commands wake lock
release_wake_lock(mName.string());
ALOGV("AudioCommandThread() going to sleep");
mWaitWorkCV.waitRelative(mLock, waitTime);
ALOGV("AudioCommandThread() waking up");
}
}
// release delayed commands wake lock before quitting
if (!mAudioCommands.isEmpty()) {
release_wake_lock(mName.string());
}
mLock.unlock();
return false;
}

Here we go straight to the CREATE_AUDIO_PATCH branch: it calls af->createAudioPatch() on the AudioFlinger side. Note that the same loop also contains the UPDATE_AUDIOPATCH_LIST branch used later.

frameworks\av\services\audioflinger\PatchPanel.cpp

status_t AudioFlinger::createAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle)
{
Mutex::Autolock _l(mLock);
if (mPatchPanel != 0) {
return mPatchPanel->createAudioPatch(patch, handle);
}
return NO_INIT;
}

Onward we go (sigh — honestly I hardly want to keep going at this point, it just winds around and around...)

status_t AudioFlinger::PatchPanel::createAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle)
{
ALOGV("createAudioPatch() num_sources %d num_sinks %d handle %d",
patch->num_sources, patch->num_sinks, *handle);
status_t status = NO_ERROR;
audio_patch_handle_t halHandle = AUDIO_PATCH_HANDLE_NONE;
sp<AudioFlinger> audioflinger = mAudioFlinger.promote();
if (audioflinger == 0) {
return NO_INIT;
}

if (handle == NULL || patch == NULL) {
return BAD_VALUE;
}
if (patch->num_sources == 0 || patch->num_sources > AUDIO_PATCH_PORTS_MAX ||
patch->num_sinks == 0 || patch->num_sinks > AUDIO_PATCH_PORTS_MAX) {
return BAD_VALUE;
}
// limit number of sources to 1 for now or 2 sources for special cross hw module case.
// only the audio policy manager can request a patch creation with 2 sources.
if (patch->num_sources > 2) {
return INVALID_OPERATION;
}

if (*handle != AUDIO_PATCH_HANDLE_NONE) {
for (size_t index = 0; *handle != 0 && index < mPatches.size(); index++) {
if (*handle == mPatches[index]->mHandle) {
ALOGV("createAudioPatch() removing patch handle %d", *handle);
halHandle = mPatches[index]->mHalHandle;
Patch *removedPatch = mPatches[index];
mPatches.removeAt(index);
delete removedPatch;
break;
}
}
}

Patch *newPatch = new Patch(patch);

switch (patch->sources[0].type) {
case AUDIO_PORT_TYPE_DEVICE: {
audio_module_handle_t srcModule = patch->sources[0].ext.device.hw_module;
ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(srcModule);
if (index < 0) {
ALOGW("createAudioPatch() bad src hw module %d", srcModule);
status = BAD_VALUE;
goto exit;
}
AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
for (unsigned int i = 0; i < patch->num_sinks; i++) {
// support only one sink if connection to a mix or across HW modules
if ((patch->sinks[i].type == AUDIO_PORT_TYPE_MIX ||
patch->sinks[i].ext.mix.hw_module != srcModule) &&
patch->num_sinks > 1) {
status = INVALID_OPERATION;
goto exit;
}
// reject connection to different sink types
if (patch->sinks[i].type != patch->sinks[0].type) {
ALOGW("createAudioPatch() different sink types in same patch not supported");
status = BAD_VALUE;
goto exit;
}
// limit to connections between devices and input streams for HAL before 3.0
if (patch->sinks[i].ext.mix.hw_module == srcModule &&
(audioHwDevice->version() < AUDIO_DEVICE_API_VERSION_3_0) &&
(patch->sinks[i].type != AUDIO_PORT_TYPE_MIX)) {
ALOGW("createAudioPatch() invalid sink type %d for device source",
patch->sinks[i].type);
status = BAD_VALUE;
goto exit;
}
}

if (patch->sinks[0].ext.device.hw_module != srcModule) {
// limit to device to device connection if not on same hw module
if (patch->sinks[0].type != AUDIO_PORT_TYPE_DEVICE) {
ALOGW("createAudioPatch() invalid sink type for cross hw module");
status = INVALID_OPERATION;
goto exit;
}
// special case num sources == 2 -=> reuse an exiting output mix to connect to the
// sink
if (patch->num_sources == 2) {
if (patch->sources[1].type != AUDIO_PORT_TYPE_MIX ||
patch->sinks[0].ext.device.hw_module !=
patch->sources[1].ext.mix.hw_module) {
ALOGW("createAudioPatch() invalid source combination");
status = INVALID_OPERATION;
goto exit;
}

sp<ThreadBase> thread =
audioflinger->checkPlaybackThread_l(patch->sources[1].ext.mix.handle);
newPatch->mPlaybackThread = (MixerThread *)thread.get();
if (thread == 0) {
ALOGW("createAudioPatch() cannot get playback thread");
status = INVALID_OPERATION;
goto exit;
}
} else {
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
audio_devices_t device = patch->sinks[0].ext.device.type;
String8 address = String8(patch->sinks[0].ext.device.address);
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
newPatch->mPlaybackThread = audioflinger->openOutput_l(
patch->sinks[0].ext.device.hw_module,
&output,
&config,
device,
address,
AUDIO_OUTPUT_FLAG_NONE);
ALOGV("audioflinger->openOutput_l() returned %p",
newPatch->mPlaybackThread.get());
if (newPatch->mPlaybackThread == 0) {
status = NO_MEMORY;
goto exit;
}
}
uint32_t channelCount = newPatch->mPlaybackThread->channelCount();
audio_devices_t device = patch->sources[0].ext.device.type;
String8 address = String8(patch->sources[0].ext.device.address);
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
audio_channel_mask_t inChannelMask = audio_channel_in_mask_from_count(channelCount);
config.sample_rate = newPatch->mPlaybackThread->sampleRate();
config.channel_mask = inChannelMask;
config.format = newPatch->mPlaybackThread->format();
audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
newPatch->mRecordThread = audioflinger->openInput_l(srcModule,
&input,
&config,
device,
address,
AUDIO_SOURCE_MIC,
AUDIO_INPUT_FLAG_NONE);
ALOGV("audioflinger->openInput_l() returned %p inChannelMask %08x",
newPatch->mRecordThread.get(), inChannelMask);
if (newPatch->mRecordThread == 0) {
status = NO_MEMORY;
goto exit;
}
status = createPatchConnections(newPatch, patch);
if (status != NO_ERROR) {
goto exit;
}
} else {
if (audioHwDevice->version() >= AUDIO_DEVICE_API_VERSION_3_0) {
if (patch->sinks[0].type == AUDIO_PORT_TYPE_MIX) {
sp<ThreadBase> thread = audioflinger->checkRecordThread_l(
patch->sinks[0].ext.mix.handle);
if (thread == 0) {
ALOGW("createAudioPatch() bad capture I/O handle %d",
patch->sinks[0].ext.mix.handle);
status = BAD_VALUE;
goto exit;
}
status = thread->sendCreateAudioPatchConfigEvent(patch, &halHandle);
} else {
audio_hw_device_t *hwDevice = audioHwDevice->hwDevice();
status = hwDevice->create_audio_patch(hwDevice,
patch->num_sources,
patch->sources,
patch->num_sinks,
patch->sinks,
&halHandle);
}
} else {
sp<ThreadBase> thread = audioflinger->checkRecordThread_l(
patch->sinks[0].ext.mix.handle);
if (thread == 0) {
ALOGW("createAudioPatch() bad capture I/O handle %d",
patch->sinks[0].ext.mix.handle);
status = BAD_VALUE;
goto exit;
}
char *address;
if (strcmp(patch->sources[0].ext.device.address, "") != 0) {
address = audio_device_address_to_parameter(
patch->sources[0].ext.device.type,
patch->sources[0].ext.device.address);
} else {
address = (char *)calloc(1, 1);
}
AudioParameter param = AudioParameter(String8(address));
free(address);
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING),
(int)patch->sources[0].ext.device.type);
param.addInt(String8(AUDIO_PARAMETER_STREAM_INPUT_SOURCE),
(int)patch->sinks[0].ext.mix.usecase.source);
ALOGV("createAudioPatch() AUDIO_PORT_TYPE_DEVICE setParameters %s",
param.toString().string());
status = thread->setParameters(param.toString());
}
}
} break;
case AUDIO_PORT_TYPE_MIX: {
audio_module_handle_t srcModule = patch->sources[0].ext.mix.hw_module;
ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(srcModule);
if (index < 0) {
ALOGW("createAudioPatch() bad src hw module %d", srcModule);
status = BAD_VALUE;
goto exit;
}
// limit to connections between devices and output streams
for (unsigned int i = 0; i < patch->num_sinks; i++) {
if (patch->sinks[i].type != AUDIO_PORT_TYPE_DEVICE) {
ALOGW("createAudioPatch() invalid sink type %d for mix source",
patch->sinks[i].type);
status = BAD_VALUE;
goto exit;
}
// limit to connections between sinks and sources on same HW module
if (patch->sinks[i].ext.device.hw_module != srcModule) {
status = BAD_VALUE;
goto exit;
}
}
AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
sp<ThreadBase> thread =
audioflinger->checkPlaybackThread_l(patch->sources[0].ext.mix.handle);
if (thread == 0) {
ALOGW("createAudioPatch() bad playback I/O handle %d",
patch->sources[0].ext.mix.handle);
status = BAD_VALUE;
goto exit;
}
if (audioHwDevice->version() >= AUDIO_DEVICE_API_VERSION_3_0) {
status = thread->sendCreateAudioPatchConfigEvent(patch, &halHandle);
} else {
audio_devices_t type = AUDIO_DEVICE_NONE;
for (unsigned int i = 0; i < patch->num_sinks; i++) {
type |= patch->sinks[i].ext.device.type;
}
char *address;
if (strcmp(patch->sinks[0].ext.device.address, "") != 0) {
//FIXME: we only support address on first sink with HAL version < 3.0
address = audio_device_address_to_parameter(
patch->sinks[0].ext.device.type,
patch->sinks[0].ext.device.address);
} else {
address = (char *)calloc(1, 1);
}
AudioParameter param = AudioParameter(String8(address));
free(address);
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)type);
status = thread->setParameters(param.toString());
}
} break;
default:
status = BAD_VALUE;
goto exit;
}
exit:
ALOGV("createAudioPatch() status %d", status);
if (status == NO_ERROR) {
*handle = audioflinger->nextUniqueId();
newPatch->mHandle = *handle;
newPatch->mHalHandle = halHandle;
mPatches.add(newPatch);
ALOGV("createAudioPatch() added new patch handle %d halHandle %d", *handle, halHandle);
} else {
clearPatchConnections(newPatch);
delete newPatch;
}
return status;
}

This function does the following:

1. Coming from AudioPolicyManager::setInputDevice(), num_sources and num_sinks are both 1;

2. If the incoming handle is not AUDIO_PATCH_HANDLE_NONE, the matching patch is looked up in the mPatches collection and removed; in our case the handle is AUDIO_PATCH_HANDLE_NONE;

3. Our source type is AUDIO_PORT_TYPE_DEVICE: the audio_module_handle_t is taken from the patch and used to fetch the AudioHwDevice on the AF side. In the for loop that follows, with the parameters we set up earlier, none of the if branches is entered;

4. Check whether the source's audio_module_handle_t matches the sink's — of course it does;

5. Next the HAL version is checked; in hardware\aw\audio\tulip\audio_hw.c we have adev->hw_device.common.version = AUDIO_DEVICE_API_VERSION_2_0;

6. Call checkRecordThread_l() on the AF side, i.e. use the audio_io_handle_t to fetch the RecordThread from mRecordThreads;

7. Create an AudioParameter object from the address, and put the routing device type and the input source into it;

8. Call thread->setParameters() to pass the AudioParameter across.

Let's keep following step 8: thread->setParameters()

frameworks\av\services\audioflinger\Threads.cpp

status_t AudioFlinger::ThreadBase::setParameters(const String8& keyValuePairs)
{
status_t status;
Mutex::Autolock _l(mLock);

return sendSetParameterConfigEvent_l(keyValuePairs);
}

Emmm, it feels like another big detour is coming.

status_t AudioFlinger::ThreadBase::sendSetParameterConfigEvent_l(const String8& keyValuePair)
{
sp<ConfigEvent> configEvent = (ConfigEvent *)new SetParameterConfigEvent(keyValuePair);
return sendConfigEvent_l(configEvent);
}

The AudioParameter object is wrapped into a ConfigEvent object, and the call continues:

status_t AudioFlinger::ThreadBase::sendConfigEvent_l(sp<ConfigEvent>& event)
{
status_t status = NO_ERROR;

mConfigEvents.add(event);
mWaitWorkCV.signal();
mLock.unlock();
{
Mutex::Autolock _l(event->mLock);
while (event->mWaitStatus) {
if (event->mCond.waitRelative(event->mLock, kConfigEventTimeoutNs) != NO_ERROR) {
event->mStatus = TIMED_OUT;
event->mWaitStatus = false;
}
}
status = event->mStatus;
}
mLock.lock();
return status;
}

The ConfigEvent is added to mConfigEvents, then mWaitWorkCV.signal() notifies the RecordThread that it can get going. When the thread, blocked in mWaitWorkCV.wait(mLock), receives the signal, it returns to the reacquire_wakelock label; continuing down from there it calls processConfigEvents_l(), which is what handles ConfigEvent events:

void AudioFlinger::ThreadBase::processConfigEvents_l()
{
bool configChanged = false;

while (!mConfigEvents.isEmpty()) {
ALOGV("processConfigEvents_l() remaining events %d", mConfigEvents.size());
sp<ConfigEvent> event = mConfigEvents[0];
mConfigEvents.removeAt(0);
switch (event->mType) {
case CFG_EVENT_PRIO: {
PrioConfigEventData *data = (PrioConfigEventData *)event->mData.get();
// FIXME Need to understand why this has to be done asynchronously
int err = requestPriority(data->mPid, data->mTid, data->mPrio,
true /*asynchronous*/);
if (err != 0) {
ALOGW("Policy SCHED_FIFO priority %d is unavailable for pid %d tid %d; error %d",
data->mPrio, data->mPid, data->mTid, err);
}
} break;
case CFG_EVENT_IO: {
IoConfigEventData *data = (IoConfigEventData *)event->mData.get();
audioConfigChanged(data->mEvent, data->mParam);
} break;
case CFG_EVENT_SET_PARAMETER: {
SetParameterConfigEventData *data = (SetParameterConfigEventData *)event->mData.get();
if (checkForNewParameter_l(data->mKeyValuePairs, event->mStatus)) {
configChanged = true;
}
} break;
case CFG_EVENT_CREATE_AUDIO_PATCH: {
CreateAudioPatchConfigEventData *data =
(CreateAudioPatchConfigEventData *)event->mData.get();
event->mStatus = createAudioPatch_l(&data->mPatch, &data->mHandle);
} break;
case CFG_EVENT_RELEASE_AUDIO_PATCH: {
ReleaseAudioPatchConfigEventData *data =
(ReleaseAudioPatchConfigEventData *)event->mData.get();
event->mStatus = releaseAudioPatch_l(data->mHandle);
} break;
default:
ALOG_ASSERT(false, "processConfigEvents_l() unknown event type %d", event->mType);
break;
}
{
Mutex::Autolock _l(event->mLock);
if (event->mWaitStatus) {
event->mWaitStatus = false;
event->mCond.signal();
}
}
ALOGV_IF(mConfigEvents.isEmpty(), "processConfigEvents_l() DONE thread %p", this);
}

if (configChanged) {
cacheParameters_l();
}
}

So this loop is indeed sitting there waiting to process events from mConfigEvents. Our event->mType is CFG_EVENT_SET_PARAMETER, so checkForNewParameter_l() is called next — the RecordThread override, naturally. That... truly was one giant detour.

bool AudioFlinger::RecordThread::checkForNewParameter_l(const String8& keyValuePair,
status_t& status)
{
bool reconfig = false;

status = NO_ERROR;

audio_format_t reqFormat = mFormat;
uint32_t samplingRate = mSampleRate;
audio_channel_mask_t channelMask = audio_channel_in_mask_from_count(mChannelCount);

AudioParameter param = AudioParameter(keyValuePair);
int value;

if (param.getInt(String8(AudioParameter::keySamplingRate), value) == NO_ERROR) {
samplingRate = value;
reconfig = true;
}
if (param.getInt(String8(AudioParameter::keyFormat), value) == NO_ERROR) {
if ((audio_format_t) value != AUDIO_FORMAT_PCM_16_BIT) {
status = BAD_VALUE;
} else {
reqFormat = (audio_format_t) value;
reconfig = true;
}
}
if (param.getInt(String8(AudioParameter::keyChannels), value) == NO_ERROR) {
audio_channel_mask_t mask = (audio_channel_mask_t) value;
if (mask != AUDIO_CHANNEL_IN_MONO && mask != AUDIO_CHANNEL_IN_STEREO) {
status = BAD_VALUE;
} else {
channelMask = mask;
reconfig = true;
}
}
if (param.getInt(String8(AudioParameter::keyFrameCount), value) == NO_ERROR) {
// do not accept frame count changes if tracks are open as the track buffer
// size depends on frame count and correct behavior would not be guaranteed
// if frame count is changed after track creation
if (mActiveTracks.size() > 0) {
status = INVALID_OPERATION;
} else {
reconfig = true;
}
}
if (param.getInt(String8(AudioParameter::keyRouting), value) == NO_ERROR) {
// forward device change to effects that have requested to be
// aware of attached audio device.
for (size_t i = 0; i < mEffectChains.size(); i++) {
mEffectChains[i]->setDevice_l(value);
}

// store input device and output device but do not forward output device to audio HAL.
// Note that status is ignored by the caller for output device
// (see AudioFlinger::setParameters()
if (audio_is_output_devices(value)) {
mOutDevice = value;
status = BAD_VALUE;
} else {
mInDevice = value;
// disable AEC and NS if the device is a BT SCO headset supporting those
// pre processings
if (mTracks.size() > 0) {
bool suspend = audio_is_bluetooth_sco_device(mInDevice) &&
mAudioFlinger->btNrecIsOff();
for (size_t i = 0; i < mTracks.size(); i++) {
sp<RecordTrack> track = mTracks[i];
setEffectSuspended_l(FX_IID_AEC, suspend, track->sessionId());
setEffectSuspended_l(FX_IID_NS, suspend, track->sessionId());
}
}
}
}
if (param.getInt(String8(AudioParameter::keyInputSource), value) == NO_ERROR &&
mAudioSource != (audio_source_t)value) {
// forward device change to effects that have requested to be
// aware of attached audio device.
for (size_t i = 0; i < mEffectChains.size(); i++) {
mEffectChains[i]->setAudioSource_l((audio_source_t)value);
}
mAudioSource = (audio_source_t)value;
}

if (status == NO_ERROR) {
status = mInput->stream->common.set_parameters(&mInput->stream->common,
keyValuePair.string());
if (status == INVALID_OPERATION) {
inputStandBy();
status = mInput->stream->common.set_parameters(&mInput->stream->common,
keyValuePair.string());
}
if (reconfig) {
if (status == BAD_VALUE &&
reqFormat == mInput->stream->common.get_format(&mInput->stream->common) &&
reqFormat == AUDIO_FORMAT_PCM_16_BIT &&
(mInput->stream->common.get_sample_rate(&mInput->stream->common)
<= (2 * samplingRate)) &&
audio_channel_count_from_in_mask(
mInput->stream->common.get_channels(&mInput->stream->common)) <= FCC_2 &&
(channelMask == AUDIO_CHANNEL_IN_MONO ||
channelMask == AUDIO_CHANNEL_IN_STEREO)) {
status = NO_ERROR;
}
if (status == NO_ERROR) {
readInputParameters_l();
sendIoConfigEvent_l(AudioSystem::INPUT_CONFIG_CHANGED);
}
}
}

return reconfig;
}

Here the AudioParameter values are read back out. As we saw earlier, only Routing and InputSource were put in, so patch->sources[0].ext.device.type goes into mInDevice and patch->sinks[0].ext.mix.usecase.source goes into mAudioSource; finally the HAL's set_parameters() is called to push mInDevice and mAudioSource down. Clearly reconfig stays false the whole way, i.e. none of the other parameters changed.

hardware\aw\audio\tulip\audio_hw.c

static int in_set_parameters(struct audio_stream *stream, const char *kvpairs)
{
    struct sunxi_stream_in *in = (struct sunxi_stream_in *)stream;
    struct sunxi_audio_device *adev = in->dev;
    struct str_parms *parms;
    char *str;
    char value[128];
    int ret, val = 0;
    bool do_standby = false;

    ALOGV("in_set_parameters: %s", kvpairs);

    parms = str_parms_create_str(kvpairs);

    ret = str_parms_get_str(parms, AUDIO_PARAMETER_STREAM_INPUT_SOURCE, value, sizeof(value));

    pthread_mutex_lock(&adev->lock);
    pthread_mutex_lock(&in->lock);
    if (ret >= 0) {
        val = atoi(value);
        /* no audio source uses val == 0 */
        if ((in->source != val) && (val != 0)) {
            in->source = val;
            do_standby = true;
        }
    }

    ret = str_parms_get_str(parms, AUDIO_PARAMETER_STREAM_ROUTING, value, sizeof(value));
    if (ret >= 0) {
        val = atoi(value) & ~AUDIO_DEVICE_BIT_IN;
        if ((adev->mode != AUDIO_MODE_IN_CALL) && (in->device != val) && (val != 0)) {
            in->device = val;
            do_standby = true;
        } else if ((adev->mode == AUDIO_MODE_IN_CALL) && (in->source != val) && (val != 0)) {
            in->device = val;
            select_device(adev);
        }
    }

    if (do_standby)
        do_input_standby(in);
    pthread_mutex_unlock(&in->lock);
    pthread_mutex_unlock(&adev->lock);

    str_parms_destroy(parms);
    return ret;
}

1. Fetch the INPUT_SOURCE property and update in->source, i.e. AUDIO_SOURCE_MIC;

2. adev_open() set adev->mode to AUDIO_MODE_NORMAL and adev->in_device to AUDIO_DEVICE_IN_BUILTIN_MIC & ~AUDIO_DEVICE_BIT_IN, so here only the input stream's in->device is updated and no device re-selection is needed; the device is the built-in mic, AUDIO_DEVICE_IN_BUILTIN_MIC;

At this point, the audio input path is complete.

Now for step 6 of AudioPolicyManager::setInputDevice() in AudioPolicyManager.cpp:

The return value of mpClientInterface->createAudioPatch() also travels a long way: tracing it down layer by layer, it is ultimately assigned in in_set_parameters() in audio_hw.c, where ret is the result of calling str_parms_get_str().

That function is implemented in system\core\libcutils\str_parms.c:

int str_parms_get_str(struct str_parms *str_parms, const char *key, char *val,
                      int len)
{
    char *value;

    value = hashmapGet(str_parms->map, (void *)key);
    if (value)
        return strlcpy(val, value, len);

    return -ENOENT;
}

This function simply extracts the value for key from str_parms' hashmap and returns it. The AUDIO_PARAMETER_STREAM_ROUTING key had the device type put into it before thread->setParameters(param.toString()) was called, i.e.:

param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)patch->sources[0].ext.device.type); — so the status here is not NO_ERROR, and consequently the AudioPatch list does not get updated.
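To make the round trip concrete, here is a small sketch of both ends (hypothetical standalone code; AUDIO_PARAMETER_STREAM_ROUTING and AUDIO_PARAMETER_STREAM_INPUT_SOURCE are the real keys used above):

// framework side: what createAudioPatch() put into the key/value string
AudioParameter param;
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)AUDIO_DEVICE_IN_BUILTIN_MIC);
param.addInt(String8(AUDIO_PARAMETER_STREAM_INPUT_SOURCE), (int)AUDIO_SOURCE_MIC);
String8 kvpairs = param.toString();

// HAL side: what in_set_parameters() does with it
struct str_parms *parms = str_parms_create_str(kvpairs.string());
char value[32];
int ret = str_parms_get_str(parms, AUDIO_PARAMETER_STREAM_ROUTING, value, sizeof(value));
// when the key exists, ret is strlcpy()'s return value (the string length),
// hence > 0 -- which is why the status propagated back up is not NO_ERROR
str_parms_destroy(parms);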

A little earlier we noticed that the RecordThread on the AF side had already started; keep that in mind — we'll park it here and analyze it a bit later.

Now let's continue with step 6 of AudioRecord.cpp::start(): the AudioRecordThread's resume() function.

frameworks\av\media\libmedia\AudioRecord.cpp

void AudioRecord::AudioRecordThread::resume()
{
AutoMutex _l(mMyLock);
mIgnoreNextPausedInt = true;
if (mPaused || mPausedInt) {
mPaused = false;
mPausedInt = false;
mMyCond.signal();
}
}

This marks mIgnoreNextPausedInt as true, leaves mPaused and mPausedInt false, and calls mMyCond.signal() to notify the AudioRecordThread:

bool AudioRecord::AudioRecordThread::threadLoop()
{
{
AutoMutex _l(mMyLock);
if (mPaused) {
mMyCond.wait(mMyLock);
// caller will check for exitPending()
return true;
}
if (mIgnoreNextPausedInt) {
mIgnoreNextPausedInt = false;
mPausedInt = false;
}
if (mPausedInt) {
if (mPausedNs > 0) {
(void) mMyCond.waitRelative(mMyLock, mPausedNs);
} else {
mMyCond.wait(mMyLock);
}
mPausedInt = false;
return true;
}
}
nsecs_t ns = mReceiver.processAudioBuffer();
switch (ns) {
case 0:
return true;
case NS_INACTIVE:
pauseInternal();
return true;
case NS_NEVER:
return false;
case NS_WHENEVER:
// FIXME increase poll interval, or make event-driven
ns = 1000000000LL;
// fall through
default:
LOG_ALWAYS_FATAL_IF(ns < 0, "processAudioBuffer() returned %" PRId64, ns);
pauseInternal(ns);
return true;
}
}

In the AudioRecordThread, the loop waits in mMyCond.wait(mMyLock) for the signal(). We know mPaused is false, mIgnoreNextPausedInt will become false again, and mPausedInt is false as well, so on the next iteration processAudioBuffer() gets called. It returns NS_WHENEVER, so the thread is put to sleep for 1000000000LL ns, i.e. 1 s, and keeps looping like this until the app stops recording.

Next, let's dig into processAudioBuffer():

nsecs_t AudioRecord::processAudioBuffer()
{
mLock.lock();
if (mAwaitBoost) {
mAwaitBoost = false;
mLock.unlock();
static const int32_t kMaxTries = 5;
int32_t tryCounter = kMaxTries;
uint32_t pollUs = 10000;
do {
int policy = sched_getscheduler(0);
if (policy == SCHED_FIFO || policy == SCHED_RR) {
break;
}
usleep(pollUs);
pollUs <<= 1;
} while (tryCounter-- > 0);
if (tryCounter < 0) {
ALOGE("did not receive expected priority boost on time");
}
// Run again immediately
return 0;
}

// Can only reference mCblk while locked
int32_t flags = android_atomic_and(~CBLK_OVERRUN, &mCblk->mFlags);

// Check for track invalidation
if (flags & CBLK_INVALID) {
(void) restoreRecord_l("processAudioBuffer");
mLock.unlock();
// Run again immediately, but with a new IAudioRecord
return 0;
}

bool active = mActive;

// Manage overrun callback, must be done under lock to avoid race with releaseBuffer()
bool newOverrun = false;
if (flags & CBLK_OVERRUN) {
if (!mInOverrun) {
mInOverrun = true;
newOverrun = true;
}
}

// Get current position of server
size_t position = mProxy->getPosition();

// Manage marker callback
bool markerReached = false;
size_t markerPosition = mMarkerPosition;
// FIXME fails for wraparound, need 64 bits
if (!mMarkerReached && (markerPosition > 0) && (position >= markerPosition)) {
mMarkerReached = markerReached = true;
}

// Determine the number of new position callback(s) that will be needed, while locked
size_t newPosCount = 0;
size_t newPosition = mNewPosition;
uint32_t updatePeriod = mUpdatePeriod;
// FIXME fails for wraparound, need 64 bits
if (updatePeriod > 0 && position >= newPosition) {
newPosCount = ((position - newPosition) / updatePeriod) + 1;
mNewPosition += updatePeriod * newPosCount;
}

// Cache other fields that will be needed soon
uint32_t notificationFrames = mNotificationFramesAct;
if (mRefreshRemaining) {
mRefreshRemaining = false;
mRemainingFrames = notificationFrames;
mRetryOnPartialBuffer = false;
}
size_t misalignment = mProxy->getMisalignment();
uint32_t sequence = mSequence;

// These fields don't need to be cached, because they are assigned only by set():
// mTransfer, mCbf, mUserData, mSampleRate, mFrameSize

mLock.unlock();

// perform callbacks while unlocked
if (newOverrun) {
mCbf(EVENT_OVERRUN, mUserData, NULL);
}
if (markerReached) {
mCbf(EVENT_MARKER, mUserData, &markerPosition);
}
while (newPosCount > 0) {
size_t temp = newPosition;
mCbf(EVENT_NEW_POS, mUserData, &temp);
newPosition += updatePeriod;
newPosCount--;
}
if (mObservedSequence != sequence) {
mObservedSequence = sequence;
mCbf(EVENT_NEW_IAUDIORECORD, mUserData, NULL);
}

// if inactive, then don't run me again until re-started
if (!active) {
return NS_INACTIVE;
}

// Compute the estimated time until the next timed event (position, markers)
uint32_t minFrames = ~0;
if (!markerReached && position < markerPosition) {
minFrames = markerPosition - position;
}
if (updatePeriod > 0 && updatePeriod < minFrames) {
minFrames = updatePeriod;
}

// If > 0, poll periodically to recover from a stuck server. A good value is 2.
static const uint32_t kPoll = 0;
if (kPoll > 0 && mTransfer == TRANSFER_CALLBACK && kPoll * notificationFrames < minFrames) {
minFrames = kPoll * notificationFrames;
}

// Convert frame units to time units
nsecs_t ns = NS_WHENEVER;
if (minFrames != (uint32_t) ~0) {
// This "fudge factor" avoids soaking CPU, and compensates for late progress by server
static const nsecs_t kFudgeNs = 10000000LL; // 10 ms
ns = ((minFrames * 1000000000LL) / mSampleRate) + kFudgeNs;
}

// If not supplying data by EVENT_MORE_DATA, then we're done
if (mTransfer != TRANSFER_CALLBACK) {
return ns;
}

struct timespec timeout;
const struct timespec *requested = &ClientProxy::kForever;
if (ns != NS_WHENEVER) {
timeout.tv_sec = ns / 1000000000LL;
timeout.tv_nsec = ns % 1000000000LL;
ALOGV("timeout %ld.%03d", timeout.tv_sec, (int) timeout.tv_nsec / 1000000);
requested = &timeout;
}

while (mRemainingFrames > 0) {
Buffer audioBuffer;
audioBuffer.frameCount = mRemainingFrames;
size_t nonContig;
status_t err = obtainBuffer(&audioBuffer, requested, NULL, &nonContig);
LOG_ALWAYS_FATAL_IF((err != NO_ERROR) != (audioBuffer.frameCount == 0),
"obtainBuffer() err=%d frameCount=%zu", err, audioBuffer.frameCount);
requested = &ClientProxy::kNonBlocking;
size_t avail = audioBuffer.frameCount + nonContig;
ALOGV("obtainBuffer(%u) returned %zu = %zu + %zu err %d",
mRemainingFrames, avail, audioBuffer.frameCount, nonContig, err);
if (err != NO_ERROR) {
if (err == TIMED_OUT || err == WOULD_BLOCK || err == -EINTR) {
break;
}
ALOGE("Error %d obtaining an audio buffer, giving up.", err);
return NS_NEVER;
}

if (mRetryOnPartialBuffer) {
mRetryOnPartialBuffer = false;
if (avail < mRemainingFrames) {
int64_t myns = ((mRemainingFrames - avail) *
1100000000LL) / mSampleRate;
if (ns < 0 || myns < ns) {
ns = myns;
}
return ns;
}
}

size_t reqSize = audioBuffer.size;
mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
size_t readSize = audioBuffer.size;

// Sanity check on returned size
if (ssize_t(readSize) < 0 || readSize > reqSize) {
ALOGE("EVENT_MORE_DATA requested %zu bytes but callback returned %zd bytes",
reqSize, ssize_t(readSize));
return NS_NEVER;
}

if (readSize == 0) {
// The callback is done consuming buffers
// Keep this thread going to handle timed events and
// still try to provide more data in intervals of WAIT_PERIOD_MS
// but don't just loop and block the CPU, so wait
return WAIT_PERIOD_MS * 1000000LL;
}

size_t releasedFrames = readSize / mFrameSize;
audioBuffer.frameCount = releasedFrames;
mRemainingFrames -= releasedFrames;
if (misalignment >= releasedFrames) {
misalignment -= releasedFrames;
} else {
misalignment = 0;
}

releaseBuffer(&audioBuffer);

// FIXME here is where we would repeat EVENT_MORE_DATA again on same advanced buffer
// if callback doesn't like to accept the full chunk
if (readSize < reqSize) {
continue;
}

// There could be enough non-contiguous frames available to satisfy the remaining request
if (mRemainingFrames <= nonContig) {
continue;
}

#if 0
// This heuristic tries to collapse a series of EVENT_MORE_DATA that would total to a
// sum <= notificationFrames. It replaces that series by at most two EVENT_MORE_DATA
// that total to a sum == notificationFrames.
if (0 < misalignment && misalignment <= mRemainingFrames) {
mRemainingFrames = misalignment;
return (mRemainingFrames * 1100000000LL) / mSampleRate;
}
#endif
}
mRemainingFrames = notificationFrames;
mRetryOnPartialBuffer = true;

// A lot has transpired since ns was calculated, so run again immediately and re-calculate
return 0;
}

The main work in this function:

1. As for the mAwaitBoost variable: a quick search shows it can only be true when the audio input flag mFlags is AUDIO_INPUT_FLAG_FAST; from the earlier analysis, mFlags here is AUDIO_INPUT_FLAG_NONE;

2. Use mCblk->mFlags to determine whether the buffer data has already overrun, storing the result in flags, then check whether the track has been invalidated;

{A quick aside: mActive was set to true right before the AudioRecordThread was resumed. Looking back at the end of AudioRecord::set(), a whole batch of variables was initialized there, and most of them are used here: mInOverrun is false, mMarkerPosition is 0, mMarkerReached is false, mNewPosition is 0, mUpdatePeriod is 0, mSequence is 1, and mNotificationFramesAct, obtained earlier via audioFlinger->openRecord, is 1024.}

3. Check whether the buffer data has overrun; if so, mark both mInOverrun and newOverrun as true, and later an EVENT_OVERRUN event is delivered upward through mCbf — which is in fact the recorderCallback() function in the JNI layer;

4. If the current position has passed the marker position, set mMarkerReached and markerReached to true; later an EVENT_MARKER event is delivered upward through mCbf;

5. Decide whether mRemainingFrames needs updating, and whether EVENT_NEW_POS and EVENT_NEW_IAUDIORECORD events need to be sent;

6. mTransfer was already determined in AudioRecord::set(): it is TRANSFER_SYNC, so in the end we simply return NS_WHENEVER;

In other words, the AudioRecordThread uses mCblk->mFlags to track the state of the buffer data and reports it upward via EVENT_OVERRUN / EVENT_MARKER / EVENT_NEW_POS / EVENT_NEW_IAUDIORECORD and so on.

Good — AudioRecordThread is done. Earlier we saw AudioFlinger::ThreadBase::sendConfigEvent_l() in Threads.cpp call mWaitWorkCV.signal(), so let's now continue with the RecordThread itself.

frameworks\av\services\audioflinger\Threads.cpp

bool AudioFlinger::RecordThread::threadLoop()
{
nsecs_t lastWarning = 0;

inputStandBy();

reacquire_wakelock:
sp<RecordTrack> activeTrack;
int activeTracksGen;
{
Mutex::Autolock _l(mLock);
size_t size = mActiveTracks.size();
activeTracksGen = mActiveTracksGen;
if (size > 0) {
// FIXME an arbitrary choice
activeTrack = mActiveTracks[0];
acquireWakeLock_l(activeTrack->uid());
if (size > 1) {
SortedVector<int> tmp;
for (size_t i = 0; i < size; i++) {
tmp.add(mActiveTracks[i]->uid());
}
updateWakeLockUids_l(tmp);
}
} else {
acquireWakeLock_l(-1);
}
}

// used to request a deferred sleep, to be executed later while mutex is unlocked
uint32_t sleepUs = 0;

// loop while there is work to do
for (;;) {
Vector< sp<EffectChain> > effectChains;

// sleep with mutex unlocked
if (sleepUs > 0) {
usleep(sleepUs);
sleepUs = 0;
}

// activeTracks accumulates a copy of a subset of mActiveTracks
Vector< sp<RecordTrack> > activeTracks;

// reference to the (first and only) active fast track
sp<RecordTrack> fastTrack;

// reference to a fast track which is about to be removed
sp<RecordTrack> fastTrackToRemove;

{ // scope for mLock
Mutex::Autolock _l(mLock);

processConfigEvents_l();

// check exitPending here because checkForNewParameters_l() and
// checkForNewParameters_l() can temporarily release mLock
if (exitPending()) {
break;
}

// if no active track(s), then standby and release wakelock
size_t size = mActiveTracks.size();
if (size == 0) {
standbyIfNotAlreadyInStandby();
// exitPending() can't become true here
releaseWakeLock_l();
ALOGV("RecordThread: loop stopping");
// go to sleep
mWaitWorkCV.wait(mLock);
ALOGV("RecordThread: loop starting");
goto reacquire_wakelock;
}

if (mActiveTracksGen != activeTracksGen) {
activeTracksGen = mActiveTracksGen;
SortedVector<int> tmp;
for (size_t i = 0; i < size; i++) {
tmp.add(mActiveTracks[i]->uid());
}
updateWakeLockUids_l(tmp);
}

bool doBroadcast = false;
for (size_t i = 0; i < size; ) {
activeTrack = mActiveTracks[i];
if (activeTrack->isTerminated()) {
if (activeTrack->isFastTrack()) {
ALOG_ASSERT(fastTrackToRemove == 0);
fastTrackToRemove = activeTrack;
}
removeTrack_l(activeTrack);
mActiveTracks.remove(activeTrack);
mActiveTracksGen++;
size--;
continue;
}

TrackBase::track_state activeTrackState = activeTrack->mState;
switch (activeTrackState) {

case TrackBase::PAUSING:
mActiveTracks.remove(activeTrack);
mActiveTracksGen++;
doBroadcast = true;
size--;
continue;

case TrackBase::STARTING_1:
sleepUs = 10000;
i++;
continue;

case TrackBase::STARTING_2:
doBroadcast = true;
mStandby = false;
activeTrack->mState = TrackBase::ACTIVE;
break;

case TrackBase::ACTIVE:
break;

case TrackBase::IDLE:
i++;
continue;

default:
LOG_ALWAYS_FATAL("Unexpected activeTrackState %d", activeTrackState);
}

activeTracks.add(activeTrack);
i++;

if (activeTrack->isFastTrack()) {
ALOG_ASSERT(!mFastTrackAvail);
ALOG_ASSERT(fastTrack == 0);
fastTrack = activeTrack;
}
}
if (doBroadcast) {
mStartStopCond.broadcast();
}

// sleep if there are no active tracks to process
if (activeTracks.size() == 0) {
if (sleepUs == 0) {
sleepUs = kRecordThreadSleepUs;
}
continue;
}
sleepUs = 0;

lockEffectChains_l(effectChains);
}

// thread mutex is now unlocked, mActiveTracks unknown, activeTracks.size() > 0

size_t size = effectChains.size();
for (size_t i = 0; i < size; i++) {
// thread mutex is not locked, but effect chain is locked
effectChains[i]->process_l();
}

// Push a new fast capture state if fast capture is not already running, or cblk change
if (mFastCapture != 0) {
FastCaptureStateQueue *sq = mFastCapture->sq();
FastCaptureState *state = sq->begin();
bool didModify = false;
FastCaptureStateQueue::block_t block = FastCaptureStateQueue::BLOCK_UNTIL_PUSHED;
if (state->mCommand != FastCaptureState::READ_WRITE /* FIXME &&
(kUseFastMixer != FastMixer_Dynamic || state->mTrackMask > 1)*/) {
if (state->mCommand == FastCaptureState::COLD_IDLE) {
int32_t old = android_atomic_inc(&mFastCaptureFutex);
if (old == -1) {
(void) syscall(__NR_futex, &mFastCaptureFutex, FUTEX_WAKE_PRIVATE, 1);
}
}
state->mCommand = FastCaptureState::READ_WRITE;
#if 0 // FIXME
mFastCaptureDumpState.increaseSamplingN(mAudioFlinger->isLowRamDevice() ?
FastCaptureDumpState::kSamplingNforLowRamDevice : FastMixerDumpState::kSamplingN);
#endif
didModify = true;
}
audio_track_cblk_t *cblkOld = state->mCblk;
audio_track_cblk_t *cblkNew = fastTrack != 0 ? fastTrack->cblk() : NULL;
if (cblkNew != cblkOld) {
state->mCblk = cblkNew;
// block until acked if removing a fast track
if (cblkOld != NULL) {
block = FastCaptureStateQueue::BLOCK_UNTIL_ACKED;
}
didModify = true;
}
sq->end(didModify);
if (didModify) {
sq->push(block);
#if 0
if (kUseFastCapture == FastCapture_Dynamic) {
mNormalSource = mPipeSource;
}
#endif
}
} // now run the fast track destructor with thread mutex unlocked
fastTrackToRemove.clear(); // Read from HAL to keep up with fastest client if multiple active tracks, not slowest one.
// Only the client(s) that are too slow will overrun. But if even the fastest client is too
// slow, then this RecordThread will overrun by not calling HAL read often enough.
// If destination is non-contiguous, first read past the nominal end of buffer, then
// copy to the right place. Permitted because mRsmpInBuffer was over-allocated.

int32_t rear = mRsmpInRear & (mRsmpInFramesP2 - 1);
ssize_t framesRead;

// If an NBAIO source is present, use it to read the normal capture's data
if (mPipeSource != 0) {
size_t framesToRead = mBufferSize / mFrameSize;
framesRead = mPipeSource->read(&mRsmpInBuffer[rear * mChannelCount],
framesToRead, AudioBufferProvider::kInvalidPTS);
if (framesRead == 0) {
// since pipe is non-blocking, simulate blocking input
sleepUs = (framesToRead * 1000000LL) / mSampleRate;
}
// otherwise use the HAL / AudioStreamIn directly
} else {
ssize_t bytesRead = mInput->stream->read(mInput->stream,
&mRsmpInBuffer[rear * mChannelCount], mBufferSize);
if (bytesRead < 0) {
framesRead = bytesRead;
} else {
framesRead = bytesRead / mFrameSize;
}
}

if (framesRead < 0 || (framesRead == 0 && mPipeSource == 0)) {
ALOGE("read failed: framesRead=%d", framesRead);
// Force input into standby so that it tries to recover at next read attempt
inputStandBy();
sleepUs = kRecordThreadSleepUs;
}
if (framesRead <= 0) {
goto unlock;
}
ALOG_ASSERT(framesRead > 0);

if (mTeeSink != 0) {
(void) mTeeSink->write(&mRsmpInBuffer[rear * mChannelCount], framesRead);
}
// If destination is non-contiguous, we now correct for reading past end of buffer.
{
size_t part1 = mRsmpInFramesP2 - rear;
if ((size_t) framesRead > part1) {
memcpy(mRsmpInBuffer, &mRsmpInBuffer[mRsmpInFramesP2 * mChannelCount],
(framesRead - part1) * mFrameSize);
}
}
rear = mRsmpInRear += framesRead;

size = activeTracks.size();
// loop over each active track
for (size_t i = 0; i < size; i++) {
activeTrack = activeTracks[i];

// skip fast tracks, as those are handled directly by FastCapture
if (activeTrack->isFastTrack()) {
continue;
}

enum {
OVERRUN_UNKNOWN,
OVERRUN_TRUE,
OVERRUN_FALSE
} overrun = OVERRUN_UNKNOWN;

// loop over getNextBuffer to handle circular sink
for (;;) {
activeTrack->mSink.frameCount = ~0;
status_t status = activeTrack->getNextBuffer(&activeTrack->mSink);
size_t framesOut = activeTrack->mSink.frameCount;
LOG_ALWAYS_FATAL_IF((status == OK) != (framesOut > 0));

int32_t front = activeTrack->mRsmpInFront;
ssize_t filled = rear - front;
size_t framesIn;

if (filled < 0) {
// should not happen, but treat like a massive overrun and re-sync
framesIn = 0;
activeTrack->mRsmpInFront = rear;
overrun = OVERRUN_TRUE;
} else if ((size_t) filled <= mRsmpInFrames) {
framesIn = (size_t) filled;
} else {
// client is not keeping up with server, but give it latest data
framesIn = mRsmpInFrames;
activeTrack->mRsmpInFront = front = rear - framesIn;
overrun = OVERRUN_TRUE;
}

if (framesOut == 0 || framesIn == 0) {
break;
}

if (activeTrack->mResampler == NULL) {
// no resampling
if (framesIn > framesOut) {
framesIn = framesOut;
} else {
framesOut = framesIn;
}
int8_t *dst = activeTrack->mSink.i8;
while (framesIn > 0) {
front &= mRsmpInFramesP2 - 1;
size_t part1 = mRsmpInFramesP2 - front;
if (part1 > framesIn) {
part1 = framesIn;
}
int8_t *src = (int8_t *)mRsmpInBuffer + (front * mFrameSize);
if (mChannelCount == activeTrack->mChannelCount) {
memcpy(dst, src, part1 * mFrameSize);
} else if (mChannelCount == 1) {
upmix_to_stereo_i16_from_mono_i16((int16_t *)dst, (const int16_t *)src,
part1);
} else {
downmix_to_mono_i16_from_stereo_i16((int16_t *)dst, (const int16_t *)src,
part1);
}
dst += part1 * activeTrack->mFrameSize;
front += part1;
framesIn -= part1;
}
activeTrack->mRsmpInFront += framesOut;
} else {
// resampling
// FIXME framesInNeeded should really be part of resampler API, and should
// depend on the SRC ratio
// to keep mRsmpInBuffer full so resampler always has sufficient input
size_t framesInNeeded;
// FIXME only re-calculate when it changes, and optimize for common ratios
// Do not precompute in/out because floating point is not associative
// e.g. a*b/c != a*(b/c).
const double in(mSampleRate);
const double out(activeTrack->mSampleRate);
framesInNeeded = ceil(framesOut * in / out) + 1;
ALOGV("need %u frames in to produce %u out given in/out ratio of %.4g",
framesInNeeded, framesOut, in / out);
// Although we theoretically have framesIn in circular buffer, some of those are
// unreleased frames, and thus must be discounted for purpose of budgeting.
size_t unreleased = activeTrack->mRsmpInUnrel;
framesIn = framesIn > unreleased ? framesIn - unreleased : 0;
if (framesIn < framesInNeeded) {
ALOGV("not enough to resample: have %u frames in but need %u in to "
"produce %u out given in/out ratio of %.4g",
framesIn, framesInNeeded, framesOut, in / out);
size_t newFramesOut = framesIn > 0 ? floor((framesIn - 1) * out / in) : 0;
LOG_ALWAYS_FATAL_IF(newFramesOut >= framesOut);
if (newFramesOut == 0) {
break;
}
framesInNeeded = ceil(newFramesOut * in / out) + 1;
ALOGV("now need %u frames in to produce %u out given out/in ratio of %.4g",
framesInNeeded, newFramesOut, out / in);
LOG_ALWAYS_FATAL_IF(framesIn < framesInNeeded);
ALOGV("success 2: have %u frames in and need %u in to produce %u out "
"given in/out ratio of %.4g",
framesIn, framesInNeeded, newFramesOut, in / out);
framesOut = newFramesOut;
} else {
ALOGV("success 1: have %u in and need %u in to produce %u out "
"given in/out ratio of %.4g",
framesIn, framesInNeeded, framesOut, in / out);
}

// reallocate mRsmpOutBuffer as needed; we will grow but never shrink
if (activeTrack->mRsmpOutFrameCount < framesOut) {
// FIXME why does each track need it's own mRsmpOutBuffer? can't they share?
delete[] activeTrack->mRsmpOutBuffer;
// resampler always outputs stereo
activeTrack->mRsmpOutBuffer = new int32_t[framesOut * FCC_2];
activeTrack->mRsmpOutFrameCount = framesOut;
}

// resampler accumulates, but we only have one source track
memset(activeTrack->mRsmpOutBuffer, 0, framesOut * FCC_2 * sizeof(int32_t));
activeTrack->mResampler->resample(activeTrack->mRsmpOutBuffer, framesOut,
// FIXME how about having activeTrack implement this interface itself?
activeTrack->mResamplerBufferProvider
/*this*/ /* AudioBufferProvider* */);
// ditherAndClamp() works as long as all buffers returned by
// activeTrack->getNextBuffer() are 32 bit aligned which should be always true.
if (activeTrack->mChannelCount == 1) {
// temporarily type pun mRsmpOutBuffer from Q4.27 to int16_t
ditherAndClamp(activeTrack->mRsmpOutBuffer, activeTrack->mRsmpOutBuffer,
framesOut);
// the resampler always outputs stereo samples:
// do post stereo to mono conversion
downmix_to_mono_i16_from_stereo_i16(activeTrack->mSink.i16,
(const int16_t *)activeTrack->mRsmpOutBuffer, framesOut);
} else {
ditherAndClamp((int32_t *)activeTrack->mSink.raw,
activeTrack->mRsmpOutBuffer, framesOut);
}
// now done with mRsmpOutBuffer
}

if (framesOut > 0 && (overrun == OVERRUN_UNKNOWN)) {
overrun = OVERRUN_FALSE;
}

if (activeTrack->mFramesToDrop == 0) {
if (framesOut > 0) {
activeTrack->mSink.frameCount = framesOut;
activeTrack->releaseBuffer(&activeTrack->mSink);
}
} else {
// FIXME could do a partial drop of framesOut
if (activeTrack->mFramesToDrop > 0) {
activeTrack->mFramesToDrop -= framesOut;
if (activeTrack->mFramesToDrop <= 0) {
activeTrack->clearSyncStartEvent();
}
} else {
activeTrack->mFramesToDrop += framesOut;
if (activeTrack->mFramesToDrop >= 0 || activeTrack->mSyncStartEvent == 0 ||
activeTrack->mSyncStartEvent->isCancelled()) {
ALOGW("Synced record %s, session %d, trigger session %d",
(activeTrack->mFramesToDrop >= 0) ? "timed out" : "cancelled",
activeTrack->sessionId(),
(activeTrack->mSyncStartEvent != 0) ?
activeTrack->mSyncStartEvent->triggerSession() : 0);
activeTrack->clearSyncStartEvent();
}
}
}

if (framesOut == 0) {
break;
}
}
ALOGE("pngcui - end for(;;)"); switch (overrun) {
case OVERRUN_TRUE:
// client isn't retrieving buffers fast enough
if (!activeTrack->setOverflow()) {
nsecs_t now = systemTime();
// FIXME should lastWarning per track?
if ((now - lastWarning) > kWarningThrottleNs) {
ALOGW("RecordThread: buffer overflow");
lastWarning = now;
}
}
break;
case OVERRUN_FALSE:
activeTrack->clearOverflow();
break;
case OVERRUN_UNKNOWN:
break;
}
}

unlock:
// enable changes in effect chain
unlockEffectChains(effectChains);
// effectChains doesn't need to be cleared, since it is cleared by destructor at scope end
}

standbyIfNotAlreadyInStandby();

{
Mutex::Autolock _l(mLock);
for (size_t i = 0; i < mTracks.size(); i++) {
sp<RecordTrack> track = mTracks[i];
track->invalidate();
}
mActiveTracks.clear();
mActiveTracksGen++;
mStartStopCond.broadcast();
}

releaseWakeLock();

ALOGV("RecordThread %p exiting", this);
return false;
}

The main work done in this thread is as follows.

This thread was actually created back in AudioRecord::set(); it has simply been blocked in mWaitWorkCV.wait(mLock), waiting for mWaitWorkCV to be signalled, after which it jumps back to the reacquire_wakelock label.
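To make this park/resume handshake concrete, here is a minimal sketch of the same pattern using standard C++ primitives (std::condition_variable in place of Android's Mutex/Condition classes; the RecorderLoop name and everything else here is hypothetical, not the real framework code):

#include <condition_variable>
#include <mutex>

// Minimal sketch of the RecordThread / AudioRecordThread wait-resume
// pattern, using standard C++ primitives instead of Android's own.
class RecorderLoop {
public:
    void threadLoop() {
        std::unique_lock<std::mutex> lock(mMutex);
        while (!mActive) {
            mWaitWorkCV.wait(lock);   // equivalent of mWaitWorkCV.wait(mLock)
        }
        // ... reacquire the wakelock and start pulling PCM data ...
    }

    void resume() {                   // what AudioRecord::start() triggers
        std::lock_guard<std::mutex> lock(mMutex);
        mActive = true;
        mWaitWorkCV.notify_one();     // wakes threadLoop() out of wait()
    }

private:
    std::mutex mMutex;
    std::condition_variable mWaitWorkCV;
    bool mActive = false;
};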

1. Note that when the thread starts up it calls inputStandBy(), which eventually calls mInput->stream->common.standby; but in->standby is initially 1, so no real work is done in the HAL at that point;

2. Call processConfigEvents_l() to check whether mConfigEvents holds any events, and if so, dispatch them;

3. Get the number of entries in mActiveTracks. Recall that when AudioFlinger::RecordThread::start() was called in Threads.cpp, the newly created RecordTrack was added to mActiveTracks, so the activeTrack here is the RecordTrack object created earlier;

4. Since mActiveTracks is now known to be non-empty, the for loop no longer falls into mWaitWorkCV.wait(); the real work can begin;

5. Fetch from mActiveTracks the activeTrack that was added in RecordThread::start();

6. Remember the open question we left in RecordThread::start()? Here is the answer: when the track was added to mActiveTracks, mState was STARTING_1, so this switch is bound to hit STARTING_1, set sleepUs to 10ms and continue; back at the top of the for loop, the thread usleep()s! As analyzed earlier, mState is only updated to STARTING_2 once RecordThread::start() has run to completion, so RecordThread::threadLoop is really waiting for RecordThread::start() to finish, sleeping until then;

7. Mark doBroadcast as true, mStandby as false, and mState as ACTIVE;

8. Copy the activeTrack object into the activeTracks collection, then call mStartStopCond.broadcast(); keep this broadcast in mind, it will matter in later code;

9. Check whether any audio effects are attached; if so, run process_l() on each effect chain;

10. Compute the current write position rear in the capture ring buffer. Here mRsmpInFrames is mFrameCount * 7, i.e. 1024*7, and mRsmpInFramesP2 is roundup(mRsmpInFrames), i.e. 1024*8 (the ring-buffer arithmetic of steps 10, 13 and 16 is illustrated in the sketch after this list);

11. In the FastCapture case, PipeSource->read() is used to fetch the data; otherwise the HAL interface mInput->stream->read() is called directly and the data is stored into mRsmpInBuffer. The latter path is taken here, reading 2048 bytes at a time;

12. If the read fails, the HAL standby state is forced back to 1, the thread sleeps for a while, and the read is retried; the specifics show up in the HAL's read function;

13. Compute the remaining contiguous space part1 in the ring buffer. If it cannot hold all the data just read, the excess is copied back to the head of the circular buffer, correcting for the read past the nominal end of the buffer;

14. Advance rear and mRsmpInRear by framesRead units, where framesRead is bytesRead / mFrameSize and mFrameSize comes from audio_stream_in_frame_size() (in AudioFlinger::RecordThread::readInputParameters_l()). For 16-bit stereo, for instance, mFrameSize is 4 bytes, so a 2048-byte read advances the position by 512 frames;

15. Call activeTrack->getNextBuffer() to obtain the next buffer;

16. Work out how much data is already in the buffer: filled. If the buffer is full, mark OVERRUN_TRUE; if framesOut or framesIn is 0 (framesOut is the number of available frames in the sink buffer, so it is certainly 0 if the buffer has overrun), stop and break;

17. Decide whether resampling is needed; here it is not:

1. Without resampling, the data is simply memcpy'd from mRsmpInBuffer into activeTrack->mSink.i8, i.e. the buffer obtained through getNextBuffer();

2. With resampling, activeTrack->mResampler->resample() is called first, and ditherAndClamp() then copies the data into activeTrack->mSink;

18. Check the current overrun state. If an OVERRUN occurred, setOverflow() sets mOverflow to true, meaning the application side is not reading data fast enough; the latest PCM data is still supplied, so if you ever hear skipping when playing back a recording, this is a good place to start investigating.
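As promised above, here is a compact, standalone sketch of the ring-buffer arithmetic behind steps 10, 13 and 16. The buffer sizes follow the values quoted in step 10; everything else (variable names, the 512-frame read) is hypothetical:

#include <cstdint>
#include <cstdio>

// Ring-buffer arithmetic used by RecordThread, in isolation. Capacity is
// rounded up to a power of two so that "counter & (p2 - 1)" replaces a
// modulo, and rear/front are free-running 32-bit counters so that
// "rear - front" still gives the fill level after they wrap.
int main() {
    const size_t frames = 1024 * 7;        // mRsmpInFrames
    size_t p2 = 1;
    while (p2 < frames) p2 <<= 1;          // roundup() -> 1024 * 8

    int32_t rearCounter  = 0;              // mRsmpInRear (server writes)
    int32_t frontCounter = 0;              // mRsmpInFront (one client reads)

    // step 10: physical write index inside the buffer
    size_t rear = rearCounter & (p2 - 1);

    // step 13: contiguous room up to the nominal end; a read may spill
    // past it because the buffer is over-allocated, and the spill is then
    // copied back to the head:
    size_t framesRead = 512;
    size_t part1 = p2 - rear;
    if (framesRead > part1) {
        // memcpy(buf, buf + p2 * channels, (framesRead - part1) * frameSize);
    }
    rearCounter += (int32_t)framesRead;

    // step 16: fill level as seen by one client track
    int32_t filled = rearCounter - frontCounter;
    if (filled < 0 || (size_t)filled > frames) {
        // overrun: the client is too slow, so re-sync it to the newest data
        frontCounter = rearCounter - (int32_t)frames;
    }

    printf("p2=%zu rear=%zu filled=%d\n", p2, rear, filled);
    return 0;
}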

Let's now dig into step 11, mInput->stream->read, and step 15, activeTrack->getNextBuffer.

First, step 15: activeTrack->getNextBuffer obtains the next buffer. As we saw earlier, AudioBuffer is managed as a circular buffer, and this thread keeps reading data from the driver without caring whether the application layer has fetched what was read. That raises a question: how does the data RecordThread reads end up in the buffer, and how does it later interact with the read() call in AudioRecord.java?

frameworks\av\services\audioflinger\Tracks.cpp

status_t AudioFlinger::RecordThread::RecordTrack::getNextBuffer(AudioBufferProvider::Buffer* buffer,
int64_t pts __unused)
{
ServerProxy::Buffer buf;
buf.mFrameCount = buffer->frameCount;
status_t status = mServerProxy->obtainBuffer(&buf);
buffer->frameCount = buf.mFrameCount;
buffer->raw = buf.mRaw;
if (buf.mFrameCount == 0) {
// FIXME also wake futex so that overrun is noticed more quickly
(void) android_atomic_or(CBLK_OVERRUN, &mCblk->mFlags);
}
return status;
}

It then calls mServerProxy->obtainBuffer() to obtain buf. The raw pointer refers to the shared memory backing the ring buffer; if the buffer is already full, mCblk->mFlags is set to CBLK_OVERRUN.

frameworks\av\media\libmedia\AudioTrackShared.cpp

status_t ServerProxy::obtainBuffer(Buffer* buffer, bool ackFlush)
{
LOG_ALWAYS_FATAL_IF(buffer == NULL || buffer->mFrameCount == 0);
if (mIsShutdown) {
goto no_init;
}
{
audio_track_cblk_t* cblk = mCblk;
// compute number of frames available to write (AudioTrack) or read (AudioRecord),
// or use previous cached value from framesReady(), with added barrier if it omits.
int32_t front;
int32_t rear;
// See notes on barriers at ClientProxy::obtainBuffer()
if (mIsOut) {
int32_t flush = cblk->u.mStreaming.mFlush;
rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
front = cblk->u.mStreaming.mFront;
if (flush != mFlush) {
// effectively obtain then release whatever is in the buffer
size_t mask = (mFrameCountP2 << 1) - 1;
int32_t newFront = (front & ~mask) | (flush & mask);
ssize_t filled = rear - newFront;
// Rather than shutting down on a corrupt flush, just treat it as a full flush
if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
ALOGE("mFlush %#x -> %#x, front %#x, rear %#x, mask %#x, newFront %#x, filled %d=%#x",
mFlush, flush, front, rear, mask, newFront, filled, filled);
newFront = rear;
}
mFlush = flush;
android_atomic_release_store(newFront, &cblk->u.mStreaming.mFront);
// There is no danger from a false positive, so err on the side of caution
if (true /*front != newFront*/) {
int32_t old = android_atomic_or(CBLK_FUTEX_WAKE, &cblk->mFutex);
if (!(old & CBLK_FUTEX_WAKE)) {
(void) syscall(__NR_futex, &cblk->mFutex,
mClientInServer ? FUTEX_WAKE_PRIVATE : FUTEX_WAKE, 1);
}
}
front = newFront;
}
} else {
front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
rear = cblk->u.mStreaming.mRear;
}
ssize_t filled = rear - front;
// pipe should not already be overfull
if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
ALOGE("Shared memory control block is corrupt (filled=%zd); shutting down", filled);
mIsShutdown = true;
}
if (mIsShutdown) {
goto no_init;
}
// don't allow filling pipe beyond the nominal size
size_t availToServer;
if (mIsOut) {
availToServer = filled;
mAvailToClient = mFrameCount - filled;
} else {
availToServer = mFrameCount - filled;
mAvailToClient = filled;
}
// 'availToServer' may be non-contiguous, so return only the first contiguous chunk
size_t part1;
if (mIsOut) {
front &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - front;
} else {
rear &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - rear;
}
if (part1 > availToServer) {
part1 = availToServer;
}
size_t ask = buffer->mFrameCount;
if (part1 > ask) {
part1 = ask;
}
// is assignment redundant in some cases?
buffer->mFrameCount = part1;
buffer->mRaw = part1 > 0 ?
&((char *) mBuffers)[(mIsOut ? front : rear) * mFrameSize] : NULL;
buffer->mNonContig = availToServer - part1;
// After flush(), allow releaseBuffer() on a previously obtained buffer;
// see "Acknowledge any pending flush()" in audioflinger/Tracks.cpp.
if (!ackFlush) {
mUnreleased = part1;
}
return part1 > 0 ? NO_ERROR : WOULD_BLOCK;
}
no_init:
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
mUnreleased = 0;
return NO_INIT;
}

The main work in this function:

1. Fetch mCblk. Recall that mCblk was set up in AudioRecord::openRecord_l(): sp<IMemory> iMem obtains the shared memory through audioFlinger->openRecord(), then mCblk = iMem->pointer(); so the memory really comes from AudioFlinger::openRecord() via iMem = cblk = recordTrack->getCblk();

frameworks\av\services\audioflinger\TrackBase.h

sp<IMemory> getCblk() const { return mCblkMemory; }
audio_track_cblk_t* cblk() const { return mCblk; }
sp<IMemory> getBuffers() const { return mBufferMemory; }

2. Read front and rear from mCblk and compute filled;

3. Compute the number of available frames in the buffer and store the size of the first contiguous chunk back into buffer->mFrameCount;

4. Compute the address of the next chunk of buffer and store it in mRaw;

With that, the data read out by the thread can be written into the corresponding shared memory, i.e. the ring buffer.
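A small standalone sketch of the availability math on the server (record) side, i.e. the mIsOut == false branch above; the concrete index values are made up for illustration:

#include <cstdint>
#include <cstdio>

// Server-side (record) availability math from ServerProxy::obtainBuffer.
// front/rear values are example numbers, not real indices.
int main() {
    const size_t frameCount   = 1024;  // mFrameCount (nominal capacity)
    const size_t frameCountP2 = 1024;  // mFrameCountP2 (power-of-2 size)

    int32_t front = 300;               // client read index
    int32_t rear  = 900;               // server write index

    int32_t filled = rear - front;              // frames already captured
    size_t availToServer = frameCount - filled; // room left for the server
    // mAvailToClient would be 'filled': frames the app may still read

    // only the first contiguous chunk can be handed out
    size_t r = rear & (frameCountP2 - 1);
    size_t part1 = frameCountP2 - r;
    if (part1 > availToServer)
        part1 = availToServer;

    printf("filled=%d availToServer=%zu part1=%zu\n",
           filled, availToServer, part1);
    return 0;
}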

Next, a brief look at step 11 (which also covers the retry behavior of step 12): the HAL-layer read function

hardware\aw\audio\tulip\audio_hw.c

static ssize_t in_read(struct audio_stream_in *stream, void* buffer,
size_t bytes)
{
int ret = 0;
struct sunxi_stream_in *in = (struct sunxi_stream_in *)stream;
struct sunxi_audio_device *adev = in->dev;
size_t frames_rq = bytes / audio_stream_frame_size(&stream->common);

if (adev->mode == AUDIO_MODE_IN_CALL) {
memset(buffer, 0, bytes);
}

/* acquiring hw device mutex systematically is useful if a low priority thread is waiting
 * on the input stream mutex - e.g. executing select_mode() while holding the hw device
 * mutex
 */
if (adev->af_capture_flag && adev->PcmManager.BufExist) {
pthread_mutex_lock(&adev->lock);
pthread_mutex_lock(&in->lock);
if (in->standby) {
in->standby = 0;
}
pthread_mutex_unlock(&adev->lock);

if (ret < 0)
goto exit;

ret = ReadPcmData(buffer, bytes, &adev->PcmManager);
if (ret > 0)
ret = 0;
if (ret == 0 && adev->mic_mute)
memset(buffer, 0, bytes);

pthread_mutex_unlock(&in->lock);
return bytes;
}

pthread_mutex_lock(&adev->lock);
pthread_mutex_lock(&in->lock);
if (in->standby) {
ret = start_input_stream(in);
if (ret == 0)
in->standby = 0;
}
pthread_mutex_unlock(&adev->lock);

if (ret < 0)
goto exit;

if (in->num_preprocessors != 0) {
ret = read_frames(in, buffer, frames_rq);
} else if (in->resampler != NULL) {
ret = read_frames(in, buffer, frames_rq);
} else {
ret = pcm_read(in->pcm, buffer, bytes);
}

if (ret > 0)
ret = 0;
if (ret == 0 && adev->mic_mute)
memset(buffer, 0, bytes);

exit:
if (ret < 0)
usleep(bytes * 1000000 / audio_stream_frame_size(&stream->common) /
in_get_sample_rate(&stream->common));

pthread_mutex_unlock(&in->lock);
return bytes;
}

This lands directly in the HAL's in_read function. Its implementation differs from one SoC vendor to another; a brief walkthrough of this one:

1. When the input stream's standby flag is true, i.e. on the first read, start_input_stream() is called to open the mic device node;

2. If the stream_in has preprocessors or a resampler, read_frames() is used to fetch the data; otherwise pcm_read() is called directly. Either way the data ultimately comes from pcm_read(), which is provided by the tinyalsa framework (library sources under external\tinyalsa\); the tinyalsa utilities are also what we normally use when testing audio (see the capture sketch after this list);

3. Feed the data that was read into buffer, and finally rest for a moment;
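For reference, a minimal standalone tinyalsa capture loop; the card/device numbers and config values are examples, not the actual settings of the tulip HAL:

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <tinyalsa/asoundlib.h>

// Minimal tinyalsa capture sketch; card/device/config values are examples.
int main() {
    struct pcm_config config;
    memset(&config, 0, sizeof(config));
    config.channels     = 2;
    config.rate         = 44100;
    config.period_size  = 1024;
    config.period_count = 4;
    config.format       = PCM_FORMAT_S16_LE;

    struct pcm *pcm = pcm_open(0 /* card */, 0 /* device */, PCM_IN, &config);
    if (pcm == NULL || !pcm_is_ready(pcm)) {
        fprintf(stderr, "cannot open pcm_in: %s\n", pcm_get_error(pcm));
        return 1;
    }

    unsigned int bytes = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
    char *buffer = (char *)malloc(bytes);

    // read a few buffers, the same way RecordThread keeps calling stream->read
    for (int i = 0; i < 10; i++) {
        if (pcm_read(pcm, buffer, bytes) != 0) {
            fprintf(stderr, "pcm_read failed\n");
            break;
        }
        // ... consume the PCM samples in buffer ...
    }

    free(buffer);
    pcm_close(pcm);
    return 0;
}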

Now let's continue with the start_input_stream function

static int start_input_stream(struct sunxi_stream_in *in)
{
int ret = 0;
int in_ajust_rate = 0;
struct sunxi_audio_device *adev = in->dev;

adev->active_input = in;
F_LOG;
adev->in_device = in->device;
select_device(adev);

if (in->need_echo_reference && in->echo_reference == NULL)
in->echo_reference = get_echo_reference(adev,
AUDIO_FORMAT_PCM_16_BIT,
in->config.channels,
in->requested_rate);

in_ajust_rate = in->requested_rate;
ALOGD(">>>>>> in_ajust_rate is : %d", in_ajust_rate);

// out/in stream should be both 44.1K serial
switch (CASE_NAME) {
case 0:
case 1:
in_ajust_rate = SAMPLING_RATE_44K;
if ((adev->mode == AUDIO_MODE_IN_CALL) && (adev->out_device == AUDIO_DEVICE_OUT_BLUETOOTH_SCO)) {
in_ajust_rate = SAMPLING_RATE_8K;
}
if ((adev->mode == AUDIO_MODE_IN_COMMUNICATION) && (adev->out_device == AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET)) {
in_ajust_rate = SAMPLING_RATE_8K;
}
break;
case 2:
if (adev->mode == AUDIO_MODE_IN_CALL)
in_ajust_rate = in->requested_rate;
else
in_ajust_rate = SAMPLING_RATE_44K;
default:
break;
}

if (adev->mode == AUDIO_MODE_IN_CALL)
in->pcm = pcm_open(0, PORT_VIR_CODEC, PCM_IN, &in->config);
else
in->pcm = pcm_open(0, PORT_CODEC, PCM_IN, &in->config);

if (!pcm_is_ready(in->pcm)) {
ALOGE("cannot open pcm_in driver: %s", pcm_get_error(in->pcm));
pcm_close(in->pcm);
adev->active_input = NULL;
return -ENOMEM;
}

if (in->requested_rate != in->config.rate) {
in->buf_provider.get_next_buffer = get_next_buffer;
in->buf_provider.release_buffer = release_buffer;

ret = create_resampler(in->config.rate,
in->requested_rate,
in->config.channels,
RESAMPLER_QUALITY_DEFAULT,
&in->buf_provider,
&in->resampler);
if (ret != 0) {
ALOGE("create in resampler failed, %d -> %d", in->config.rate, in->requested_rate);
ret = -EINVAL;
goto err;
}
ALOGV("create in resampler OK, %d -> %d", in->config.rate, in->requested_rate);
} else {
ALOGV("do not use in resampler");
}

/* if no supported sample rate is available, use the resampler */
if (in->resampler) {
in->resampler->reset(in->resampler);
in->frames_in = 0;
}

PLOGV("audio_hw::read end!!!");
return 0;

err:
if (in->resampler) {
release_resampler(in->resampler);
}
return -1;
}

The main work of this function includes:

1. Calling select_device() to pick an input device;

2. Calling pcm_open() to open the input device node;

Let's look further at the select_device function

static void select_device(struct sunxi_audio_device *adev)
{
int ret = -1;
int output_device_id = 0;
int input_device_id = 0;
const char *output_route = NULL;
const char *input_route = NULL;
const char *phone_route = NULL;
int earpiece_on=0, headset_on=0, headphone_on=0, bt_on=0, speaker_on=0;
int main_mic_on = 0,sub_mic_on = 0;
int bton_temp = 0;

if (!adev->ar)
return;

audio_route_reset(adev->ar);
audio_route_update_mixer_old_value(adev->ar);

if (spk_dul_used)
audio_route_apply_path(adev->ar, "media-speaker-off");
else
audio_route_apply_path(adev->ar, "media-single-speaker-off");

if (adev->mode == AUDIO_MODE_IN_CALL) {
if(CASE_NAME <= 0){
ALOGV("%s,PHONE CASE ERR!!!!!!!!!!!!!!!!!!!! line: %d,CASE_NAME:%d", __FUNCTION__, __LINE__,CASE_NAME);
//return CASE_NAME;
}
headset_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADSET; // hp4p
headphone_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADPHONE; // hp3p
speaker_on = adev->out_device & AUDIO_DEVICE_OUT_SPEAKER;
earpiece_on = adev->out_device & AUDIO_DEVICE_OUT_EARPIECE;
bt_on = adev->out_device & AUDIO_DEVICE_OUT_ALL_SCO;
//audio_route_reset(adev->ar);
ALOGV("****LINE:%d,FUNC:%s, headset_on:%d, headphone_on:%d, speaker_on:%d, earpiece_on:%d, bt_on:%d",__LINE__,__FUNCTION__, headphone_on, headphone_on, speaker_on, earpiece_on, bt_on);
if (last_call_path_is_bt && !bt_on) {
end_bt_call(adev);
last_call_path_is_bt = 0;
}
if ((headset_on || headphone_on) && speaker_on){
output_device_id = OUT_DEVICE_SPEAKER_AND_HEADSET;
} else if (earpiece_on) {
F_LOG;
if (NO_EARPIECE)
{
F_LOG;
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
else {
F_LOG;
output_device_id = OUT_DEVICE_EARPIECE;
}
} else if (headset_on) {
output_device_id = OUT_DEVICE_HEADSET;
} else if (headphone_on){
output_device_id = OUT_DEVICE_HEADPHONES;
}else if(bt_on){
bton_temp = 1;
//bt_start_call(adev);
//last_call_path_is_bt = 1;
output_device_id = OUT_DEVICE_BT_SCO;
}else if(speaker_on){
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
ALOGV("****** output_id is : %d", output_device_id);
phone_route = phone_route_configs[CASE_NAME-1][output_device_id];
set_incall_device(adev);
}
if (adev->active_output) {
ALOGV("active_output, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
headset_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADSET; // hp4p
headphone_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADPHONE; // hp3p
speaker_on = adev->out_device & AUDIO_DEVICE_OUT_SPEAKER;
earpiece_on = adev->out_device & AUDIO_DEVICE_OUT_EARPIECE;
bt_on = adev->out_device & AUDIO_DEVICE_OUT_ALL_SCO;
//audio_route_reset(adev->ar);
ALOGV("****LINE:%d,FUNC:%s, headset_on:%d, headphone_on:%d, speaker_on:%d, earpiece_on:%d, bt_on:%d",__LINE__,__FUNCTION__, headset_on, headphone_on, speaker_on, earpiece_on, bt_on);
if ((headset_on || headphone_on) && speaker_on){
output_device_id = OUT_DEVICE_SPEAKER_AND_HEADSET;
} else if (earpiece_on) {
if (NO_EARPIECE){
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
else
output_device_id = OUT_DEVICE_EARPIECE;
//output_device_id = OUT_DEVICE_EARPIECE;
} else if (headset_on) {
output_device_id = OUT_DEVICE_HEADSET;
} else if (headphone_on){
output_device_id = OUT_DEVICE_HEADSET;
}else if(bt_on){
output_device_id = OUT_DEVICE_BT_SCO;
}else if(speaker_on){
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
ALOGV("****LINE:%d,FUNC:%s, output_device_id:%d",__LINE__,__FUNCTION__, output_device_id);
switch (adev->mode){
case AUDIO_MODE_NORMAL:
ALOGV("NORMAL mode, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
#if 0
if(sysopen_music())
output_device_id = OUT_DEVICE_HEADSET;
else
output_device_id = OUT_DEVICE_SPEAKER;
//output_device_id = OUT_DEVICE_HEADSET;
#endif
output_route = normal_route_configs[output_device_id];
break;
case AUDIO_MODE_RINGTONE:
ALOGV("RINGTONE mode, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
output_route = ringtone_route_configs[output_device_id];
break;
case AUDIO_MODE_FM:
break;
case AUDIO_MODE_MODE_FACTORY_TEST:
break;
case AUDIO_MODE_IN_CALL:
ALOGV("IN_CALL mode, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
output_route = phone_keytone_route_configs[CASE_NAME-1][output_device_id];
break;
case AUDIO_MODE_IN_COMMUNICATION:
output_route = normal_route_configs[output_device_id];
F_LOG;
if (output_device_id == OUT_DEVICE_BT_SCO && !last_communication_is_bt) {
F_LOG;
/* Open modem PCM channels */
if (adev->pcm_modem_dl == NULL) {
adev->pcm_modem_dl = pcm_open(0, 4, PCM_OUT, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_modem_dl)) {
ALOGE("cannot open PCM modem DL stream: %s", pcm_get_error(adev->pcm_modem_dl));
//goto err_open_dl;
}
}
if (adev->pcm_modem_ul == NULL) {
adev->pcm_modem_ul = pcm_open(0, 4, PCM_IN, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_modem_ul)) {
ALOGE("cannot open PCM modem UL stream: %s", pcm_get_error(adev->pcm_modem_ul));
//goto err_open_ul;
}
}
/* Open bt PCM channels */
if (adev->pcm_bt_dl == NULL) {
adev->pcm_bt_dl = pcm_open(0, PORT_bt, PCM_OUT, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_bt_dl)) {
ALOGE("cannot open PCM bt DL stream: %s", pcm_get_error(adev->pcm_bt_dl));
//goto err_open_bt_dl;
}
}
if (adev->pcm_bt_ul == NULL) {
adev->pcm_bt_ul = pcm_open(0, PORT_bt, PCM_IN, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_bt_ul)) {
ALOGE("cannot open PCM bt UL stream: %s", pcm_get_error(adev->pcm_bt_ul));
//goto err_open_bt_ul;
}
}
pcm_start(adev->pcm_modem_dl);
pcm_start(adev->pcm_modem_ul);
pcm_start(adev->pcm_bt_dl);
pcm_start(adev->pcm_bt_ul);
last_communication_is_bt = true;
}
break;
default:
break;
}
}
if (adev->active_input) {
if(adev->out_device & AUDIO_DEVICE_OUT_ALL_SCO){
adev->in_device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
}
int bt_on = adev->in_device & AUDIO_DEVICE_IN_ALL_SCO;
ALOGV("record,****LINE:%d,FUNC:%s, adev->in_device:%x,AUDIO_DEVICE_IN_ALL_SCO:%x",__LINE__,__FUNCTION__, adev->in_device,AUDIO_DEVICE_IN_ALL_SCO);
if (!bt_on) {
if ((adev->mode != AUDIO_MODE_IN_CALL) && (adev->active_input != 0)) {
/* sub mic is used for camcorder or VoIP on speaker phone */
sub_mic_on = (adev->active_input->source == AUDIO_SOURCE_CAMCORDER) ||
((adev->out_device & AUDIO_DEVICE_OUT_SPEAKER) &&
(adev->active_input->source == AUDIO_SOURCE_VOICE_COMMUNICATION));
}
if (!sub_mic_on) {
headset_on = adev->in_device & AUDIO_DEVICE_IN_WIRED_HEADSET;
main_mic_on = adev->in_device & AUDIO_DEVICE_IN_BUILTIN_MIC;
}
}
if (headset_on){
input_device_id = IN_SOURCE_HEADSETMIC;
} else if (main_mic_on) {
input_device_id = IN_SOURCE_MAINMIC;
}else if (bt_on && (adev->mode == AUDIO_MODE_IN_COMMUNICATION || adev->mode == AUDIO_MODE_IN_CALL)) {
input_device_id = IN_SOURCE_BTMIC;
}else{
input_device_id = IN_SOURCE_MAINMIC;
}
ALOGV("fm record,****LINE:%d,FUNC:%s,bt_on:%d,headset_on:%d,main_mic_on;%d,adev->in_device:%x,AUDIO_DEVICE_IN_ALL_SCO:%x",__LINE__,__FUNCTION__,bt_on,headset_on,main_mic_on,adev->in_device,AUDIO_DEVICE_IN_ALL_SCO); if (adev->mode == AUDIO_MODE_IN_CALL) {
input_route = cap_phone_normal_route_configs[CASE_NAME-1][input_device_id];
ALOGV("phone record,****LINE:%d,FUNC:%s, adev->in_device:%x",__LINE__,__FUNCTION__, adev->in_device);
} else if (adev->mode == AUDIO_MODE_FM) {
//fm_record_enable(true);
//fm_record_route(adev->in_device);
ALOGV("fm record,****LINE:%d,FUNC:%s",__LINE__,__FUNCTION__);
} else if (adev->mode == AUDIO_MODE_NORMAL) {//1
if(dmic_used)
input_route = dmic_cap_normal_route_configs[input_device_id];
else
input_route = cap_normal_route_configs[input_device_id];
ALOGV("normal record,****LINE:%d,FUNC:%s,adev->in_device:%d",__LINE__,__FUNCTION__,adev->in_device);
} else if (adev->mode == AUDIO_MODE_IN_COMMUNICATION) {
if(dmic_used)
input_route = dmic_cap_normal_route_configs[input_device_id];
else
input_route = cap_normal_route_configs[input_device_id];
F_LOG;
}
}
if (phone_route)
audio_route_apply_path(adev->ar, phone_route);
if (output_route)
audio_route_apply_path(adev->ar, output_route);
if (input_route)
audio_route_apply_path(adev->ar, input_route);
audio_route_update_mixer(adev->ar);
if (adev->mode == AUDIO_MODE_IN_CALL ){
if(bton_temp && last_call_path_is_bt == 0){
bt_start_call(adev);
last_call_path_is_bt = 1;
}
}
}

So when we need to change the input device, this is the function where the routing can be adjusted through the corresponding parameters; the same approach can also serve as a reference when implementing other board designs.
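To make the routing mechanism concrete, here is a hedged sketch of how a HAL typically drives the audio_route library, the way select_device() does; the card number, XML path and route name are examples that must match entries in the platform's mixer_paths.xml:

#include <audio_route/audio_route.h>

// Hedged sketch of mixer routing via audio_route; the route name
// "media-main-mic" is an example and must exist in mixer_paths.xml.
static void route_builtin_mic() {
    struct audio_route *ar = audio_route_init(0 /* card */,
                                              "/system/etc/mixer_paths.xml");
    if (ar == NULL)
        return;

    audio_route_reset(ar);                        // back to default values
    audio_route_apply_path(ar, "media-main-mic"); // stage the route's controls
    audio_route_update_mixer(ar);                 // push changes to the kernel

    audio_route_free(ar);
}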

That covers how the PCM data is actually obtained. To sum up the RecordThread's role: this is the thread that really fetches the PCM data, updates the data in the ring buffer, checks whether we are currently in an overrun state, and so on.

Summary:

In startRecording, the recording route is established, the recording thread is set running, and recording data is read from the driver into the AudioBuffer ring buffer. At this point the recording device node has been opened and reads are under way.

The author's skills are limited; if there are mistakes or shortcomings in this article, corrections from you experts would be greatly appreciated!
