Last time we walked through the startPreview flow, mainly to sketch a rough top-to-bottom line through the code. Today we look at sendCommand and dataCallback in the Camera framework. These are the glue between layers, and they show how the upper and lower layers talk to each other.

First, sendCommand.

There is no sendCommand method in Camera.java; the sendCommand function appears in Camera.cpp, so its use starts from the JNI layer, android_hardware_Camera.cpp.

android_hardware_Camera.cpp (frameworks\base\core\jni)

Java native method ——> JNI function ——> command constant:

startSmoothZoom ——> android_hardware_Camera_startSmoothZoom ——> CAMERA_CMD_START_SMOOTH_ZOOM

stopSmoothZoom ——> android_hardware_Camera_stopSmoothZoom ——> CAMERA_CMD_STOP_SMOOTH_ZOOM

setDisplayOrientation ——> android_hardware_Camera_setDisplayOrientation ——> CAMERA_CMD_SET_DISPLAY_ORIENTATION

_enableShutterSound ——> android_hardware_Camera_enableShutterSound ——> CAMERA_CMD_ENABLE_SHUTTER_SOUND

_startFaceDetection ——> android_hardware_Camera_startFaceDetection ——> CAMERA_CMD_START_FACE_DETECTION

_stopFaceDetection ——> android_hardware_Camera_stopFaceDetection ——> CAMERA_CMD_STOP_FACE_DETECTION

enableFocusMoveCallback ——> android_hardware_Camera_enableFocusMoveCallback ——> CAMERA_CMD_ENABLE_FOCUS_MOVE_MSG

Command types like these are defined in camera.h (system\core\include\system):

enum {
    CAMERA_CMD_START_SMOOTH_ZOOM = 1,
    CAMERA_CMD_STOP_SMOOTH_ZOOM = 2,
    CAMERA_CMD_SET_DISPLAY_ORIENTATION = 3,
    CAMERA_CMD_ENABLE_SHUTTER_SOUND = 4,
    CAMERA_CMD_PLAY_RECORDING_SOUND = 5,
    CAMERA_CMD_START_FACE_DETECTION = 6,
    CAMERA_CMD_STOP_FACE_DETECTION = 7,
    CAMERA_CMD_ENABLE_FOCUS_MOVE_MSG = 8,
    CAMERA_CMD_PING = 9,
    CAMERA_CMD_SET_VIDEO_BUFFER_COUNT = 10,
};

All of the operations above are implemented through sendCommand(), and the corresponding command types are listed above. The HAL inspects the command type and performs the matching operation.
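As a reference point for where these commands originate, here is a minimal app-level sketch. The method name applyCommands is mine, and it assumes a Camera instance that has already been opened and is previewing; each public API used here funnels into one of the sendCommand paths listed above.

import android.hardware.Camera;

class CommandExamples {
    // Sketch only: app-level calls that end up in sendCommand().
    static void applyCommands(Camera camera) {
        camera.setDisplayOrientation(90);                 // -> CAMERA_CMD_SET_DISPLAY_ORIENTATION

        Camera.Parameters params = camera.getParameters();
        if (params.isSmoothZoomSupported()) {
            camera.setZoomChangeListener(new Camera.OnZoomChangeListener() {
                @Override
                public void onZoomChange(int zoomValue, boolean stopped, Camera cam) {
                    // zoom progress notifications (CAMERA_MSG_ZOOM) arrive here
                }
            });
            camera.startSmoothZoom(params.getMaxZoom());  // -> CAMERA_CMD_START_SMOOTH_ZOOM
        }
    }
}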

Let's look at the implementation of sendCommand:

Camera.cpp (frameworks\av\camera)

// send command to camera driver
status_t Camera::sendCommand(int32_t cmd, int32_t arg1, int32_t arg2)
{
    ALOGV("sendCommand");
    sp <ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    // Three parameters: the first is the command type; arg1 and arg2 are used by some
    // commands and ignored by others. This leaves room for extension: if more data is
    // needed, both sides of the interface can be changed together.
    return c->sendCommand(cmd, arg1, arg2);
}

Then, through the Binder mechanism:

    virtual status_t sendCommand(int32_t cmd, int32_t arg1, int32_t arg2)
    {
        ALOGV("sendCommand");
        Parcel data, reply;
        data.writeInterfaceToken(ICamera::getInterfaceDescriptor());
        data.writeInt32(cmd);
        data.writeInt32(arg1);
        data.writeInt32(arg2);
        remote()->transact(SEND_COMMAND, data, &reply);
        return reply.readInt32();
    }

    status_t BnCamera::onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        ......
        case SEND_COMMAND: {
            ALOGV("SEND_COMMAND");
            CHECK_INTERFACE(ICamera, data, reply);
            int command = data.readInt32();
            int arg1 = data.readInt32();
            int arg2 = data.readInt32();
            reply->writeInt32(sendCommand(command, arg1, arg2));
            return NO_ERROR;
        } break;
    }

This then calls into CameraClient::sendCommand on the CameraService side:

status_t CameraClient::sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) {
    ......
    if (cmd == CAMERA_CMD_SET_DISPLAY_ORIENTATION) {
        // Mirror the preview if the camera is front-facing.
        orientation = getOrientation(arg1, mCameraFacing == CAMERA_FACING_FRONT);
        if (orientation == -1) return BAD_VALUE;
        if (mOrientation != orientation) {
            mOrientation = orientation;
            if (mPreviewWindow != 0) {
                native_window_set_buffers_transform(mPreviewWindow.get(),
                        mOrientation);
            }
        }
        return OK;
    } else if (cmd == CAMERA_CMD_ENABLE_SHUTTER_SOUND) {
        // This is where arg1 is actually used: it switches the shutter sound on or off.
        switch (arg1) {
            case 0:
                return enableShutterSound(false);
            case 1:
                return enableShutterSound(true);
            default:
                return BAD_VALUE;
        }
        return OK;
    } else if (cmd == CAMERA_CMD_PLAY_RECORDING_SOUND) { // recording sound
        mCameraService->playSound(CameraService::SOUND_RECORDING);
    } else if (cmd == CAMERA_CMD_SET_VIDEO_BUFFER_COUNT) {
        // Silently ignore this command
        return INVALID_OPERATION;
    } else if (cmd == CAMERA_CMD_PING) {
        // If mHardware is 0, checkPidAndHardware will return error.
        return OK;
    } // the commands above are handled inside the framework
    return mHardware->sendCommand(cmd, arg1, arg2); // everything else goes to the HAL
}
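For example, the CAMERA_CMD_ENABLE_SHUTTER_SOUND branch above is what backs the public Camera.enableShutterSound() API. A hedged sketch of the app side (the method name muteShutterIfPossible and the hard-coded camera id 0 are mine for illustration); enableShutterSound(false) travels down with arg1 set as handled in the switch above:

import android.hardware.Camera;

class ShutterSoundExample {
    // Sketch only: disable the shutter sound when the device allows it.
    static void muteShutterIfPossible(Camera camera) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(0, info);            // id 0 for illustration; use the opened id
        if (info.canDisableShutterSound) {
            // returns false if the sound could not be disabled (e.g. enforced by policy)
            camera.enableShutterSound(false);
        }
    }
}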

In the HAL, the command types handled here are mainly starting and stopping face detection:

QCamera2HWI.cpp (\device\asus\flo\camera\qcamera2\hal)

int QCamera2HardwareInterface::sendCommand(int32_t command, int32_t /*arg1*/, int32_t /*arg2*/)
{
    int rc = NO_ERROR;
    switch (command) {
    case CAMERA_CMD_START_FACE_DETECTION:
    case CAMERA_CMD_STOP_FACE_DETECTION:
        // switch face detection on or off
        rc = setFaceDetection(command == CAMERA_CMD_START_FACE_DETECTION? true : false);
        break;
    default:
        rc = NO_ERROR;
        break;
    }
    return rc;
}
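On the app side, those two HAL commands are driven by Camera.startFaceDetection() and stopFaceDetection(); the detected faces come back later through the CAMERA_MSG_PREVIEW_METADATA path described below. A minimal sketch, assuming the preview has already been started (the method name enableFaceDetection is mine):

import android.hardware.Camera;

class FaceDetectionExample {
    // Sketch: turning HAL face detection on from the app layer.
    // startFaceDetection() maps to CAMERA_CMD_START_FACE_DETECTION; results arrive
    // via Camera.FaceDetectionListener (CAMERA_MSG_PREVIEW_METADATA).
    static void enableFaceDetection(Camera camera) {
        if (camera.getParameters().getMaxNumDetectedFaces() > 0) {
            camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
                @Override
                public void onFaceDetection(Camera.Face[] faces, Camera cam) {
                    // faces[i].rect is in the (-1000,-1000)..(1000,1000) preview coordinate space
                }
            });
            camera.startFaceDetection(); // must be called after startPreview()
        }
    }
}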

To sum up, that is the sendCommand path.

Next, the callbacks.

As the earlier articles showed, there are three main types of callback:

notifyCallback

dataCallback

dataCallbackTimestamp

For this flow we cannot follow the code top-down as before; this time we go the other way, tracing the callbacks from the HAL layer up into

CameraHardwareInterface.h (frameworks\av\services\camera\libcameraservice\device1)

    static void __notify_cb(int32_t msg_type, int32_t ext1,
                            int32_t ext2, void *user)
    {
        ALOGV("%s", __FUNCTION__);
        CameraHardwareInterface *__this =
                static_cast<CameraHardwareInterface *>(user);
        __this->mNotifyCb(msg_type, ext1, ext2, __this->mCbUser);
    }

    static void __data_cb(int32_t msg_type,
                          const camera_memory_t *data, unsigned int index,
                          camera_frame_metadata_t *metadata,
                          void *user)
    {
        ALOGV("%s", __FUNCTION__);
        CameraHardwareInterface *__this =
                static_cast<CameraHardwareInterface *>(user);
        ......
        __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
    }

    static void __data_cb_timestamp(nsecs_t timestamp, int32_t msg_type,
                                    const camera_memory_t *data, unsigned index,
                                    void *user)
    {
        ALOGV("%s", __FUNCTION__);
        CameraHardwareInterface *__this =
                static_cast<CameraHardwareInterface *>(user);
        ......
        __this->mDataCbTimestamp(timestamp, msg_type, mem->mBuffers[index], __this->mCbUser);
    }

The mNotifyCb, mDataCb and mDataCbTimestamp members are set from CameraClient::initialize(), which calls setCallbacks():

    void setCallbacks(notify_callback notify_cb,
                      data_callback data_cb,
                      data_callback_timestamp data_cb_timestamp,
                      void* user)
    {
        mNotifyCb = notify_cb;
        mDataCb = data_cb;
        mDataCbTimestamp = data_cb_timestamp;
        mCbUser = user;
        ALOGV("%s(%s)", __FUNCTION__, mName.string());
        if (mDevice->ops->set_callbacks) {
            mDevice->ops->set_callbacks(mDevice,
                                        __notify_cb,
                                        __data_cb,
                                        __data_cb_timestamp,
                                        __get_memory,
                                        this);
        }
    }

So for the callbacks we naturally look in CameraClient:

void CameraClient::notifyCallback(int32_t msgType, int32_t ext1,
        int32_t ext2, void* user) {
    ......
}

void CameraClient::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    LOG2("dataCallback(%d)", msgType);

    Mutex* lock = getClientLockFromCookie(user);
    if (lock == NULL) return;
    Mutex::Autolock alock(*lock);

    CameraClient* client =
            static_cast<CameraClient*>(getClientFromCookie(user));
    if (client == NULL) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0 && metadata == NULL) {
        ALOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}

void CameraClient::dataCallbackTimestamp(nsecs_t timestamp,
        int32_t msgType, const sp<IMemory>& dataPtr, void* user) {
    ......
}

Here we mainly look at the dataCallback path:

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:      // preview frame data
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:     // postview image
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:          // raw image data
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:   // the compressed (JPEG) picture
            client->handleCompressedPicture(dataPtr);
            break;
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
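Each of these message types corresponds to a callback the app registers: a takePicture() call can produce CAMERA_MSG_RAW_IMAGE, CAMERA_MSG_POSTVIEW_FRAME and CAMERA_MSG_COMPRESSED_IMAGE, while setPreviewCallback() drives CAMERA_MSG_PREVIEW_FRAME. A minimal sketch of the app side (the method name capture is mine; whether a postview or raw callback actually fires depends on the HAL):

import android.hardware.Camera;

class CaptureExample {
    // Sketch: the app-level callbacks that the message types above are delivered to.
    static void capture(Camera camera) {
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // CAMERA_MSG_PREVIEW_FRAME
            }
        });

        camera.takePicture(
            null,                              // ShutterCallback -> notifyCallback(CAMERA_MSG_SHUTTER)
            new Camera.PictureCallback() {     // raw -> CAMERA_MSG_RAW_IMAGE
                @Override
                public void onPictureTaken(byte[] data, Camera cam) { /* raw data, may be null */ }
            },
            new Camera.PictureCallback() {     // postview -> CAMERA_MSG_POSTVIEW_FRAME
                @Override
                public void onPictureTaken(byte[] data, Camera cam) { /* postview frame */ }
            },
            new Camera.PictureCallback() {     // jpeg -> CAMERA_MSG_COMPRESSED_IMAGE
                @Override
                public void onPictureTaken(byte[] data, Camera cam) { /* JPEG bytes */ }
            });
    }
}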

In the end, each of these handle* functions calls c->dataCallback() on the client interface with the corresponding message type, and the data goes back to the application process over Binder:

    // generic data callback from camera service to app with image data
    void dataCallback(int32_t msgType, const sp<IMemory>& imageData,
                      camera_frame_metadata_t *metadata)
    {
        ALOGV("dataCallback");
        Parcel data, reply;
        data.writeInterfaceToken(ICameraClient::getInterfaceDescriptor());
        data.writeInt32(msgType);
        data.writeStrongBinder(imageData->asBinder());
        if (metadata) {
            data.writeInt32(metadata->number_of_faces);
            data.write(metadata->faces, sizeof(camera_face_t) * metadata->number_of_faces);
        }
        remote()->transact(DATA_CALLBACK, data, &reply, IBinder::FLAG_ONEWAY);
    }

    status_t BnCameraClient::onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        switch(code) {
            ......
            case DATA_CALLBACK: {
                ALOGV("DATA_CALLBACK");
                CHECK_INTERFACE(ICameraClient, data, reply);
                int32_t msgType = data.readInt32();
                sp<IMemory> imageData = interface_cast<IMemory>(data.readStrongBinder());
                camera_frame_metadata_t *metadata = NULL;
                if (data.dataAvail() > 0) {
                    metadata = new camera_frame_metadata_t;
                    metadata->number_of_faces = data.readInt32();
                    metadata->faces = (camera_face_t *) data.readInplace(
                            sizeof(camera_face_t) * metadata->number_of_faces);
                }
                dataCallback(msgType, imageData, metadata);
                if (metadata) delete metadata;
                return NO_ERROR;
            } break;
            ......
    }

This brings us over to Camera.cpp:

// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}

So the data is handed up to the next layer through a listener. Which raises the question: when was this listener set?

Recall the native_setup step covered in the first framework post; it contains this snippet:

    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
context->incStrong((void*)android_hardware_Camera_native_setup);
camera->setListener(context);

This is where the listener is set. JNICameraContext inherits from CameraListener and overrides the parent class's methods:

class JNICameraContext: public CameraListener {
    ......
};

class CameraListener: virtual public RefBase
{
public:
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata) = 0;
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr) = 0;
};

Following on from the snippet above, here we only look at postData().

android_hardware_Camera.cpp (frameworks\base\core\jni)

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    ......
    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata); // face data is carried here
    }
}

The calls involved here are:

env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);

copyAndPost(env, dataPtr, dataMsgType);

postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);

All of them use fields.post_event, directly or indirectly. It is not a function but a JNI static method ID, looked up when the JNI methods are registered:

fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
        "(Ljava/lang/Object;IIILjava/lang/Object;)V");

This lookup is made in register_android_hardware_Camera(); we will not dwell on it here. What post_event actually ends up invoking is

Camera.java (frameworks\base\core\java\android\hardware)

    private static void postEventFromNative(Object camera_ref,
                                            int what, int arg1, int arg2, Object obj)
    {
        Camera c = (Camera)((WeakReference)camera_ref).get();
        if (c == null)
            return;

        if (c.mEventHandler != null) {
            Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
            c.mEventHandler.sendMessage(m);
        }
    }

Here, again, the very common Java Handler/Message mechanism is used:

    private class EventHandler extends Handler
    {
        private final Camera mCamera;

        public EventHandler(Camera c, Looper looper) {
            super(looper);
            mCamera = c;
        }

        @Override
        public void handleMessage(Message msg) {
            switch(msg.what) {
            case CAMERA_MSG_SHUTTER:
                if (mShutterCallback != null) {
                    mShutterCallback.onShutter();
                }
                return;

            case CAMERA_MSG_RAW_IMAGE:
                if (mRawImageCallback != null) {
                    mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
                }
                return;

            case CAMERA_MSG_COMPRESSED_IMAGE:
                if (mJpegCallback != null) {
                    mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
                }
                return;

            case CAMERA_MSG_PREVIEW_FRAME:
                PreviewCallback pCb = mPreviewCallback;
                if (pCb != null) {
                    if (mOneShot) {
                        // Clear the callback variable before the callback
                        // in case the app calls setPreviewCallback from
                        // the callback function
                        mPreviewCallback = null;
                    } else if (!mWithBuffer) {
                        // We're faking the camera preview mode to prevent
                        // the app from being flooded with preview frames.
                        // Set to oneshot mode again.
                        setHasPreviewCallback(true, false);
                    }
                    pCb.onPreviewFrame((byte[])msg.obj, mCamera);
                }
                return;

            case CAMERA_MSG_POSTVIEW_FRAME:
                if (mPostviewCallback != null) {
                    mPostviewCallback.onPictureTaken((byte[])msg.obj, mCamera);
                }
                return;

            case CAMERA_MSG_FOCUS:
                AutoFocusCallback cb = null;
                synchronized (mAutoFocusCallbackLock) {
                    cb = mAutoFocusCallback;
                }
                if (cb != null) {
                    boolean success = msg.arg1 == 0 ? false : true;
                    cb.onAutoFocus(success, mCamera);
                }
                return;

            case CAMERA_MSG_ZOOM:
                if (mZoomListener != null) {
                    mZoomListener.onZoomChange(msg.arg1, msg.arg2 != 0, mCamera);
                }
                return;

            case CAMERA_MSG_PREVIEW_METADATA:
                if (mFaceListener != null) {
                    mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
                }
                return;

            case CAMERA_MSG_ERROR :
                Log.e(TAG, "Error " + msg.arg1);
                if (mErrorCallback != null) {
                    mErrorCallback.onError(msg.arg1, mCamera);
                }
                return;

            case CAMERA_MSG_FOCUS_MOVE:
                if (mAutoFocusMoveCallback != null) {
                    mAutoFocusMoveCallback.onAutoFocusMoving(msg.arg1 == 0 ? false : true, mCamera);
                }
                return;

            default:
                Log.e(TAG, "Unknown message type " + msg.what);
                return;
            }
        }
    }
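The mOneShot and mWithBuffer branches in the CAMERA_MSG_PREVIEW_FRAME case correspond to the three ways an app can ask for preview frames. A hedged sketch of the differences (the method name requestPreviewFrames is mine; a real app would pick only one of the three modes):

import android.graphics.ImageFormat;
import android.hardware.Camera;

class PreviewModeExamples {
    // Sketch: the three app APIs behind the mOneShot / mWithBuffer flags above.
    static void requestPreviewFrames(final Camera camera) {
        // 1) one-shot: deliver a single frame, then the callback is cleared (mOneShot == true)
        camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) { /* one frame */ }
        });

        // 2) continuous without buffers: the framework re-arms one-shot mode internally
        //    after every frame (the setHasPreviewCallback(true, false) branch above)
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) { /* every frame */ }
        });

        // 3) continuous with app-supplied buffers (mWithBuffer == true): frames are only
        //    delivered into buffers the app queued with addCallbackBuffer()
        Camera.Size size = camera.getParameters().getPreviewSize();
        int bpp = ImageFormat.getBitsPerPixel(camera.getParameters().getPreviewFormat());
        byte[] buffer = new byte[size.width * size.height * bpp / 8];
        camera.addCallbackBuffer(buffer);
        camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                camera.addCallbackBuffer(data); // re-queue the buffer for the next frame
            }
        });
    }
}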

At this point we are essentially at the application layer: the callbacks are all registered by the camera app, and each kind of data is then handled there accordingly.
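One practical consequence of this Handler-based delivery is that EventHandler is bound to the Looper of the thread that called Camera.open() (falling back to the main looper), so an app that wants to keep heavy frame processing off the UI thread typically opens the camera from a HandlerThread. A sketch under that assumption (the class and method names are mine):

import android.hardware.Camera;
import android.os.Handler;
import android.os.HandlerThread;

class CameraThreadExample {
    // Sketch: open the camera from a HandlerThread so the EventHandler above binds to that
    // thread's Looper and the Camera callbacks are delivered off the UI thread.
    static void openCameraOffUiThread() {
        HandlerThread cameraThread = new HandlerThread("CameraBackground");
        cameraThread.start();
        new Handler(cameraThread.getLooper()).post(new Runnable() {
            @Override
            public void run() {
                Camera camera = Camera.open();   // Looper.myLooper() is cameraThread's looper here
                camera.setErrorCallback(new Camera.ErrorCallback() {
                    @Override
                    public void onError(int error, Camera cam) {
                        // CAMERA_MSG_ERROR is delivered here, on cameraThread
                    }
                });
                // setPreviewTexture / startPreview and the other setup would follow here
            }
        });
    }
}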

We will not analyze how each individual kind of data is processed here; if needed, that can be covered in detail later. The goal here is simply to make clear how the code flows.

The code in this article is taken from the stock Android 5.1 source. Comments and discussion are welcome.

Copyright notice: this is an original post by the author; please do not repost without permission.
