A First Look at Camera HAL 2.0

Android 4.2 made fairly large changes to the Camera HAL: the original CameraHardwareInterface is essentially deprecated and a new interface has been introduced alongside it. Both implementations are therefore supported, and the Camera Service layer automatically loads the framework HAL that matches the HAL version the vendor implements. There is still little documentation on this area and few vendors have implemented it; roughly speaking, Qualcomm and Samsung have implementations on their platforms.
The notes below are based on the Qualcomm platform and on reading the AOSP code. Please point out any mistakes; this article will be corrected as I learn more.

I watched an introductory video on this (as an aside, I could barely follow the presenter's English); you will have to find your own way to watch it. Since the video cannot be downloaded, I took some screenshots and put them on Flickr.
www.youtube.com/watch?v=Lald5txnoHw
Below is a short introduction to the video and to HAL 2.0.

The Linux Foundation
Android Builders Summit 2013
Camera 2.0: The New Camera Hardware Interface in Android 4.2
By Balwinder Kaur & Ashutosh Gupta
San Francisco, California

Android 4.2 was released with a new Camera Hardware Abstraction Layer (HAL), Camera 2.0. Camera 2.0 places a big emphasis on collecting and providing metadata associated with each frame. It also provides the ability to re-process streams. Although the SDK-level APIs do not yet expose anything new to the end user, the Camera HAL and the Camera Service architecture have been revamped into a different architecture. This presentation provides an insight into the new architecture as well as covering some of the challenges faced in building production-quality Camera HAL implementations.
The intended audience for this conference session is engineers who want to learn more about the Android Camera Internals. This talk should facilitate engineers wanting to integrate, improve or innovate using the Camera subsystem.

What follows is my own understanding.

How does HAL2 talk to the driver?

The HAL2 framework and the vendor implementation are bound together through /path/to/aosp/hardware/qcom/camera/QCamera/HAL2/wrapper/QualcommCamera.cpp, which initializes the struct named HMI; this part is the same as before.

static hw_module_methods_t camera_module_methods = {
    open: camera_device_open,
};

static hw_module_t camera_common  = {
    tag: HARDWARE_MODULE_TAG,
    module_api_version: CAMERA_MODULE_API_VERSION_2_0,
    hal_api_version: HARDWARE_HAL_API_VERSION,
    id: CAMERA_HARDWARE_MODULE_ID,
    name: "Qcamera",
    author:"Qcom",
    methods: &camera_module_methods,
    dso: NULL,
    reserved:  {0},
};

camera_module_t HAL_MODULE_INFO_SYM = {
    common: camera_common,
    get_number_of_cameras: get_number_of_cameras,
    get_camera_info: get_camera_info,
};

camera2_device_ops_t camera_ops = {
    set_request_queue_src_ops:           android::set_request_queue_src_ops,
    notify_request_queue_not_empty:      android::notify_request_queue_not_empty,
    set_frame_queue_dst_ops:             android::set_frame_queue_dst_ops,
    get_in_progress_count:               android::get_in_progress_count,
    flush_captures_in_progress:          android::flush_captures_in_progress,
    construct_default_request:           android::construct_default_request,

    allocate_stream:                     android::allocate_stream,
    register_stream_buffers:             android::register_stream_buffers,
    release_stream:                      android::release_stream,

    allocate_reprocess_stream:           android::allocate_reprocess_stream,
    allocate_reprocess_stream_from_stream: android::allocate_reprocess_stream_from_stream,
    release_reprocess_stream:            android::release_reprocess_stream,

    trigger_action:                      android::trigger_action,
    set_notify_callback:                 android::set_notify_callback,
    get_metadata_vendor_tag_ops:         android::get_metadata_vendor_tag_ops,
    dump:                                android::dump,
};

In QCameraHWI:

QCameraHardwareInterface::
QCameraHardwareInterface(int cameraId, int mode)
                  : mCameraId(cameraId)
{

    cam_ctrl_dimension_t mDimension;

    /* Open camera stack! */
    memset(&mMemHooks, 0, sizeof(mm_camear_mem_vtbl_t));
    mMemHooks.user_data=this;
    mMemHooks.get_buf=get_buffer_hook;
    mMemHooks.put_buf=put_buffer_hook;

    mCameraHandle=camera_open(mCameraId, &mMemHooks);
    ALOGV("Cam open returned %p",mCameraHandle);
    if(mCameraHandle == NULL) {
        ALOGE("startCamera: cam_ops_open failed: id = %d", mCameraId);
        return;
    }
    mCameraHandle->ops->sync(mCameraHandle->camera_handle);

    mChannelId=mCameraHandle->ops->ch_acquire(mCameraHandle->camera_handle);
    if(mChannelId<=0)
    {
        ALOGE("%s:Channel aquire failed",__func__);
        mCameraHandle->ops->camera_close(mCameraHandle->camera_handle);
        return;
    }

    /* Initialize # of frame requests the HAL is handling to zero*/
    mPendingRequests=0;
}

This calls camera_open() in mm_camera_interface,

which in turn calls mm_camera_open() in mm_camera.

At this point communication with the driver still goes through V4L2.
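
This is not the mm_camera code itself, just a minimal sketch of what "talking to the driver over V4L2" means in practice; the device node, resolution, and pixel format below are assumptions for illustration.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* open a video node and negotiate a capture format, roughly what a camera HAL
   has to do before it can stream frames */
static int v4l2_open_and_probe(const char *dev /* e.g. "/dev/video0", assumed */)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {   // ask the driver what it supports
        close(fd);
        return -1;
    }

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 640;                // assumed preview size
    fmt.fmt.pix.height      = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV21;  // assumed preview format
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {      // negotiate the capture format
        close(fd);
        return -1;
    }

    return fd;  // buffers would then be set up with VIDIOC_REQBUFS/VIDIOC_QBUF/VIDIOC_STREAMON
}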

But why does Qualcomm leave some operations in the wrapper unimplemented?

int trigger_action(const struct camera2_device *,
        uint32_t trigger_id,
        int32_t ext1,
        int32_t ext2)
{
    return INVALID_OPERATION;
}

For example, operations such as auto focus should in theory be triggered through the function above, yet it is only a stub. Are these simply not required, or is the real implementation hidden in some corner I have not found yet?

Which one is used at the moment, libmmcamera_interface or libmmcamera_interface2?
From the code it appears to be libmmcamera_interface.

Let's look at another example: what does start preview actually involve?
Camera2Client::startPreview(…)
Camera2Client::startPreviewL(…)
StreamingProcessor::updatePreviewStream(…)
Camera2Device::createStream(…)
Camera2Device::StreamAdapter::connectToDevice(…)
camera2_device_t->ops->allocate_stream(…)

This allocate_stream is implemented by the vendor; for Qualcomm's camera it lives in /path/to/aosp/hardware/qcom/camera/QCamera/HAL2/wrapper/QualcommCamera.cpp:

android::allocate_stream(…)
QCameraHardwareInterface::allocate_stream(…)
QCameraStream::createInstance(…)

QCameraStream_preview::createInstance(uint32_t CameraHandle,
                        uint32_t ChannelId,
                        uint32_t Width,
                        uint32_t Height,
                        int requestedFormat,
                        mm_camera_vtbl_t *mm_ops,
                        camera_mode_t mode)
{
  QCameraStream* pme = new QCameraStream_preview(CameraHandle,
                        ChannelId,
                        Width,
                        Height,
                        requestedFormat,
                        mm_ops,
                        mode);
  return pme;
}

Although this change is still fairly "new", it is the direction things are heading, so it does no harm to understand it!
P.S. Android 4.3 was released just today and here I am still looking at 4.2, heh.

How the FLAC encoder is integrated into Stagefright (OpenMAX IL)

Let's use an example to see how a codec communicates with OpenMAX IL. This really belongs in the post on how Stagefright interacts with codecs, but since it is fairly self-contained and fairly long, I am writing it up separately.

Generally speaking, no matter how complex a codec (encoder or decoder) is, it always provides an interface for us to feed in data and an interface for us to retrieve data; that is what it is for.
So integrating a codec through OpenMAX IL is not a huge amount of work for us. Let's take the FLAC encoder as the example because it is fairly simple. For information about FLAC itself see http://xiph.org/flac/; in Android its source lives in /path/to/aosp/external/flac/. Inside Android, OpenMAX IL talks to it through libstagefright_soft_flacenc.so, which is what we focus on today; its source lives in /path/to/aosp/frameworks/av/media/libstagefright/codecs/flac/enc/.

If you are not familiar with Stagefright and OpenMAX IL in Android, I recommend reading the post mentioned at the beginning first.

First, SoftFlacEncoder inherits from SimpleSoftOMXComponent and overrides four methods:

virtual OMX_ERRORTYPE initCheck() const; // an empty implementation in SoftOMXComponent; your codec implements it so callers can probe whether the codec initialized correctly
virtual OMX_ERRORTYPE internalGetParameter(
        OMX_INDEXTYPE index, OMX_PTR params); // the OpenMAX IL component's getParameter

virtual OMX_ERRORTYPE internalSetParameter(
        OMX_INDEXTYPE index, const OMX_PTR params); // the OpenMAX IL component's setParameter

virtual void onQueueFilled(OMX_U32 portIndex); // corresponds to the OpenMAX IL component's emptyThisBuffer and fillThisBuffer; if this is unclear and you are still reading, you must be treating this article as light prose ^_^

If these four methods are unclear, browse the OpenMAX IL/Stagefright code or the specification first.

It also has a few private methods:

void initPorts(); // OpenMAX IL communication requires ports

OMX_ERRORTYPE configureEncoder(); // FLAC itself needs some configuration, e.g. registering the callback below

// FLAC encoder callbacks
// maps to encoderEncodeFlac()
static FLAC__StreamEncoderWriteStatus flacEncoderWriteCallback(
        const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[],
        size_t bytes, unsigned samples, unsigned current_frame, void *client_data); // this is the callback actually handed down to the encoder

FLAC__StreamEncoderWriteStatus onEncodedFlacAvailable(
            const FLAC__byte buffer[],
            size_t bytes, unsigned samples, unsigned current_frame); // this one just bridges from C back into C++

The two methods above are callback functions, but really there is only one callback, as you can see here:

// static
FLAC__StreamEncoderWriteStatus SoftFlacEncoder::flacEncoderWriteCallback(
            const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[],
            size_t bytes, unsigned samples, unsigned current_frame, void *client_data) {
    return ((SoftFlacEncoder*) client_data)->onEncodedFlacAvailable(
            buffer, bytes, samples, current_frame);
}

Why is there a callback? As mentioned earlier, a codec always has an input and an output. Here FLAC gives us the function FLAC__stream_encoder_process_interleaved(…) to feed data in and a callback to deliver data out; that is all there is to it!

You can refer to the explanation in the FLAC header file:

/** Submit data for encoding.
 *  This version allows you to supply the input data where the channels
 *  are interleaved into a single array (i.e. channel0_sample0,
 *  channel1_sample0, ... , channelN_sample0, channel0_sample1, ...).
 *  The samples need not be block-aligned but they must be
 *  sample-aligned, i.e. the first value should be channel0_sample0
 *  and the last value channelN_sampleM.  Each sample should be a signed
 *  integer, right-justified to the resolution set by
 *  FLAC__stream_encoder_set_bits_per_sample().  For example, if the
 *  resolution is 16 bits per sample, the samples should all be in the
 *  range [-32768,32767].
 *
 *  For applications where channel order is important, channels must
 *  follow the order as described in the
 *  <A HREF="../format.html#frame_header">frame header</A>.
 *
 * \param  encoder  An initialized encoder instance in the OK state.
 * \param  buffer   An array of channel-interleaved data (see above).
 * \param  samples  The number of samples in one channel, the same as for
 *                  FLAC__stream_encoder_process().  For example, if
 *                  encoding two channels, \c 1000 \a samples corresponds
 *                  to a \a buffer of 2000 values.
 * \assert
 *    \code encoder != NULL \endcode
 *    \code FLAC__stream_encoder_get_state(encoder) == FLAC__STREAM_ENCODER_OK \endcode
 * \retval FLAC__bool
 *    \c true if successful, else \c false; in this case, check the
 *    encoder state with FLAC__stream_encoder_get_state() to see what
 *    went wrong.
 */
FLAC_API FLAC__bool FLAC__stream_encoder_process_interleaved(FLAC__StreamEncoder *encoder, const FLAC__int32 buffer[], unsigned samples);

This is where the callback gets registered; the one we register is the FLAC__StreamEncoderWriteCallback.

/** Initialize the encoder instance to encode native FLAC streams.
 *
 *  This flavor of initialization sets up the encoder to encode to a
 *  native FLAC stream. I/O is performed via callbacks to the client.
 *  For encoding to a plain file via filename or open \c FILE*,
 *  FLAC__stream_encoder_init_file() and FLAC__stream_encoder_init_FILE()
 *  provide a simpler interface.
 *
 *  This function should be called after FLAC__stream_encoder_new() and
 *  FLAC__stream_encoder_set_*() but before FLAC__stream_encoder_process()
 *  or FLAC__stream_encoder_process_interleaved().
 *  initialization succeeded.
 *
 *  The call to FLAC__stream_encoder_init_stream() currently will also
 *  immediately call the write callback several times, once with the \c fLaC
 *  signature, and once for each encoded metadata block.
 *
 * \param  encoder            An uninitialized encoder instance.
 * \param  write_callback     See FLAC__StreamEncoderWriteCallback.  This
 *                            pointer must not be \c NULL.
 * \param  seek_callback      See FLAC__StreamEncoderSeekCallback.  This
 *                            pointer may be \c NULL if seeking is not
 *                            supported.  The encoder uses seeking to go back
 *                            and write some some stream statistics to the
 *                            STREAMINFO block; this is recommended but not
 *                            necessary to create a valid FLAC stream.  If
 *                            \a seek_callback is not \c NULL then a
 *                            \a tell_callback must also be supplied.
 *                            Alternatively, a dummy seek callback that just
 *                            returns \c FLAC__STREAM_ENCODER_SEEK_STATUS_UNSUPPORTED
 *                            may also be supplied, all though this is slightly
 *                            less efficient for the encoder.
 * \param  tell_callback      See FLAC__StreamEncoderTellCallback.  This
 *                            pointer may be \c NULL if seeking is not
 *                            supported.  If \a seek_callback is \c NULL then
 *                            this argument will be ignored.  If
 *                            \a seek_callback is not \c NULL then a
 *                            \a tell_callback must also be supplied.
 *                            Alternatively, a dummy tell callback that just
 *                            returns \c FLAC__STREAM_ENCODER_TELL_STATUS_UNSUPPORTED
 *                            may also be supplied, all though this is slightly
 *                            less efficient for the encoder.
 * \param  metadata_callback  See FLAC__StreamEncoderMetadataCallback.  This
 *                            pointer may be \c NULL if the callback is not
 *                            desired.  If the client provides a seek callback,
 *                            this function is not necessary as the encoder
 *                            will automatically seek back and update the
 *                            STREAMINFO block.  It may also be \c NULL if the
 *                            client does not support seeking, since it will
 *                            have no way of going back to update the
 *                            STREAMINFO.  However the client can still supply
 *                            a callback if it would like to know the details
 *                            from the STREAMINFO.
 * \param  client_data        This value will be supplied to callbacks in their
 *                            \a client_data argument.
 * \assert
 *    \code encoder != NULL \endcode
 * \retval FLAC__StreamEncoderInitStatus
 *    \c FLAC__STREAM_ENCODER_INIT_STATUS_OK if initialization was successful;
 *    see FLAC__StreamEncoderInitStatus for the meanings of other return values.
 */
FLAC_API FLAC__StreamEncoderInitStatus FLAC__stream_encoder_init_stream(FLAC__StreamEncoder *encoder, FLAC__StreamEncoderWriteCallback write_callback, FLAC__StreamEncoderSeekCallback seek_callback, FLAC__StreamEncoderTellCallback tell_callback, FLAC__StreamEncoderMetadataCallback metadata_callback, void *client_data);
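
Putting the two calls together, here is a minimal standalone sketch, independent of SoftFlacEncoder, of how an encoder instance is configured, how the write callback is registered through FLAC__stream_encoder_init_stream(), and how data is pushed in through FLAC__stream_encoder_process_interleaved(). The channel count, sample rate, and the empty callback body are assumptions for illustration only.

#include <FLAC/stream_encoder.h>

// encoded output lands here; a real client (like SoftFlacEncoder) copies it into an output buffer
static FLAC__StreamEncoderWriteStatus write_cb(
        const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[],
        size_t bytes, unsigned samples, unsigned current_frame, void *client_data)
{
    (void)encoder; (void)buffer; (void)bytes;
    (void)samples; (void)current_frame; (void)client_data;
    return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
}

static int encode_one_block(const FLAC__int32 *pcm32, unsigned samplesPerChannel)
{
    FLAC__StreamEncoder *enc = FLAC__stream_encoder_new();
    FLAC__stream_encoder_set_channels(enc, 2);          // assumed stereo input
    FLAC__stream_encoder_set_bits_per_sample(enc, 16);  // matches the 16-bit samples discussed below
    FLAC__stream_encoder_set_sample_rate(enc, 44100);   // assumed sample rate

    // register the write callback; seek/tell/metadata callbacks are optional and left NULL
    if (FLAC__stream_encoder_init_stream(enc, write_cb, NULL, NULL, NULL, NULL)
            != FLAC__STREAM_ENCODER_INIT_STATUS_OK) {
        FLAC__stream_encoder_delete(enc);
        return -1;
    }

    // feed interleaved samples in; the encoded bytes come back through write_cb
    FLAC__bool ok = FLAC__stream_encoder_process_interleaved(enc, pcm32, samplesPerChannel);

    FLAC__stream_encoder_finish(enc);
    FLAC__stream_encoder_delete(enc);
    return ok ? 0 : -1;
}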

A brief explanation of what this code does.
initPorts() initializes two ports: 0 is the input and 1 is the output. One rule is worth remembering: whoever allocates the memory releases it. Keeping that in mind helps a lot in understanding a system with this many layers.

The most important method is the one below. The upper layer calls into it after it has filled a buffer, and it is also called when the upper layer wants to fetch a processed buffer.

void SoftFlacEncoder::onQueueFilled(OMX_U32 portIndex) {

    ALOGV("SoftFlacEncoder::onQueueFilled(portIndex=%ld)", portIndex);

    if (mSignalledError) { // an error has occurred
        return;
    }

    List<BufferInfo *> &inQueue = getPortQueue(0); // input port; I wonder whether more than one buffer can show up here at a time - in theory just one per pass
    List<BufferInfo *> &outQueue = getPortQueue(1); // output port

    while (!inQueue.empty() && !outQueue.empty()) {
        BufferInfo *inInfo = *inQueue.begin();
        OMX_BUFFERHEADERTYPE *inHeader = inInfo->mHeader;

        BufferInfo *outInfo = *outQueue.begin();
        OMX_BUFFERHEADERTYPE *outHeader = outInfo->mHeader;

        if (inHeader->nFlags & OMX_BUFFERFLAG_EOS) { // end of stream: clear the flags and report both notifyEmptyBufferDone and notifyFillBufferDone
            inQueue.erase(inQueue.begin()); // memory we allocated ourselves, so we release it ourselves
            inInfo->mOwnedByUs = false;
            notifyEmptyBufferDone(inHeader);

            outHeader->nFilledLen = 0;
            outHeader->nFlags = OMX_BUFFERFLAG_EOS;

            outQueue.erase(outQueue.begin()); // memory we allocated ourselves, so we release it ourselves
            outInfo->mOwnedByUs = false;
            notifyFillBufferDone(outHeader);

            return;
        }

        if (inHeader->nFilledLen > kMaxNumSamplesPerFrame * sizeof(FLAC__int32) * 2) {
            ALOGE("input buffer too large (%ld).", inHeader->nFilledLen);
            mSignalledError = true;
            notify(OMX_EventError, OMX_ErrorUndefined, 0, NULL);
            return;
        }

        assert(mNumChannels != 0);
        mEncoderWriteData = true;
        mEncoderReturnedEncodedData = false;
        mEncoderReturnedNbBytes = 0;
        mCurrentInputTimeStamp = inHeader->nTimeStamp;

        const unsigned nbInputFrames = inHeader->nFilledLen / (2 * mNumChannels);
        const unsigned nbInputSamples = inHeader->nFilledLen / 2;
        const OMX_S16 * const pcm16 = reinterpret_cast<OMX_S16 *>(inHeader->pBuffer); // pBuffer in OMX_BUFFERHEADERTYPE is 8-bit by default; here it is reinterpreted as 16-bit values, so the number of samples in a buffer is nFilledLen/2
        // why regroup as 16-bit? because the codec was configured with FLAC__stream_encoder_set_bits_per_sample(mFlacStreamEncoder, 16);

        for (unsigned i=0 ; i < nbInputSamples ; i++) {
            mInputBufferPcm32[i] = (FLAC__int32) pcm16[i]; // widen to 32-bit values because FLAC expects them
        }
        ALOGV(" about to encode %u samples per channel", nbInputFrames);
        FLAC__bool ok = FLAC__stream_encoder_process_interleaved(
                        mFlacStreamEncoder,
                        mInputBufferPcm32,
                        nbInputFrames /*samples per channel*/ ); // the encoding happens here; because of FLAC's alignment requirements the input buffer needs the two conversions marked above

        if (ok) {
            // the callback registered earlier appears to be blocking, i.e. by the time FLAC__stream_encoder_process_interleaved returns, the callback has returned as well
            if (mEncoderReturnedEncodedData && (mEncoderReturnedNbBytes != 0)) {
                ALOGV(" dequeueing buffer on output port after writing data");
                outInfo->mOwnedByUs = false;
                outQueue.erase(outQueue.begin());
                outInfo = NULL;
                notifyFillBufferDone(outHeader); // the encoded data returns to the upper layer from here
                outHeader = NULL;
                mEncoderReturnedEncodedData = false;
            } else {
                ALOGV(" encoder process_interleaved returned without data to write");
            }
        } else {
            ALOGE(" error encountered during encoding");
            mSignalledError = true;
            notify(OMX_EventError, OMX_ErrorUndefined, 0, NULL);
            return;
        }

        inInfo->mOwnedByUs = false;
        inQueue.erase(inQueue.begin());
        inInfo = NULL;
        notifyEmptyBufferDone(inHeader); // tell the upper layer that the FLAC codec has consumed the data in the input buffer; so the flow is: the upper layer delivers data, the codec encodes, the codec delivers data, and then the next round begins
        // I wonder whether any implementation lets the upper layer keep sending data as soon as the codec reports the previous input has been consumed, even if the codec has not finished encoding or returned the previous output yet; in some cases that could cut down the time both sides spend waiting on copies/preparation, but this is software encoding and uses CPU time anyway, so it might only help on multi-CPU systems, and it would be more complex
        inHeader = NULL;
    }
}

Next, let's look inside the callback.

FLAC__StreamEncoderWriteStatus SoftFlacEncoder::onEncodedFlacAvailable(
            const FLAC__byte buffer[],
            size_t bytes, unsigned samples, unsigned current_frame) {
    ALOGV("SoftFlacEncoder::onEncodedFlacAvailable(bytes=%d, samples=%d, curr_frame=%d)",
            bytes, samples, current_frame);

#ifdef WRITE_FLAC_HEADER_IN_FIRST_BUFFER
    if (samples == 0) {
        ALOGI(" saving %d bytes of header", bytes);
        memcpy(mHeader + mHeaderOffset, buffer, bytes);
        mHeaderOffset += bytes;// will contain header size when finished receiving header
        return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
    }

#endif

    if ((samples == 0) || !mEncoderWriteData) {
        // called by the encoder because there's header data to save, but it's not the role
        // of this component (unless WRITE_FLAC_HEADER_IN_FIRST_BUFFER is defined)
        ALOGV("ignoring %d bytes of header data (samples=%d)", bytes, samples);
        return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
    }

    List<BufferInfo *> &outQueue = getPortQueue(1);
    CHECK(!outQueue.empty());
    BufferInfo *outInfo = *outQueue.begin();
    OMX_BUFFERHEADERTYPE *outHeader = outInfo->mHeader;

#ifdef WRITE_FLAC_HEADER_IN_FIRST_BUFFER
    if (!mWroteHeader) {
        ALOGI(" writing %d bytes of header on output port", mHeaderOffset);
        memcpy(outHeader->pBuffer + outHeader->nOffset + outHeader->nFilledLen,
                mHeader, mHeaderOffset);
        outHeader->nFilledLen += mHeaderOffset;
        outHeader->nOffset    += mHeaderOffset;
        mWroteHeader = true;
    }
#endif

    // write encoded data
    ALOGV(" writing %d bytes of encoded data on output port", bytes);
    if (bytes > outHeader->nAllocLen - outHeader->nOffset - outHeader->nFilledLen) {
        ALOGE(" not enough space left to write encoded data, dropping %u bytes", bytes);
        // a fatal error would stop the encoding
        return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
    }
    memcpy(outHeader->pBuffer + outHeader->nOffset, buffer, bytes); // copy the data into the OpenMAX IL buffer

    outHeader->nTimeStamp = mCurrentInputTimeStamp;
    outHeader->nOffset = 0;
    outHeader->nFilledLen += bytes;
    outHeader->nFlags = 0;

    mEncoderReturnedEncodedData = true;
    mEncoderReturnedNbBytes += bytes;

    return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
}

Something worth noting:

// FLAC takes samples aligned on 32bit boundaries, use this buffer for the conversion
// before passing the input data to the encoder
FLAC__int32* mInputBufferPcm32; // this has already been discussed above

Just a few hundred lines of code; as these things go, this one counts as simple!

How the data coming back from the driver gets displayed

Want to know how the data coming back from the driver ends up on the screen?
Which layer does that work?

Tracing through the code makes the process clear; the code version here is Jelly Bean.

When CameraHardwareInterface is initialized, it sets up the window-related data.

struct camera_preview_window {
    struct preview_stream_ops nw;
    void *user;
};

struct camera_preview_window mHalPreviewWindow;

status_t initialize(hw_module_t *module)
{
    ALOGI("Opening camera %s", mName.string());
    int rc = module->methods->open(module, mName.string(),
                                   (hw_device_t **)&mDevice);
    if (rc != OK) {
        ALOGE("Could not open camera %s: %d", mName.string(), rc);
        return rc;
    }
    initHalPreviewWindow();
    return rc;
}

void initHalPreviewWindow()
{
    mHalPreviewWindow.nw.cancel_buffer = __cancel_buffer;
    mHalPreviewWindow.nw.lock_buffer = __lock_buffer;
    mHalPreviewWindow.nw.dequeue_buffer = __dequeue_buffer;
    mHalPreviewWindow.nw.enqueue_buffer = __enqueue_buffer;
    mHalPreviewWindow.nw.set_buffer_count = __set_buffer_count;
    mHalPreviewWindow.nw.set_buffers_geometry = __set_buffers_geometry;
    mHalPreviewWindow.nw.set_crop = __set_crop;
    mHalPreviewWindow.nw.set_timestamp = __set_timestamp;
    mHalPreviewWindow.nw.set_usage = __set_usage;
    mHalPreviewWindow.nw.set_swap_interval = __set_swap_interval;

    mHalPreviewWindow.nw.get_min_undequeued_buffer_count =
            __get_min_undequeued_buffer_count;
}

static int __dequeue_buffer(struct preview_stream_ops* w,
                            buffer_handle_t** buffer, int *stride)
{
    int rc;
    ANativeWindow *a = anw(w);
    ANativeWindowBuffer* anb;
    rc = native_window_dequeue_buffer_and_wait(a, &anb);
    if (!rc) {
        *buffer = &anb->handle;
        *stride = anb->stride;
    }
    return rc;
}

static int __enqueue_buffer(struct preview_stream_ops* w,
                  buffer_handle_t* buffer)
{
    ANativeWindow *a = anw(w);
    return a->queueBuffer(a,
              container_of(buffer, ANativeWindowBuffer, handle), -1);
}

Then comes the vendor HAL; TI is the example here.

/**
   @brief Sets ANativeWindow object.

   Preview buffers provided to CameraHal via this object. DisplayAdapter will be interfacing with it
   to render buffers to display.

   @param[in] window The ANativeWindow object created by Surface flinger
   @return NO_ERROR If the ANativeWindow object passes validation criteria
   @todo Define validation criteria for ANativeWindow object. Define error codes for scenarios

 */
status_t CameraHal::setPreviewWindow(struct preview_stream_ops *window)
{
    status_t ret = NO_ERROR;
    CameraAdapter::BuffersDescriptor desc;

    LOG_FUNCTION_NAME;
    mSetPreviewWindowCalled = true;

   ///If the Camera service passes a null window, we destroy existing window and free the DisplayAdapter
    if(!window)
    {
        if(mDisplayAdapter.get() != NULL)
        {
            ///NULL window passed, destroy the display adapter if present
            CAMHAL_LOGDA("NULL window passed, destroying display adapter");
            mDisplayAdapter.clear();
            ///@remarks If there was a window previously existing, we usually expect another valid window to be passed by the client
            ///@remarks so, we will wait until it passes a valid window to begin the preview again
            mSetPreviewWindowCalled = false;
        }
        CAMHAL_LOGDA("NULL ANativeWindow passed to setPreviewWindow");
        return NO_ERROR;
    }else if(mDisplayAdapter.get() == NULL)
    {
        // Need to create the display adapter since it has not been created
        // Create display adapter
        mDisplayAdapter = new ANativeWindowDisplayAdapter();

        ...

        // DisplayAdapter needs to know where to get the CameraFrames from inorder to display
        // Since CameraAdapter is the one that provides the frames, set it as the frame provider for DisplayAdapter
        mDisplayAdapter->setFrameProvider(mCameraAdapter);

        ...

        // Update the display adapter with the new window that is passed from CameraService
        // the window here (ANativeWindowDisplayAdapter::mANativeWindow) is CameraHardwareInterface::mHalPreviewWindow::nw
        ret  = mDisplayAdapter->setPreviewWindow(window);
        if(ret!=NO_ERROR)
            {
            CAMHAL_LOGEB("DisplayAdapter setPreviewWindow returned error %d", ret);
            }

        if(mPreviewStartInProgress)
        {
            CAMHAL_LOGDA("setPreviewWindow called when preview running");
            // Start the preview since the window is now available
            ret = startPreview();
        }
    } else {
        // Update the display adapter with the new window that is passed from CameraService
        ret = mDisplayAdapter->setPreviewWindow(window);
        if ( (NO_ERROR == ret) && previewEnabled() ) {
            restartPreview();
        } else if (ret == ALREADY_EXISTS) {
            // ALREADY_EXISTS should be treated as a noop in this case
            ret = NO_ERROR;
        }
    }
    LOG_FUNCTION_NAME_EXIT;

    return ret;

}

Note the comment I added just above the first mDisplayAdapter->setPreviewWindow(window) call.

TI's HAL wraps things into adapters: one kind fetches the data (e.g. V4L2CameraAdapter, i.e. the CameraAdapter), the other sends the data elsewhere to be consumed (e.g. ANativeWindowDisplayAdapter, i.e. the DisplayAdapter).

In TI's implementation there is a previewThread (inside V4LCameraAdapter) that keeps pulling frames from V4L2 via ioctl.

int V4LCameraAdapter::previewThread()
{
    status_t ret = NO_ERROR;
    int width, height;
    CameraFrame frame;

    if (mPreviewing)
        {
        int index = 0;
        char *fp = this->GetFrame(index);

        uint8_t* ptr = (uint8_t*) mPreviewBufs.keyAt(index);

        int width, height;
        uint16_t* dest = (uint16_t*)ptr;
        uint16_t* src = (uint16_t*) fp;
        mParams.getPreviewSize(&width, &height);

        ...

        mParams.getPreviewSize(&width, &height);
        frame.mFrameType = CameraFrame::PREVIEW_FRAME_SYNC;
        frame.mBuffer = ptr;

        ...

        ret = sendFrameToSubscribers(&frame);

        }

    return ret;
}

char * V4LCameraAdapter::GetFrame(int &index)
{
    int ret;

    mVideoInfo->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    mVideoInfo->buf.memory = V4L2_MEMORY_MMAP;

    /* DQ */
    ret = ioctl(mCameraHandle, VIDIOC_DQBUF, &mVideoInfo->buf);
    if (ret < 0) {
        CAMHAL_LOGEA("GetFrame: VIDIOC_DQBUF Failed");
        return NULL;
    }
    nDequeued++;

    index = mVideoInfo->buf.index;

    return (char *)mVideoInfo->mem[mVideoInfo->buf.index];
}
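
GetFrame() only shows the dequeue side. Once the consumer is done with a frame, the buffer has to be handed back to the driver with VIDIOC_QBUF, otherwise the capture queue runs dry. Below is a minimal sketch of that counterpart, written against the standard V4L2 structures rather than TI's wrapper, with the buffer index as the only parameter.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// give a previously dequeued buffer (identified by its index) back to the driver
static int return_frame_to_driver(int cameraFd, int index)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index  = index;

    if (ioctl(cameraFd, VIDIOC_QBUF, &buf) < 0) {
        return -1;  // driver refused the buffer
    }
    return 0;
}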



class CameraFrame
{
    public:

    enum FrameType
        {
            PREVIEW_FRAME_SYNC = 0x1, ///SYNC implies that the frame needs to be explicitly returned after consuming in order to be filled by camera again
            PREVIEW_FRAME = 0x2   , ///Preview frame includes viewfinder and snapshot frames
            IMAGE_FRAME_SYNC = 0x4, ///Image Frame is the image capture output frame
            IMAGE_FRAME = 0x8,
            VIDEO_FRAME_SYNC = 0x10, ///Timestamp will be updated for these frames
            VIDEO_FRAME = 0x20,
            FRAME_DATA_SYNC = 0x40, ///Any extra data assosicated with the frame. Always synced with the frame
            FRAME_DATA= 0x80,
            RAW_FRAME = 0x100,
            SNAPSHOT_FRAME = 0x200,
            ALL_FRAMES = 0xFFFF   ///Maximum of 16 frame types supported
        };

    enum FrameQuirks
    {
        ENCODE_RAW_YUV422I_TO_JPEG = 0x1 << 0,
        HAS_EXIF_DATA = 0x1 << 1,
    };

    //default contrustor
    CameraFrame():
        ...
        {
        ...
    }

    //copy constructor
    CameraFrame(const CameraFrame &frame) :
        ...
        {
        ...
    }

    void *mCookie;
    void *mCookie2;
    void *mBuffer;
    int mFrameType;
    nsecs_t mTimestamp;
    unsigned int mWidth, mHeight;
    uint32_t mOffset;
    unsigned int mAlignment;
    int mFd;
    size_t mLength;
    unsigned mFrameMask;
    unsigned int mQuirks;
    unsigned int mYuv[2];
    ///@todo add other member vars like  stride etc
};

As for ANativeWindowDisplayAdapter, it configures a FrameProvider for itself. In effect this registers a kind of callback: when data comes back from the driver, the corresponding callback is notified. At this point, though, nothing is actually registered yet.

int ANativeWindowDisplayAdapter::setFrameProvider(FrameNotifier *frameProvider)
{
    LOG_FUNCTION_NAME;

    // Check for NULL pointer
    if ( !frameProvider ) {
        CAMHAL_LOGEA("NULL passed for frame provider");
        LOG_FUNCTION_NAME_EXIT;
        return BAD_VALUE;
    }

    //Release any previous frame providers
    if ( NULL != mFrameProvider ) {
        delete mFrameProvider;
    }

    /** Dont do anything here, Just save the pointer for use when display is
         actually enabled or disabled
    */
    mFrameProvider = new FrameProvider(frameProvider, this, frameCallbackRelay);

    LOG_FUNCTION_NAME_EXIT;

    return NO_ERROR;
}

This callback is registered when preview starts. Also at start-preview time we allocate buffers so that the data coming back from the driver has somewhere to go. The allocation path is:
CameraHal::allocateBuffer(…) -> __dequeue_buffer

Below is starting the live preview, which is where the callback gets registered.

int ANativeWindowDisplayAdapter::enableDisplay(int width, int height, struct timeval *refTime, S3DParameters *s3dParams)
{
    Semaphore sem;
    TIUTILS::Message msg;

    LOG_FUNCTION_NAME;

    if ( mDisplayEnabled )
        {
        CAMHAL_LOGDA("Display is already enabled");
        LOG_FUNCTION_NAME_EXIT;

        return NO_ERROR;
    }

#if 0 //TODO: s3d is not part of bringup...will reenable
    if (s3dParams)
        mOverlay->set_s3d_params(s3dParams->mode, s3dParams->framePacking,
                                    s3dParams->order, s3dParams->subSampling);
#endif

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS

    if ( NULL != refTime )
        {
        Mutex::Autolock lock(mLock);
        memcpy(&mStandbyToShot, refTime, sizeof(struct timeval));
        mMeasureStandby = true;
    }

#endif

    //Send START_DISPLAY COMMAND to display thread. Display thread will start and then wait for a message
    sem.Create();
    msg.command = DisplayThread::DISPLAY_START;

    // Send the semaphore to signal once the command is completed
    msg.arg1 = &sem;

    ///Post the message to display thread
    mDisplayThread->msgQ().put(&msg);

    ///Wait for the ACK - implies that the thread is now started and waiting for frames
    sem.Wait();

    // Register with the frame provider for frames
    mFrameProvider->enableFrameNotification(CameraFrame::PREVIEW_FRAME_SYNC);

    mDisplayEnabled = true;
    mPreviewWidth = width;
    mPreviewHeight = height;

    CAMHAL_LOGVB("mPreviewWidth = %d mPreviewHeight = %d", mPreviewWidth, mPreviewHeight);

    LOG_FUNCTION_NAME_EXIT;

    return NO_ERROR;
}



int FrameProvider::enableFrameNotification(int32_t frameTypes)
{
    LOG_FUNCTION_NAME;
    status_t ret = NO_ERROR;

    ///Enable the frame notification to CameraAdapter (which implements FrameNotifier interface)
    mFrameNotifier->enableMsgType(frameTypes<<MessageNotifier::FRAME_BIT_FIELD_POSITION
                                    , mFrameCallback
                                    , NULL
                                    , mCookie
                                    );

    // this adds the frameCallbackRelay callback to BaseCameraAdapter::mFrameSubscribers
    // see preview_stream_ops_t*  mANativeWindow; and ANativeWindowDisplayAdapter::PostFrame(...)
    // __enqueue_buffer
    // eventually the data is pushed to CameraHardwareInterface's sp<ANativeWindow> mPreviewWindow;
    // so every frame that arrives updates the preview data we see

    LOG_FUNCTION_NAME_EXIT;
    return ret;
}

See the comments embedded in the code above; those are mine.

There is also a displayThread (inside ANativeWindowDisplayAdapter) that listens for messages coming from the Camera HAL and acts on them; that part is not very complicated.

In other words, pushing the data onto the view is done at the HAL layer, even though it is just a matter of calling into ANativeWindow!

This is only a rough outline; we will not dig into the deeper details for now.

No prelinker in Android anymore

In Ice Cream Sandwich the prelinker was removed, for the following reasons:

commit b375e71d306f2fd356b9b356b636e568c4581fa1
Author: Iliyan Malchev
Date: Tue Mar 8 16:19:48 2011 -0800

build: remove prelinker build build system

This patch removes support for prelinking from the build system. By now, the
prelinker has outlived its usefulness for several reasons. Firstly, the
speedup that it afforded in the early days of Android is now nullified by the
speed of hardware, as well as by the presence of Zygote. Secondly, the space
savings that come with prelinking (measued at 17MB on a recent honeycomb
stingray build) are no longer important either. Thirdly, prelinking reduces
the effectiveness of Address-Space-Layout Randomization. Finally, since it is
not part of the gcc suite, the prelinker needs to be maintained separately.

The patch deletes apriori, soslim, lsd, isprelinked, and iself from the source
tree. It also removes the prelink map.

LOCAL_PRELINK_MODULE becomes a no-op. Individual Android.mk will get cleaned
separately. Support for prelinking will have to be removed from the recovery
code and from the dynamic loader as well.

Change-Id: I5839c9c25f7772d5183eedfe20ab924f2a7cd411

Related references (why it was born and why it died):
https://groups.google.com/forum/#!topic/android-building/UGJvYMZaVqw
http://en.wikipedia.org/wiki/Address_space_layout_randomization
http://people.redhat.com/jakub/prelink/prelink.pdf
http://jserv.blogspot.com/2010/10/android.html

So anyone still debating whether it is useful can probably stop now.

How the Android HAL and the vendor HAL implementation are tied together

Taking TI's platform as the example, the question is how these two files,
/path/to/aosp/frameworks/av/services/camera/libcameraservice/CameraHardwareInterface.h

/path/to/aosp/hardware/ti/omap4xxx/camera/CameraHal.cpp
are related to each other.

Who loads camera.$platform$.so? That is the real HAL implementation, and it is what ultimately calls into the driver (through rpmsg-omx1 in between).
This may feel abrupt: perhaps you do not know yet what camera.$platform$.so is. That is fine; as you read on you will see that the vendor HAL implementation is packaged as a .so, and this is that .so.

void CameraService::onFirstRef()
{
    BnCameraService::onFirstRef();

    if (hw_get_module(CAMERA_HARDWARE_MODULE_ID,
                (const hw_module_t **)&mModule) < 0) {
        ALOGE("Could not load camera HAL module");
        mNumberOfCameras = 0;
    }
    else {
        mNumberOfCameras = mModule->get_number_of_cameras();
        if (mNumberOfCameras > MAX_CAMERAS) {
            ALOGE("Number of cameras(%d) > MAX_CAMERAS(%d).",
                    mNumberOfCameras, MAX_CAMERAS);
            mNumberOfCameras = MAX_CAMERAS;
        }
        for (int i = 0; i < mNumberOfCameras; i++) {
            setCameraFree(i);
        }
    }
}

In /path/to/aosp/hardware/libhardware/hardware.c there are the following methods:

hw_get_module(const char *id, const struct hw_module_t **module) // declared here in terms of hw_module_t, but what the vendor HAL implements is camera_module_t
/* Loop through the configuration variants looking for a module */
for (i=0 ; i<HAL_VARIANT_KEYS_COUNT+1 ; i++) {
    if (i < HAL_VARIANT_KEYS_COUNT) {
        if (property_get(variant_keys[i], prop, NULL) == 0) {
            continue;
        }
        snprintf(path, sizeof(path), "%s/%s.%s.so",
                 HAL_LIBRARY_PATH2, name, prop);
        if (access(path, R_OK) == 0) break;

        snprintf(path, sizeof(path), "%s/%s.%s.so",
                 HAL_LIBRARY_PATH1, name, prop);
        if (access(path, R_OK) == 0) break;
    } else {
        snprintf(path, sizeof(path), "%s/%s.default.so",
                 HAL_LIBRARY_PATH1, name);
        if (access(path, R_OK) == 0) break;
    }
}
static const char *variant_keys[] = {
    "ro.hardware",  /* This goes first so that it can pick up a different
                       file on the emulator. */
    "ro.product.board",
    "ro.board.platform",
    "ro.arch"
};

static const int HAL_VARIANT_KEYS_COUNT =
    (sizeof(variant_keys)/sizeof(variant_keys[0]));

/path/to/aosp/out/target/product/panda/system/build.prop
This file contains the platform- and arch-related values.
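
As an illustration (these are made-up but typical values for a panda/OMAP4 build, not copied from an actual build.prop), the loop above reads entries like the following through property_get(); assuming no camera.panda.so exists on the device, the ro.board.platform key is the one that ends up resolving to camera.omap4.so:

ro.hardware=panda
ro.product.board=panda
ro.board.platform=omap4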

/*
 * load the symbols resolving undefined symbols before
 * dlopen returns. Since RTLD_GLOBAL is not or'd in with
 * RTLD_NOW the external symbols will not be global
 */
handle = dlopen(path, RTLD_NOW);

/* Get the address of the struct hal_module_info. */
const char *sym = HAL_MODULE_INFO_SYM_AS_STR;
hmi = (struct hw_module_t *)dlsym(handle, sym);
if (hmi == NULL) {
    ALOGE("load: couldn't find symbol %s", sym);
    status = -EINVAL;
    goto done;
}

*pHmi = hmi; // after the vendor HAL is loaded dynamically, this hw_module_t structure is returned for the Android service/HAL layer to use

So what is this HAL_MODULE_INFO_SYM_AS_STR that gets loaded?

First look at what it is; the definition lives in hardware.h.

/**
 * Name of the hal_module_info
 */
#define HAL_MODULE_INFO_SYM         HMI

/**
 * Name of the hal_module_info as a string
 */
#define HAL_MODULE_INFO_SYM_AS_STR  "HMI"

In principle dlsym simply looks up a variable/function named "HMI", which means the vendor HAL must contain something by that name.
Now look at the definition of hw_module_t.

/**
 * Every hardware module must have a data structure named HAL_MODULE_INFO_SYM
 * and the fields of this data structure must begin with hw_module_t
 * followed by module specific information.
 */
typedef struct hw_module_t {
    /** tag must be initialized to HARDWARE_MODULE_TAG */
    uint32_t tag;

    /**
     * The API version of the implemented module. The module owner is
     * responsible for updating the version when a module interface has
     * changed.
     *
     * The derived modules such as gralloc and audio own and manage this field.
     * The module user must interpret the version field to decide whether or
     * not to inter-operate with the supplied module implementation.
     * For example, SurfaceFlinger is responsible for making sure that
     * it knows how to manage different versions of the gralloc-module API,
     * and AudioFlinger must know how to do the same for audio-module API.
     *
     * The module API version should include a major and a minor component.
     * For example, version 1.0 could be represented as 0x0100. This format
     * implies that versions 0x0100-0x01ff are all API-compatible.
     *
     * In the future, libhardware will expose a hw_get_module_version()
     * (or equivalent) function that will take minimum/maximum supported
     * versions as arguments and would be able to reject modules with
     * versions outside of the supplied range.
     */
    uint16_t module_api_version;
#define version_major module_api_version
    /**
     * version_major/version_minor defines are supplied here for temporary
     * source code compatibility. They will be removed in the next version.
     * ALL clients must convert to the new version format.
     */

    /**
     * The API version of the HAL module interface. This is meant to
     * version the hw_module_t, hw_module_methods_t, and hw_device_t
     * structures and definitions.
     *
     * The HAL interface owns this field. Module users/implementations
     * must NOT rely on this value for version information.
     *
     * Presently, 0 is the only valid value.
     */
    uint16_t hal_api_version;
#define version_minor hal_api_version

    /** Identifier of module */
    const char *id;

    /** Name of this module */
    const char *name;

    /** Author/owner/implementor of the module */
    const char *author;

    /** Modules methods */
    struct hw_module_methods_t* methods;

    /** module's dso */
    void* dso;

    /** padding to 128 bytes, reserved for future use */
    uint32_t reserved[32-7];

} hw_module_t;

The definition lives in hardware.h.

For TI's hardware, what gets loaded is the structure named HMI inside camera.omap4.so (the camera.$platform$.so mentioned earlier).
It takes a careful look to notice that HAL_MODULE_INFO_SYM is in fact HMI, the macro defined in hardware.h.
HMI lives in
/path/to/aosp/hardware/ti/omap4xxx/camera/CameraHal_Module.cpp

camera_module_t HAL_MODULE_INFO_SYM = {
    common: {
         tag: HARDWARE_MODULE_TAG,
         version_major: 1,
         version_minor: 0,
         id: CAMERA_HARDWARE_MODULE_ID,
         name: "TI OMAP CameraHal Module",
         author: "TI",
         methods: &camera_module_methods,
         dso: NULL, /* remove compilation warnings */
         reserved: {0}, /* remove compilation warnings */
    },
    get_number_of_cameras: camera_get_number_of_cameras,
    get_camera_info: camera_get_camera_info,
};

Running 'arm-eabi-objdump' -t camera.omap4.so | grep 'HMI'
also confirms this.

Looking more closely, this common is in fact the hw_module_t structure, which confirms what the comment at the hw_module_t definition says:

/**
* Every hardware module must have a data structure named HAL_MODULE_INFO_SYM
* and the fields of this data structure must begin with hw_module_t
* followed by module specific information.
*/

Note that hardware.h only ever talks about hw_module_t, while the vendor HAL implements its own xxx_module_t; the type in the function signature and the actual type differ. What is the trick that makes this work?
It is simply the first address behind the pointer: the structure defined in the vendor HAL is required to have hw_module_t as its first field, so the addresses line up.
In any case you need to understand this technique Android uses, otherwise you cannot see how the pieces are wired together. Pointers are powerful, but use them carefully or they will bite!
camera_device_t and hw_device_t follow exactly the same pattern.
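
Here is a minimal standalone sketch of that trick (the struct and function names are made up, not the real HAL types): because the specialized struct begins with the generic one, a pointer to it can be handed around as the generic type and cast back later without any offset adjustment.

#include <stdio.h>

struct base_module {            /* plays the role of hw_module_t */
    int tag;
    const char *name;
};

struct camera_module_like {     /* plays the role of camera_module_t */
    struct base_module common;  /* must be the FIRST field so the addresses line up */
    int (*get_number_of_cameras)(void);
};

static int fake_number_of_cameras(void) { return 2; }

static struct camera_module_like MODULE = {
    { 42, "demo" },
    fake_number_of_cameras,
};

int main(void)
{
    /* the loader (hw_get_module/dlsym) only knows about the generic type... */
    struct base_module *generic = (struct base_module *)&MODULE;

    /* ...but the camera service casts it back and reaches the extra fields */
    struct camera_module_like *cam = (struct camera_module_like *)generic;
    printf("%s reports %d cameras\n", cam->common.name, cam->get_number_of_cameras());
    return 0;
}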

#define CAMERA_HARDWARE_MODULE_ID "camera"

The definition lives in camera_common.h.

Some other macros and definitions that get used:

#ifndef _LINUX_LIMITS_H
#define _LINUX_LIMITS_H
#define NR_OPEN 1024
#define NGROUPS_MAX 65536
/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
#define ARG_MAX 131072
#define CHILD_MAX 999
#define OPEN_MAX 256
#define LINK_MAX 127
/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
#define MAX_CANON 255
#define MAX_INPUT 255
#define NAME_MAX 255
#define PATH_MAX 4096
/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
#define PIPE_BUF 4096
#define XATTR_NAME_MAX 255
#define XATTR_SIZE_MAX 65536
#define XATTR_LIST_MAX 65536
/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
#define RTSIG_MAX 32
#endif

For the Camera HAL you need to find:

typedef struct camera_device {
    /**
     * camera_device.common.version must be in the range
     * HARDWARE_DEVICE_API_VERSION(0,0)-(1,FF). CAMERA_DEVICE_API_VERSION_1_0 is
     * recommended.
     */
    hw_device_t common;
    camera_device_ops_t *ops;
    void *priv;
} camera_device_t;

located in camera.h

typedef struct camera_module {
    hw_module_t common;
    int (*get_number_of_cameras)(void);
    int (*get_camera_info)(int camera_id, struct camera_info *info);
} camera_module_t;

located in camera_common.h

With these obstacles out of the way, the rest goes much more smoothly.
Take takePicture as an example: in CameraHal_Module it is associated with CameraHal, so an upper-layer call ends up in the corresponding method of CameraHal.

status_t CameraHal::takePicture( )
{
    status_t ret = NO_ERROR;
    CameraFrame frame;
    CameraAdapter::BuffersDescriptor desc;
    int burst;
    const char *valstr = NULL;
    unsigned int bufferCount = 1;

    Mutex::Autolock lock(mLock);

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS

    gettimeofday(&mStartCapture, NULL);

#endif

    LOG_FUNCTION_NAME;

    if(!previewEnabled() && !mDisplayPaused)
        {
        LOG_FUNCTION_NAME_EXIT;
        CAMHAL_LOGEA("Preview not started...");
        return NO_INIT;
        }

    // return error if we are already capturing
    if ( (mCameraAdapter->getState() == CameraAdapter::CAPTURE_STATE &&
          mCameraAdapter->getNextState() != CameraAdapter::PREVIEW_STATE) ||
         (mCameraAdapter->getState() == CameraAdapter::VIDEO_CAPTURE_STATE &&
          mCameraAdapter->getNextState() != CameraAdapter::VIDEO_STATE) ) {
        CAMHAL_LOGEA("Already capturing an image...");
        return NO_INIT;
    }

    // we only support video snapshot if we are in video mode (recording hint is set)
    valstr = mParameters.get(TICameraParameters::KEY_CAP_MODE);
    if ( (mCameraAdapter->getState() == CameraAdapter::VIDEO_STATE) &&
         (valstr && strcmp(valstr, TICameraParameters::VIDEO_MODE)) ) {
        CAMHAL_LOGEA("Trying to capture while recording without recording hint set...");
        return INVALID_OPERATION;
    }

    if ( !mBracketingRunning )
        {

         if ( NO_ERROR == ret )
            {
            burst = mParameters.getInt(TICameraParameters::KEY_BURST);
            }

         //Allocate all buffers only in burst capture case
         if ( burst > 1 )
             {
             bufferCount = CameraHal::NO_BUFFERS_IMAGE_CAPTURE;
             if ( NULL != mAppCallbackNotifier.get() )
                 {
                 mAppCallbackNotifier->setBurst(true);
                 }
             }
         else
             {
             if ( NULL != mAppCallbackNotifier.get() )
                 {
                 mAppCallbackNotifier->setBurst(false);
                 }
             }

        // this is why the preview stops during a normal still capture, and why startPreview has to be called again after the shot
        // pause preview during normal image capture
        // do not pause preview if recording (video state)
        if (NO_ERROR == ret &&
                NULL != mDisplayAdapter.get() &&
                burst < 1) {
            if (mCameraAdapter->getState() != CameraAdapter::VIDEO_STATE) {
                mDisplayPaused = true;
                mPreviewEnabled = false;
                ret = mDisplayAdapter->pauseDisplay(mDisplayPaused);
                // since preview is paused we should stop sending preview frames too
                if(mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME) {
                    mAppCallbackNotifier->disableMsgType (CAMERA_MSG_PREVIEW_FRAME);
                }
            }

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
            mDisplayAdapter->setSnapshotTimeRef(&mStartCapture);
#endif
        }

        // if we taking video snapshot...
        if ((NO_ERROR == ret) && (mCameraAdapter->getState() == CameraAdapter::VIDEO_STATE)) {
            // enable post view frames if not already enabled so we can internally
            // save snapshot frames for generating thumbnail
            if((mMsgEnabled & CAMERA_MSG_POSTVIEW_FRAME) == 0) {
                mAppCallbackNotifier->enableMsgType(CAMERA_MSG_POSTVIEW_FRAME);
            }
        }

        if ( (NO_ERROR == ret) && (NULL != mCameraAdapter) )
            {
            if ( NO_ERROR == ret )
                ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_IMAGE_CAPTURE,
                                                  ( int ) &frame,
                                                  bufferCount);

            if ( NO_ERROR != ret )
                {
                CAMHAL_LOGEB("CAMERA_QUERY_BUFFER_SIZE_IMAGE_CAPTURE returned error 0x%x", ret);
                }
            }

        if ( NO_ERROR == ret )
            {
            mParameters.getPictureSize(( int * ) &frame.mWidth,
                                       ( int * ) &frame.mHeight);

            // allocate the buffers
            ret = allocImageBufs(frame.mWidth,
                                 frame.mHeight,
                                 frame.mLength,
                                 mParameters.getPictureFormat(),
                                 bufferCount);
            if ( NO_ERROR != ret )
                {
                CAMHAL_LOGEB("allocImageBufs returned error 0x%x", ret);
                }
            }

        if (  (NO_ERROR == ret) && ( NULL != mCameraAdapter ) )
            {
            desc.mBuffers = mImageBufs;
            desc.mOffsets = mImageOffsets;
            desc.mFd = mImageFd;
            desc.mLength = mImageLength;
            desc.mCount = ( size_t ) bufferCount;
            desc.mMaxQueueable = ( size_t ) bufferCount;

            ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_IMAGE_CAPTURE,
                                              ( int ) &desc);
            }
        }

    if ( ( NO_ERROR == ret ) && ( NULL != mCameraAdapter ) )
        {

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS

         //pass capture timestamp along with the camera adapter command
        ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_IMAGE_CAPTURE,  (int) &mStartCapture);

#else

        ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_IMAGE_CAPTURE);

#endif

        }

    return ret;
}

It goes through BaseCameraAdapter (whose takePicture implementation is only a stub) to
status_t OMXCameraAdapter::takePicture(), which adds the action to the command queue for execution; the actual implementation is in OMXCapture::startImageCapture().
There, OMX_FillThisBuffer pushes buffers onto the capture port, and when the capture completes the FillBufferDone callback is received and the callback function is entered.

OMX_ERRORTYPE OMXCameraAdapter::OMXCameraGetHandle(OMX_HANDLETYPE *handle, OMX_PTR pAppData )
{
    OMX_ERRORTYPE eError = OMX_ErrorUndefined;

    for ( int i = 0; i < 5; ++i ) {
        if ( i > 0 ) {
            // sleep for 100 ms before next attempt
            usleep(100000);
        }

        // setup key parameters to send to Ducati during init
        OMX_CALLBACKTYPE oCallbacks;

        // initialize the callback handles // register the callbacks
        oCallbacks.EventHandler    = android::OMXCameraAdapterEventHandler;
        oCallbacks.EmptyBufferDone = android::OMXCameraAdapterEmptyBufferDone;
        oCallbacks.FillBufferDone  = android::OMXCameraAdapterFillBufferDone;

        // get handle
        eError = OMX_GetHandle(handle, (OMX_STRING)"OMX.TI.DUCATI1.VIDEO.CAMERA", pAppData, &oCallbacks);
        // note which library this really is: /path/to/aosp/hardware/ti/omap4xxx/domx/omx_proxy_component/omx_camera/src/omx_proxy_camera.c
        // it ultimately talks to the lower layers via pRPCCtx->fd_omx = open("/dev/rpmsg-omx1", O_RDWR);
        if ( eError == OMX_ErrorNone ) {
            return OMX_ErrorNone;
        }

        CAMHAL_LOGEB("OMX_GetHandle() failed, error: 0x%x", eError);
    }

    *handle = 0;
    return eError;
}

With the callback in place the data keeps propagating upward, and the user finally gets the captured photo data.
CameraAdapter_Factory(size_t sensor_index) is in OMXCameraAdapter.

TI OMAP also implements an alternative path that talks via V4L2; it is just not enabled by default.
It goes through V4LCameraAdapter::UseBuffersPreview
and eventually ends up in V4L2.

For that path, CameraAdapter_Factory() is in V4LCameraAdapter.

How Stagefright interacts with codecs

Stagefright interacts with codecs through OpenMAX IL.

The OpenMAX IL specification headers live in
/path/to/aosp/frameworks/native/include/media/openmax

A quick overview of OpenMAX IL: it consists of clients, components, ports, and tunneling.
A client is the object that drives a component; ports and tunneling are how components communicate with each other, and components may also have private communication channels of their own.
A component's role is closely tied to the kinds of ports it defines. Typically: a component with only an output port is a source; one with only an input port is a sink; one with multiple input ports and one output port is a mux; one with one input port and multiple output ports is a demux; and one with exactly one input and one output port is an intermediate processing stage, which is the most common kind.

Since Android 2.2, Stagefright has sat above OpenMAX IL: it interfaces with OpenMAX IL, which in turn wraps the concrete codecs.

I previously analyzed StagefrightRecorder at http://guoh.org/lifelog/2013/06/android-mediarecorder-architecture/, but at the time I did not look closely at how codecs are integrated.

This post analyzes that logic in detail.

First we run into a few key classes. The first is OMX, whose constructor creates an OMXMaster. Tracing further, OMXMaster is, as its name suggests, a manager.
As the manager, every plugin that gets added is recorded in:

KeyedVector<String8, OMXPluginBase *> mPluginByComponentName; // a codec with a duplicate name is not added again; in other words it works out how many codecs the plugin supports and records them
List<OMXPluginBase *> mPlugins; // mainly responsible for releasing memory (no business logic; it just tracks every plugin that has been added)

How many OMXPlugins does OMXMaster actually add?
And how many OMXMasters are there?
There are as many OMXMasters as there are OMX instances - so how many OMX instances are there?
Just one, see:

sp<IOMX> MediaPlayerService::getOMX() {
    Mutex::Autolock autoLock(mLock);

    if (mOMX.get() == NULL) {
        mOMX = new OMX;
    }

    return mOMX;
}

So how many OMXPlugins are there?

One SoftOMXPlugin, which can load many codecs.
/path/to/aosp/frameworks/av/media/libstagefright/codecs/
All the soft codecs live in the directory above.

It is worth describing the class structure of an actual codec, using FLAC as the example (arrows point in the direction of inheritance):
SoftFlacEncoder -> SimpleSoftOMXComponent -> SoftOMXComponent. SoftOMXComponent wraps the component defined by OpenMAX IL as a C++ class and binds the component operations to methods, but they are all implemented as stubs; the concrete implementations live in its subclasses.
SimpleSoftOMXComponent provides an ALooper and an AHandlerReflector for message passing and implements the three asynchronous methods sendCommand, emptyThisBuffer, and fillThisBuffer (why only these three? because the OpenMAX IL spec requires them to be non-blocking, and here that is achieved by posting and handling messages through the looper/handler), as well as other methods such as useBuffer, allocateBuffer, and freeBuffer.
So an operation that OMXCodec performs on an OpenMAX IL component goes through IOMX, then to OMXNodeInstance, and finally into our SoftOMXComponent; as just mentioned, the concrete implementation is in the subclass, so you only need to find the corresponding place.

The example code below makes this quite clear, using sendCommand as the example.

status_t OMXCodec::init() {
    // mLock is held.
    ...
    err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
    ...
}
status_t OMX::sendCommand(
        node_id node, OMX_COMMANDTYPE cmd, OMX_S32 param) {
    return findInstance(node)->sendCommand(cmd, param);
}
status_t OMXNodeInstance::sendCommand(
        OMX_COMMANDTYPE cmd, OMX_S32 param) {
    Mutex::Autolock autoLock(mLock);

    OMX_ERRORTYPE err = OMX_SendCommand(mHandle, cmd, param, NULL); // note how this macro is defined: it simply calls the component's SendCommand method
    return StatusFromOMXError(err);
}
/** Send a command to the component.  This call is a non-blocking call.
    The component should check the parameters and then queue the command
    to the component thread to be executed.  The component thread shall 
    send the EventHandler() callback at the conclusion of the command. 
    This macro will go directly from the application to the component (via
    a core macro).  The component will return from this call within 5 msec.
    
    When the command is "OMX_CommandStateSet" the component will queue a
    state transition to the new state idenfied in nParam.
    
    When the command is "OMX_CommandFlush", to flush a port's buffer queues,
    the command will force the component to return all buffers NOT CURRENTLY 
    BEING PROCESSED to the application, in the order in which the buffers 
    were received.
    
    When the command is "OMX_CommandPortDisable" or 
    "OMX_CommandPortEnable", the component's port (given by the value of
    nParam) will be stopped or restarted. 
    
    When the command "OMX_CommandMarkBuffer" is used to mark a buffer, the
    pCmdData will point to a OMX_MARKTYPE structure containing the component
    handle of the component to examine the buffer chain for the mark.  nParam1
    contains the index of the port on which the buffer mark is applied.

    Specification text for more details. 
    
    @param [in] hComponent
        handle of component to execute the command
    @param [in] Cmd
        Command for the component to execute
    @param [in] nParam
        Parameter for the command to be executed.  When Cmd has the value 
        OMX_CommandStateSet, value is a member of OMX_STATETYPE.  When Cmd has 
        the value OMX_CommandFlush, value of nParam indicates which port(s) 
        to flush. -1 is used to flush all ports a single port index will 
        only flush that port.  When Cmd has the value "OMX_CommandPortDisable"
        or "OMX_CommandPortEnable", the component's port is given by 
        the value of nParam.  When Cmd has the value "OMX_CommandMarkBuffer"
        the components pot is given by the value of nParam.
    @param [in] pCmdData
        Parameter pointing to the OMX_MARKTYPE structure when Cmd has the value
        "OMX_CommandMarkBuffer".     
    @return OMX_ERRORTYPE
        If the command successfully executes, the return code will be
        OMX_ErrorNone.  Otherwise the appropriate OMX error will be returned.
    @ingroup comp
 */
#define OMX_SendCommand(                                    \
         hComponent,                                        \
         Cmd,                                               \
         nParam,                                            \
         pCmdData)                                          \
     ((OMX_COMPONENTTYPE*)hComponent)->SendCommand(         \
         hComponent,                                        \
         Cmd,                                               \
         nParam,                                            \
         pCmdData)                          /* Macro End */
OMX_ERRORTYPE SimpleSoftOMXComponent::sendCommand(
        OMX_COMMANDTYPE cmd, OMX_U32 param, OMX_PTR data) {
    CHECK(data == NULL);

    sp<AMessage> msg = new AMessage(kWhatSendCommand, mHandler->id());
    msg->setInt32("cmd", cmd);
    msg->setInt32("param", param);
    msg->post();

    return OMX_ErrorNone;
}

Since these calls are asynchronous, there are handlers that run later: in onMessageReceived there are messages of type kWhatSendCommand, kWhatEmptyThisBuffer, and kWhatFillThisBuffer, each dispatched to its own handler.
The other, synchronous, methods follow a similar call path; we will not go through them here, but reading this alongside the code should make them clear!

One more thing worth mentioning is how and when a concrete codec is created: a soft codec is created in SoftOMXPlugin::makeComponentInstance by calling that codec's createSoftOMXComponent method. In other words, our codec implementation must contain a method with exactly that name; for FLAC it is:

android::SoftOMXComponent *createSoftOMXComponent(
        const char *name, const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData, OMX_COMPONENTTYPE **component) {
    return new android::SoftFlacEncoder(name, callbacks, appData, component);
}

How many vendor plugins (i.e. libstagefrighthw) are there?
Also just one, but it likewise contains many codecs. Taking TI's hardware as the example:
TIOMXPlugin (the vendor plugin) lives in /path/to/aosp/hardware/ti/omap4xxx/libstagefrighthw/
but the concrete implementation is in /path/to/aosp/hardware/ti/omap4xxx/domx/omx_core/ (i.e. libOMX_Core.so).
To instantiate a codec, the steps are:
TIOMXPlugin.cpp (makeComponentInstance) -> OMX_Core_Wrapper.c (TIOMX_GetHandle) -> OMX_Core.c (OMX_GetHandle) -> dynamically load the specified codec's .so and call its init entry point (OMX_ComponentInit); that entry-point name presumably comes from the spec (the spec mentions it, and there is an error code for it).

/** The component specified did not have a "OMX_ComponentInit" or
  "OMX_ComponentDeInit entry point */
OMX_ErrorInvalidComponent = (OMX_S32) 0x80001004,

The codecs it supports are under /path/to/aosp/hardware/ti/omap4xxx/domx/omx_proxy_component/
although the final codec implementations may of course live elsewhere!

The methods below are the standard OMX Core methods defined by OpenMAX IL (see OMX_Core.h); TI implements them in OMX_Core.c. They form the interface for managing the components wrapped at the OpenMAX IL layer, including the core's own lifecycle and the communication between components; see the sketch after this list.

OMX_API OMX_ERRORTYPE OMX_Init
OMX_API OMX_ERRORTYPE OMX_Deinit
OMX_API OMX_ERRORTYPE OMX_ComponentNameEnum
OMX_API OMX_ERRORTYPE OMX_GetHandle
OMX_API OMX_ERRORTYPE OMX_FreeHandle
OMX_API OMX_ERRORTYPE OMX_SetupTunnel
OMX_API OMX_ERRORTYPE OMX_GetContentPipe
OMX_API OMX_ERRORTYPE OMX_GetComponentsOfRole
OMX_API OMX_ERRORTYPE OMX_GetRolesOfComponent

In other words, OMX is the window through which all codecs are operated; it manages the soft/vendor plugins via OMXMaster, and each plugin holds its own codecs.
A typical example: OMX has the allocateNode method, which is needed whenever a codec is added; in essence it calls down through OMXMaster -> OMXPlugin -> OMX_Core -> the instantiation entry point of the specific codec.

status_t OMX::allocateNode(
        const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    Mutex::Autolock autoLock(mLock);

    *node = 0;

    OMXNodeInstance *instance = new OMXNodeInstance(this, observer);

    OMX_COMPONENTTYPE *handle; // declare an OMX_COMPONENTTYPE pointer

    // makeComponentInstance then creates the component; the handle is returned
    // through the double pointer (an OUT parameter).
    // Digging in, you will find that the codec ends up calling a function like this
    // (for TI's HW it lives in omx_proxy_common.c):
    //     /* Calling Proxy Common Init() */
    //     eError = OMX_ProxyCommonInit(hComponent);
    // which initializes the fields of OMX_COMPONENTTYPE, e.g. OMX_COMPONENTTYPE::SetCallbacks and so on

    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks,
            instance, &handle);

    if (err != OMX_ErrorNone) {
        ALOGV("FAILED to allocate omx component '%s'", name);

        instance->onGetHandleFailed();

        return UNKNOWN_ERROR;
    }

    *node = makeNodeID(instance);
    mDispatchers.add(*node, new CallbackDispatcher(instance));

    instance->setHandle(*node, handle);

    mLiveNodes.add(observer->asBinder(), instance);
    observer->asBinder()->linkToDeath(this);

    return OK;
}

Two points are worth noting:
a)

status_t OMXClient::connect() {
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);

    // defaultServiceManager (declared in /path/to/aosp/frameworks/native/include/binder/IServiceManager.h)
    // Calling it this way is the same as calling getOMX through a local service pointer,
    // because StagefrightRecorder is itself new'ed inside MediaPlayerService.
    // We experimented with this before; see http://guoh.org/lifelog/2013/06/sample-of-customized-service-on-android-platform/ ("How to add a system-level service to Android").

    CHECK(service.get() != NULL);

    mOMX = service->getOMX(); // BnOMX
    CHECK(mOMX.get() != NULL);

    // In what case would it not live locally? OMX seems to be new'ed inside MediaPlayerService,
    // OMXClient is new'ed inside StagefrightRecorder, and StagefrightRecorder is in turn
    // new'ed inside MediaPlayerService, so when would we ever take the
    // "using client-side OMX mux" branch below (MuxOMX lives in OMXClient.cpp)?
    if (!mOMX->livesLocally(NULL /* node */, getpid())) {
        ALOGI("Using client-side OMX mux.");
        mOMX = new MuxOMX(mOMX); // 继承自IOMX
    }

    return OK;
}

It is easy to see that OMXClient is the window for getting hold of OMX.
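
A quick usage sketch (this mirrors what StagefrightRecorder::setupVideoEncoder does further below):

// minimal OMXClient usage; connect() fetches IOMX from MediaPlayerService
OMXClient client;
CHECK_EQ(client.connect(), (status_t)OK);

sp<IOMX> omx = client.interface();   // the (possibly remote) IOMX interface
// omx is then handed to OMXCodec::Create(), which calls omx->allocateNode(...) as shown above.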

b)

sp<MediaSource> encoder = OMXCodec::Create(
        client.interface(), enc_meta,
        true /* createEncoder */, cameraSource,
        NULL, encoder_flags);

So what are OMXCodec
/path/to/aosp/frameworks/av/include/media/stagefright
/path/to/aosp/frameworks/av/media/libstagefright
and OMXNodeInstance
/path/to/aosp/frameworks/av/media/libstagefright/include
/path/to/aosp/frameworks/av/media/libstagefright/omx
actually for?
OMXCodec, OMXCodecObserver and OMXNodeInstance correspond one-to-one;
roughly speaking, the three of them together make up one OpenMAX IL component, and each node is the identifier of one codec on the OMX service side.
There is also CallbackDispatcher, which handles the messages coming back from the codec. It receives them through its post/loop/dispatch machinery, and they travel all the way up via IOMX::onMessage -> OMXNodeInstance::onMessage -> OMXCodecObserver::onMessage -> OMXCodec::on_message. The messages exist in the first place because we registered OMXNodeInstance::kCallbacks with the codec; see:

status_t OMX::allocateNode(
        const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    ...
    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks,
            instance, &handle);
    ...
    return OK;
}

kCallbacks covers three kinds of events:

OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks = {
    &OnEvent, &OnEmptyBufferDone, &OnFillBufferDone
};

Each of these in turn calls its owner's OnEvent/OnEmptyBufferDone/OnFillBufferDone:

// static
OMX_ERRORTYPE OMXNodeInstance::OnEvent(
        OMX_IN OMX_HANDLETYPE hComponent,
        OMX_IN OMX_PTR pAppData,
        OMX_IN OMX_EVENTTYPE eEvent,
        OMX_IN OMX_U32 nData1,
        OMX_IN OMX_U32 nData2,
        OMX_IN OMX_PTR pEventData) {
    OMXNodeInstance *instance = static_cast<OMXNodeInstance *>(pAppData);
    if (instance->mDying) {
        return OMX_ErrorNone;
    }
    return instance->owner()->OnEvent(
            instance->nodeID(), eEvent, nData1, nData2, pEventData);
}

And the owner in turn calls its dispatcher's post method, as follows:

OMX_ERRORTYPE OMX::OnEvent(
        node_id node,
        OMX_IN OMX_EVENTTYPE eEvent,
        OMX_IN OMX_U32 nData1,
        OMX_IN OMX_U32 nData2,
        OMX_IN OMX_PTR pEventData) {
    ALOGV("OnEvent(%d, %ld, %ld)", eEvent, nData1, nData2);

    omx_message msg;
    msg.type = omx_message::EVENT;
    msg.node = node;
    msg.u.event_data.event = eEvent;
    msg.u.event_data.data1 = nData1;
    msg.u.event_data.data2 = nData2;

    findDispatcher(node)->post(msg);

    return OMX_ErrorNone;
}

With this, all the events are strung together: every message has a source and a final destination!

Putting these pieces together, we can say that what gets created here is indeed a component.

One more thing worth mentioning: as discussed earlier, components can also exchange data with each other. When recording video, for example, CameraSource is held by OMXCodec as its data source, so the data read from OMXCodec actually comes from CameraSource, and it is then written to the file by a MediaWriter such as MPEG4Writer.

OMX_GetComponentVersion
OMX_SendCommand
OMX_GetParameter
OMX_SetParameter
OMX_GetConfig
OMX_SetConfig
OMX_GetExtensionIndex
OMX_GetState
OMX_UseBuffer
OMX_AllocateBuffer
OMX_FreeBuffer
OMX_EmptyThisBuffer
OMX_FillThisBuffer
OMX_UseEGLImage

The macros above live in OMX_Core.h and are defined by OpenMAX.
They wrap each individual component and are responsible for talking to the concrete codec; for TI's HW these functions are implemented in omx_proxy_common.c.

Other notes:
1. HardwareAPI.h
extern android::OMXPluginBase *createOMXPlugin();
declares the entry point for instantiating a plugin; OMXMaster loads and calls this function dynamically.

2. For Qualcomm's HW, its libstagefrighthw implementation lives here:
libomxcore.so /path/to/aosp/hardware/qcom/media/mm-core/omxcore

3. Another key topic is how OpenMAX IL itself works and how its parts talk to each other; this matters quite a bit.
As everyone knows, the biggest role of OpenMAX IL is that of an adaptation layer: it wraps the parts of the lower, vendor-specific stack that tend to change (different codecs, different vendors, or other kinds of variation) and presents a unified interface to the layer above (here, Stagefright).

A component can be in one of the following states; these are defined in OMXCodec:

enum State {
    DEAD,
    LOADED,
    LOADED_TO_IDLE,
    IDLE_TO_EXECUTING,
    EXECUTING,
    EXECUTING_TO_IDLE,
    IDLE_TO_LOADED,
    RECONFIGURING,
    ERROR
};

Interaction between components, e.g. tunneled communication, is done through a component's ComponentTunnelRequest method (TI's HW implements it in omx_proxy_common.c). Judging from the code, though, neither Qualcomm's nor TI's camera path actually uses tunneled communication, and Qualcomm does not implement ComponentTunnelRequest at all.

The buffer transport looks complicated at first glance, but the flow is actually fairly clear. In the camera recording scenario, MediaWriter keeps calling the MediaSource's read method to pull data out of the middle, while the MediaSource throttles this internally with a condition variable (Condition mFrameAvailableCondition;). The two most important OMXCodec methods are drainInputBuffer and fillOutputBuffer: drain the input buffers (data coming from the source) and fill the output buffers (data destined for the writer). drainInputBuffer sends data to the actual codec; once the codec has taken it, it sends back an EMPTY_BUFFER_DONE message, and on receiving that OMXCodec keeps feeding more data to the codec. At the same time OMXCodec also runs fillOutputBuffer, i.e. it hands the codec an empty buffer; when the codec finishes encoding it sends back FILL_BUFFER_DONE, which carries the encoded data, i.e. the filled buffer, so MediaWriter gets the encoded data and only has to save it. For the finer details it is better to trace the code yourself. The sharp reader will have guessed that this implies two sets of buffers; yes, you are right, see List<size_t> mFilledBuffers;
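
A very rough pseudo-code view of that loop (not the real OMXCodec code, which also handles flushing, port reconfiguration, EOS and timestamps) might look like this:

// Compressed sketch of the OMXCodec buffer loop described above; the message and member
// names follow IOMX.h / OMXCodec.cpp, but the findInputBuffer/findOutputBufferIndex helpers
// are placeholders and this is not compilable on its own.
void OMXCodec::drainInputBuffer(BufferInfo *info) {
    MediaBuffer *srcBuffer;
    mSource->read(&srcBuffer);                               // pull raw data from CameraSource
    memcpy(info->mData, srcBuffer->data(), srcBuffer->range_length());

    // hand the buffer to the codec; EMPTY_BUFFER_DONE will come back once it is consumed
    mOMX->emptyBuffer(mNode, info->mBuffer,
                      0, srcBuffer->range_length(), 0, /* timestamp */ 0);
}

void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        case omx_message::EMPTY_BUFFER_DONE:
            // the codec consumed an input buffer -> refill it with the next chunk of source data
            drainInputBuffer(findInputBuffer(msg.u.buffer_data.buffer));
            break;

        case omx_message::FILL_BUFFER_DONE:
            // the codec produced encoded data -> remember the buffer and wake up read()
            mFilledBuffers.push_back(findOutputBufferIndex(msg.u.extended_buffer_data.buffer));
            mBufferFilled.signal();
            break;

        default:
            break;
    }
}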

At the risk of repeating myself, here is the actual flow once more (MPEG4 recording as the example):
MPEG4Writer:
Once reading starts, drainInputBuffers() is called. It takes a group of buffers from kPortIndexInput, first reads data from CameraSource into such a buffer, then hands the buffer to the actual encoder via IOMX's emptyBuffer, and then waits in a loop for the encoder to fill an output buffer (taken from kPortIndexOutput). When the encoder's FILL_BUFFER_DONE comes back, the data carried by the message is put into the output buffer, the buffer's index is recorded in mFilledBuffers, and mBufferFilled.signal() is broadcast. In other words, OMXCodec's read is synchronous: a read only finishes once data has been fetched from CameraSource and the encoder has encoded it and returned the result. MPEG4Writer then copies that buffer and releases the original (the buffer is shared by everybody, so MPEG4Writer does not want to, and should not, hold it for long; the time spent writing to the file is not bounded, so this is essentially trading space for time).

Buffer allocation:
The MediaBuffer for an input buffer is allocated inside CameraSource and released inside OMXCodec (when EMPTY_BUFFER_DONE comes back, the buffer is released if it has not been released yet).
The MediaBuffer for an output buffer has several cases: it may be allocated at allocateBuffer time or when FILL_BUFFER_DONE comes back, and it is released after MediaWriter has read the processed data and made its own copy of it.

End of stream:
Normally the decoder or encoder tells us about EOS, but for the cases where it does not, OMXCodec has a workaround: it counts the buffers that are not held by the OMX component, and once no buffer is held by the component any more it assumes EOS has been reached.
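
Conceptually that workaround is just a head count over the buffer bookkeeping, along these lines (illustrative only; the field names in OMXCodec's BufferInfo differ between Android versions):

// Illustrative sketch of the EOS workaround described above: if the component no longer
// owns any buffer on either port, treat the stream as finished. mOwnedByComponent is a
// stand-in for whatever ownership flag/status the BufferInfo bookkeeping actually uses.
static bool allBuffersReturned(const Vector<BufferInfo> &inputBuffers,
                               const Vector<BufferInfo> &outputBuffers) {
    for (size_t i = 0; i < inputBuffers.size(); ++i) {
        if (inputBuffers[i].mOwnedByComponent) return false;
    }
    for (size_t i = 0; i < outputBuffers.size(); ++i) {
        if (outputBuffers[i].mOwnedByComponent) return false;
    }
    return true;   // nothing left inside the codec -> assume EOS
}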

Android Odds and Ends

Here I jot down frequently encountered Android/AOSP/PandaBoard issues (usually small things that do not really belong anywhere else)!

1. Getting root privilege on PandaBoard (adb remount)

/path/to/aosp/out/target/product/panda/root/default.prop

Change the setting to ro.secure=0
and repack the image; in theory this should work for any ROM!
Of course, in my case the ROM is built from AOSP source and then flashed; if you just have an ordinary ROM and want to root it, follow the official instructions or methods found online!

Android MediaRecorder System Structure

We analysed the Camera implementation earlier; now let's look at how MediaRecorder is implemented. I will not pay much attention to its layering here; what I care about is its logic!

APP layer /path/to/aosp/frameworks/base/media/java/android/media/MediaRecorder.java
JNI layer /path/to/aosp/frameworks/base/media/jni/android_media_MediaRecorder.cpp
which calls the NATIVE-layer MediaRecorder (a BnMediaRecorderClient here)
header /path/to/aosp/frameworks/av/include/media/mediarecorder.h
implementation /path/to/aosp/frameworks/av/media/libmedia/mediarecorder.cpp

MediaRecorder::MediaRecorder() : mSurfaceMediaSource(NULL)
{
    ALOGV("constructor");

    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != NULL) {
        mMediaRecorder = service->createMediaRecorder(getpid());
    }
    if (mMediaRecorder != NULL) {
        mCurrentState = MEDIA_RECORDER_IDLE;
    }

    doCleanUp();
}

The getMediaPlayerService() method lives in /path/to/aosp/frameworks/av/include/media/IMediaDeathNotifier.h

After obtaining the MediaPlayerService (a BpMediaPlayerService),
the following method of IMediaPlayerService is called:

sp<IMediaRecorder> MediaPlayerService::createMediaRecorder(pid_t pid)
{
    sp<MediaRecorderClient> recorder = new MediaRecorderClient(this, pid);
    wp<MediaRecorderClient> w = recorder;
    Mutex::Autolock lock(mLock);
    mMediaRecorderClients.add(w);
    ALOGV("Create new media recorder client from pid %d", pid);
    return recorder;
}

which creates a MediaRecorderClient (a BnMediaRecorder here).

What the caller gets back through binder, however, is a BpMediaRecorder,
because of the following interface_cast step:

virtual sp<IMediaRecorder> createMediaRecorder(pid_t pid)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeInt32(pid);
    remote()->transact(CREATE_MEDIA_RECORDER, data, &reply);
    return interface_cast<IMediaRecorder>(reply.readStrongBinder());
}

MediaRecorderClient in turn creates a StagefrightRecorder (a MediaRecorderBase), which lives at
/path/to/aosp/frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp

For now we can treat APP/JNI/NATIVE as living in one process, and the MediaRecorderClient/StagefrightRecorder inside MediaPlayerService as living in another; the two sides talk over binder, and we have both the Bp and Bn ends in hand. From here on we will not carefully distinguish Bp from Bn.

On the client side:
BnMediaRecorderClient
BpMediaRecorder
BpMediaPlayerService

On the service side:
BpMediaRecorderClient (if the client needs to be notified, the service can obtain this Bp)
BnMediaRecorder
BnMediaPlayerService

Here is a diagram (click through for the full-size original):
Android MediaRecorder Diagram

Let's take starting a recording as the example, i.e. start().

Here the flow splits into two paths: a CameraSource and an MPEG4Writer (sp<MediaWriter> mWriter).
Both classes live in /path/to/aosp/frameworks/av/media/libstagefright/.

status_t StagefrightRecorder::startMPEG4Recording() {
    int32_t totalBitRate;
    status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);
    if (err != OK) {
        return err;
    }

    int64_t startTimeUs = systemTime() / 1000;
    sp<MetaData> meta = new MetaData;
    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);

    err = mWriter->start(meta.get());
    if (err != OK) {
        return err;
    }

    return OK;
}
status_t StagefrightRecorder::setupMPEG4Recording(
        int outputFd,
        int32_t videoWidth, int32_t videoHeight,
        int32_t videoBitRate,
        int32_t *totalBitRate,
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();
    *totalBitRate = 0;
    status_t err = OK;
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);

    if (mVideoSource < VIDEO_SOURCE_LIST_END) {

        sp<MediaSource> mediaSource;
        err = setupMediaSource(&mediaSource); // very important
        if (err != OK) {
            return err;
        }

        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder); // very important
        if (err != OK) {
            return err;
        }

        writer->addSource(encoder);
        *totalBitRate += videoBitRate;
    }

    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer); // very important
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }

    ...

    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}
// Set up the appropriate MediaSource depending on the chosen option
status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);
        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}
status_t StagefrightRecorder::setupCameraSource(
        sp<CameraSource> *cameraSource) {
    status_t err = OK;
    if ((err = checkVideoEncoderCapabilities()) != OK) {
        return err;
    }
    Size videoSize;
    videoSize.width = mVideoWidth;
    videoSize.height = mVideoHeight;
    if (mCaptureTimeLapse) {
        if (mTimeBetweenTimeLapseFrameCaptureUs < 0) {
            ALOGE("Invalid mTimeBetweenTimeLapseFrameCaptureUs value: %lld",
                mTimeBetweenTimeLapseFrameCaptureUs);
            return BAD_VALUE;
        }

        mCameraSourceTimeLapse = CameraSourceTimeLapse::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId,
                videoSize, mFrameRate, mPreviewSurface,
                mTimeBetweenTimeLapseFrameCaptureUs);
        *cameraSource = mCameraSourceTimeLapse;
    } else {
        *cameraSource = CameraSource::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId, videoSize, mFrameRate,
                mPreviewSurface, true /*storeMetaDataInVideoBuffers*/);
    }
    mCamera.clear();
    mCameraProxy.clear();
    if (*cameraSource == NULL) {
        return UNKNOWN_ERROR;
    }

    if ((*cameraSource)->initCheck() != OK) {
        (*cameraSource).clear();
        *cameraSource = NULL;
        return NO_INIT;
    }

    // When frame rate is not set, the actual frame rate will be set to
    // the current frame rate being used.
    if (mFrameRate == -1) {
        int32_t frameRate = 0;
        CHECK ((*cameraSource)->getFormat()->findInt32(
                    kKeyFrameRate, &frameRate));
        ALOGI("Frame rate is not explicitly set. Use the current frame "
             "rate (%d fps)", frameRate);
        mFrameRate = frameRate;
    }

    CHECK(mFrameRate != -1);

    mIsMetaDataStoredInVideoBuffers =
        (*cameraSource)->isMetaDataStoredInVideoBuffers();

    return OK;
}
status_t StagefrightRecorder::setupVideoEncoder(
        sp<MediaSource> cameraSource,
        int32_t videoBitRate,
        sp<MediaSource> *source) {
    source->clear();

    sp<MetaData> enc_meta = new MetaData;
    enc_meta->setInt32(kKeyBitRate, videoBitRate);
    enc_meta->setInt32(kKeyFrameRate, mFrameRate);

    switch (mVideoEncoder) {
        case VIDEO_ENCODER_H263:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
            break;

        case VIDEO_ENCODER_MPEG_4_SP:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
            break;

        case VIDEO_ENCODER_H264:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
            break;

        default:
            CHECK(!"Should not be here, unsupported video encoding.");
            break;
    }

    sp<MetaData> meta = cameraSource->getFormat();

    int32_t width, height, stride, sliceHeight, colorFormat;
    CHECK(meta->findInt32(kKeyWidth, &width));
    CHECK(meta->findInt32(kKeyHeight, &height));
    CHECK(meta->findInt32(kKeyStride, &stride));
    CHECK(meta->findInt32(kKeySliceHeight, &sliceHeight));
    CHECK(meta->findInt32(kKeyColorFormat, &colorFormat));

    enc_meta->setInt32(kKeyWidth, width);
    enc_meta->setInt32(kKeyHeight, height);
    enc_meta->setInt32(kKeyIFramesInterval, mIFramesIntervalSec);
    enc_meta->setInt32(kKeyStride, stride);
    enc_meta->setInt32(kKeySliceHeight, sliceHeight);
    enc_meta->setInt32(kKeyColorFormat, colorFormat);
    if (mVideoTimeScale > 0) {
        enc_meta->setInt32(kKeyTimeScale, mVideoTimeScale);
    }
    if (mVideoEncoderProfile != -1) {
        enc_meta->setInt32(kKeyVideoProfile, mVideoEncoderProfile);
    }
    if (mVideoEncoderLevel != -1) {
        enc_meta->setInt32(kKeyVideoLevel, mVideoEncoderLevel);
    }

    OMXClient client;
    CHECK_EQ(client.connect(), (status_t)OK);

    uint32_t encoder_flags = 0;
    if (mIsMetaDataStoredInVideoBuffers) {
        encoder_flags |= OMXCodec::kStoreMetaDataInVideoBuffers;
    }

    // Do not wait for all the input buffers to become available.
    // This give timelapse video recording faster response in
    // receiving output from video encoder component.
    if (mCaptureTimeLapse) {
        encoder_flags |= OMXCodec::kOnlySubmitOneInputBufferAtOneTime;
    }

    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);
    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }

    *source = encoder;

    return OK;
}

This is where things hook up with OMXCodec.
There is a configuration file called media_codecs.xml that declares which codecs a device supports.

When we record MPEG-4 there is audio as well, so there is also a setupAudioEncoder later on. I will not expand on that method; in short it adds the audio as another Track to the MPEG4Writer.
As an aside: Google says setupAudioEncoder is placed at the end to avoid recording the start-recording beep into the file, but in practice this is still buggy; on some devices that sound still ends up in the recording, and everyone who hits this works around it by having the app play the sound itself.

MPEG4Writer also has
start(MetaData*)
which kicks off two things:
a) startWriterThread

which starts a thread that does the writing

    void MPEG4Writer::threadFunc() {
        ALOGV("threadFunc");

        prctl(PR_SET_NAME, (unsigned long)"MPEG4Writer", 0, 0, 0);

        Mutex::Autolock autoLock(mLock);
        while (!mDone) {
            Chunk chunk;
            bool chunkFound = false;

            while (!mDone && !(chunkFound = findChunkToWrite(&chunk))) {
                mChunkReadyCondition.wait(mLock);
            }

            // Actual write without holding the lock in order to
            // reduce the blocking time for media track threads.
            if (chunkFound) {
                mLock.unlock();
                writeChunkToFile(&chunk);
                mLock.lock();
            }
        }

        writeAllChunks();
    }

b) startTracks

    status_t MPEG4Writer::startTracks(MetaData *params) {
        for (List<Track *>::iterator it = mTracks.begin();
             it != mTracks.end(); ++it) {
            status_t err = (*it)->start(params);

            if (err != OK) {
                for (List<Track *>::iterator it2 = mTracks.begin();
                     it2 != it; ++it2) {
                    (*it2)->stop();
                }

                return err;
            }
        }
        return OK;
    }

Each Track's start method is then called:

    status_t MPEG4Writer::Track::start(MetaData *params) {
        ...

        initTrackingProgressStatus(params);

        ...

        status_t err = mSource->start(meta.get()); // this ends up running CameraSource::start(); the two are tied together

        ...

        pthread_create(&mThread, &attr, ThreadWrapper, this);
        return OK;
    }

    void *MPEG4Writer::Track::ThreadWrapper(void *me) {
        Track *track = static_cast<Track *>(me);

        status_t err = track->threadEntry();
        return (void *) err;
    }

status_t MPEG4Writer::Track::threadEntry()
runs on yet another newly started thread. In a loop it keeps reading data from CameraSource (read); the data inside CameraSource of course comes back from the driver (see CameraSourceListener: CameraSource keeps the frames arriving from the driver in a list called mFramesReceived and calls mFrameAvailableCondition.signal when data arrives; frames received before recording has started are dropped; note that MediaWriter first starts CameraSource's start method and only then starts writing the track), and then writes it to the file.
Note: strictly speaking, what MPEG4Writer reads here is the data in OMXCodec, because the data first reaches CameraSource, the codec encodes it, and only then does MPEG4Writer write it to the file! For how the data is passed between CameraSource/OMXCodec/MPEG4Writer, see the part about buffer transport in http://guoh.org/lifelog/2013/06/interaction-between-stagefright-and-codec/.
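
Stripped of all the MP4 bookkeeping, the track thread is essentially the following loop (an illustrative sketch; the real MPEG4Writer::Track::threadEntry is much longer, and bufferChunk() here is just a placeholder for the chunk/writer machinery):

// Illustrative core of the track thread: keep pulling encoded buffers from the source
// (the OMXCodec encoder, which in turn reads from CameraSource) and queue private copies
// for the writer thread until we are told to stop.
status_t MPEG4Writer::Track::threadEntry() {
    MediaBuffer *buffer;
    while (!mDone && mSource->read(&buffer) == OK) {   // blocks until a frame has been encoded
        MediaBuffer *copy = new MediaBuffer(buffer->range_length());
        memcpy(copy->data(),
               (const uint8_t *)buffer->data() + buffer->range_offset(),
               buffer->range_length());

        buffer->release();        // return the shared buffer as early as possible
        buffer = NULL;

        bufferChunk(copy);        // placeholder: hand the private copy over for writing
    }
    return OK;
}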

Looking back, what does Stagefright actually do? To me it is mostly glue: it sits at the MediaPlayerService layer and binds MediaSource, MediaWriter, the codecs and the upper-level MediaRecorder together. That is probably its biggest role. Google replacing OpenCore with it also fits its usual engineering-minded style (as opposed to the more complicated academic style; many Google things are complex too, but it generally tries to solve problems in the simplest possible way).
What feels a little odd is that MediaRecorder lives inside MediaPlayerService; the two look like opposites. Maybe one day they will be renamed, or split apart, who knows~~

Of course this is only a rough overview; I will try to analyse the codec side separately later!

Some details are not covered above; here are a few points worth paying attention to:

1. Time-lapse recording
The counterpart of CameraSource here is CameraSourceTimeLapse.

Concretely, inside
dataCallbackTimestamp
there is skipCurrentFrame.

It uses a few variables for bookkeeping and the maths:
mTimeBetweenTimeLapseVideoFramesUs (1E6/videoFrameRate) // the interval between two output frames
mLastTimeLapseFrameRealTimestampUs // when the previous kept frame occurred
From the frame rate it works out how far apart two kept frames should be, and everything in between is dropped via releaseOneRecordingFrame.
In other words, what the driver returns is unchanged; we simply deal with (drop) it ourselves at the SW layer. A small sketch follows the link below.

For more on time-lapse, see
https://en.wikipedia.org/wiki/Time-lapse_photography
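
A stripped-down sketch of the frame-dropping decision (illustrative; the real CameraSourceTimeLapse also rewrites the timestamps of the kept frames so playback runs at the normal frame rate):

// Illustrative sketch of time-lapse frame dropping: keep a frame only when enough real time
// has passed since the last kept frame; everything in between goes back to the camera via
// releaseOneRecordingFrame(). Variable names follow the ones mentioned above; the real
// CameraSourceTimeLapse logic differs in detail.
bool skipCurrentFrame(int64_t timestampUs) {
    if (mLastTimeLapseFrameRealTimestampUs == 0) {
        mLastTimeLapseFrameRealTimestampUs = timestampUs;   // always keep the very first frame
        return false;
    }
    if (timestampUs <
            mLastTimeLapseFrameRealTimestampUs + mTimeBetweenTimeLapseFrameCaptureUs) {
        return true;    // too soon -> drop this frame (releaseOneRecordingFrame)
    }
    mLastTimeLapseFrameRealTimestampUs = timestampUs;
    return false;       // keep it; its timestamp will be rewritten for the output stream
}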

2. When the Camera is needed during recording, it is accessed through ICameraRecordingProxy, i.e. the RecordingProxy inside Camera (a BnCameraRecordingProxy).
Once the ICameraRecordingProxy has been passed to the service-side process over binder, it becomes a Bp, as below:

case SET_CAMERA: {
    ALOGV("SET_CAMERA");
    CHECK_INTERFACE(IMediaRecorder, data, reply);
    sp<ICamera> camera = interface_cast<ICamera>(data.readStrongBinder());
    sp<ICameraRecordingProxy> proxy =
        interface_cast<ICameraRecordingProxy>(data.readStrongBinder());
    reply->writeInt32(setCamera(camera, proxy));
    return NO_ERROR;
} break;

CameraSource then uses it like this:

// We get the proxy from Camera, not ICamera. We need to get the proxy
// to the remote Camera owned by the application. Here mCamera is a
// local Camera object created by us. We cannot use the proxy from
// mCamera here.
mCamera = Camera::create(camera);
if (mCamera == 0) return -EBUSY;
mCameraRecordingProxy = proxy;
mCameraFlags |= FLAGS_HOT_CAMERA;

Open questions:

What is this member in CameraSource,
List<sp<IMemory> > mFramesBeingEncoded;
for?
Each time a frame has been encoded, CameraSource keeps it around, and only when the buffer is released are these frames released in turn. Is this done for efficiency? Why not release each frame as soon as it has been encoded?
And I cannot help marvelling once more at Google's frequent use of delete this; -- elegant, but it does look unusual!

How to Add a System-Level Service to Android

The content here comes from material on the net, the source code, and my own understanding; if anything is wrong, please point it out! The sample source code may be used freely.

The environment I used is as follows:

PLATFORM_VERSION_CODENAME=AOSP
PLATFORM_VERSION=4.0.9.99.999.9999.99999
TARGET_PRODUCT=full_panda
TARGET_BUILD_VARIANT=userdebug
TARGET_BUILD_TYPE=release

Android provides a set of base services, but if you want to add some specific ones of your own, you can (for instance, on certain platforms you may need to add or trim some services).

For example, as we all know, the most widely used services on Android are probably the multimedia ones, which correspond to a mediaserver process, and things like
AudioFlinger/MediaPlayerService/CameraService/AudioPolicyService
all run inside that process.

So can we add something similar of our own? The answer is of course yes.

A few simple pieces will get you there:

The process that launches the service:

/path/to/aosp/frameworks/base/customized_service/server

The concrete service implementation:

/path/to/aosp/frameworks/base/customized_service/libcustomizedservice

Starting it at system boot:

/path/to/aosp/system/core/rootdir/init.rc

Add the following to the init.rc file:

service customized /system/bin/customizedserver
    class main
    user media
    group audio camera

I used a PandaBoard:
build the whole ROM, then fastboot flashall.

I/        ( 1080): ServiceManager: 0x40ce4f00
D/        ( 1080): CustomizedService instantiate
D/        ( 1080): CustomizedService created
E/ServiceManager(   94): add_service('test.customized',0x44) uid=xxxx - PERMISSION DENIED
D/        ( 1080): CustomizedService r = -1
D/        ( 1080): CustomizedService destroyed

You will hit a similar insufficient-permission problem if you try to start this service process from the shell; chasing the log quickly reveals the cause:

int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid, int allow_isolated)
{
    struct svcinfo *si;

    if (!ptr || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(uid, s)) {
        ALOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
             str8(s), ptr, uid);
        return -1;
    }

    si = find_svc(s, len);
 
    ......

    binder_acquire(bs, ptr);
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
int svc_can_register(unsigned uid, uint16_t *name)
{
    unsigned n;
    
    if ((uid == 0) || (uid == AID_SYSTEM))
        return 1;

    for (n = 0; n < sizeof(allowed) / sizeof(allowed[0]); n++)
        if ((uid == allowed[n].uid) && str16eq(name, allowed[n].name))
            return 1;

    return 0;
}

As you can see, only uid 0 (ROOT) or AID_SYSTEM, or a service listed in the allowed table, may register a service. The simplest fix is to add the service we want to the table:

{ AID_MEDIA, "test.customized" },

The allowed table for services lives in

/path/to/aosp/frameworks/base/cmds/servicemanager/service_manager.c

Rebuild the ROM and flash it; if the customizedserver process is up after boot, the service has most likely been added successfully.

How does the app side call it?

Write an Activity that uses android.os.ServiceManager's getService to obtain the service, then just call it.
One thing to note is that android.os.ServiceManager is currently hidden (@hide) from apps; for now we can ignore that and get at it via reflection.
As for why it is hidden -- does Android not want us to define and use our own system-level services, or is there some other sanctioned way to do this? I do not know yet!

The current result: getService does return our service and the call succeeds, but my return values include both a String and ints, and the String did not come through. We all know that parameter order in a Parcel matters, so that may be the reason; I will look into the real cause later. For now the goal is simply to get the whole customized-service path running end to end.

P.S. I later found that the .so I rebuilt was probably not making it into the ROM each time, so the .so flashed to the device was not the latest one and therefore did not return the String added later (previously there were just two ints).
I also did another experiment: on the service side I accessed the current service again through ServiceManager's getService(...), which works just like accessing it through a pointer or reference in the same code. That is the power of binder! The change can be seen in the following commit:

commit 1875ad41d5f8e505cb4c26599e4626944005f26b
Date: Tue Jun 25 12:10:59 2013 +0800

Symptom: test invoking getService in same process with service
Issue ID: NA
Root Cause: NA
Solution: NA

The full code is at
https://github.com/guohai/and-customized-service.git

Here the Java side calls the native service; a native client could of course also call the native service. I will not implement that in detail for now and will write it up later if needed; a minimal sketch is below.
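
A minimal native-side sketch (for illustration only: the interface descriptor and the transaction code below are hypothetical placeholders and must match whatever the customized service actually implements):

// Minimal sketch of a native client for the customized service registered as "test.customized".
// "org.customized.ICustomizedService" and GET_VALUE are hypothetical and must match the
// service's own Bn/Bp implementation.
#include <binder/IServiceManager.h>
#include <binder/Parcel.h>
#include <utils/String16.h>

using namespace android;

int main() {
    sp<IBinder> binder =
            defaultServiceManager()->getService(String16("test.customized"));
    if (binder == NULL) {
        return -1;                      // not registered (see the PERMISSION DENIED log above)
    }

    Parcel data, reply;
    data.writeInterfaceToken(String16("org.customized.ICustomizedService")); // hypothetical token
    data.writeInt32(42);

    const uint32_t GET_VALUE = IBinder::FIRST_CALL_TRANSACTION;              // hypothetical code
    binder->transact(GET_VALUE, data, &reply);

    int32_t result = reply.readInt32();
    (void)result;
    return 0;
}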

No pictures, no proof:
add-customized-service-for-android
add-customized-service-for-android-1

Android Camera System Structure

Talking about "architecture" always makes me feel I am overreaching -- the word sounds too grand -- so I only dare to call this the structure.

If anything below is wrong, please let me know!

I touched on Camera-related things in an earlier post:
http://guoh.org/lifelog/2012/09/asop-camera-source-code-reading-video-snapshot/

for example

$ANDROID_SRC_HOME/frameworks/base/core/java/android/hardware/Camera.java
$ANDROID_SRC_HOME/frameworks/base/core/jni/android_hardware_Camera.cpp

// Note: these two files appear to have been moved in JB, which goes to show Android is an active project

$ANDROID_SRC_HOME/frameworks/base/include/camera/Camera.h
$ANDROID_SRC_HOME/frameworks/base/libs/camera/Camera.cpp

This post is a set of notes from reading http://source.android.com/devices/camera.html against the JB code. It may not be very detailed or very methodical; it only focuses on the key points and the things to watch out for. So if you happen to be reading this without much Android Camera background, I suggest first looking at http://developer.android.com/guide/topics/media/camera.html and the Reference docs.

Now let's get started.
Broadly speaking, Camera is divided into these parts:
APP — JNI — NATIVE — BINDER — CAMERA SERVICE — HAL — DRIVER

Below is the AOSP code corresponding to each part.

APP($ANDROID_SRC_HOME/frameworks/base/core/java/android/hardware/Camera.java)

JNI($ANDROID_SRC_HOME/frameworks/base/core/jni/android_hardware_Camera.cpp)

NATIVE($ANDROID_SRC_HOME/frameworks/av/camera/Camera.cpp)

BINDER($ANDROID_SRC_HOME/frameworks/av/camera/ICameraService.cpp
       $ANDROID_SRC_HOME/frameworks/av/camera/ICameraClient.cpp)

CAMERA SERVICE($ANDROID_SRC_HOME/frameworks/av/services/camera/libcameraservice/CameraService.cpp)

HAL($ANDROID_SRC_HOME/hardware/libhardware/include/hardware/camera.h
    $ANDROID_SRC_HOME/hardware/libhardware/include/hardware/camera_common.h
    $ANDROID_SRC_HOME/frameworks/av/services/camera/libcameraservice/CameraHardwareInterface.h)

For the Galaxy Nexus, the concrete HAL implementation is at

$ANDROID_SRC_HOME/hardware/ti/omap4xxx/camera/CameraHal.cpp
$ANDROID_SRC_HOME/hardware/ti/omap4xxx/camera/CameraHalCommon.cpp

with the headers at

$ANDROID_SRC_HOME/hardware/ti/omap4xxx/camera/inc/

and for the Galaxy Nexus,

DRIVER(https://developers.google.com/android/nexus/drivers)

Does that give you a rough picture of Camera?
If so, pull up the code and dig into whichever part interests you.

You can fetch all the code locally with repo and read it there, or browse it online.
android/hardware/Camera.java

public static Camera open(int cameraId) {
    return new Camera(cameraId);
}
Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    native_setup(new WeakReference<Camera>(this), cameraId);
}

The code above is the starting point of everything we want to study: opening the camera. Like all hardware, it has to be opened before it can be used.
One thing to pay attention to here is mEventHandler. What is it for? The documentation explains it quite clearly:

Callbacks from other methods are delivered to the event loop of the
thread which called open(). If this thread has no event loop, then
callbacks are delivered to the main application event loop. If there
is no main application event loop, callbacks are not delivered.

If you are writing anything more than a toy program, you should take note of that description.

With native_setup(new WeakReference<Camera>(this), cameraId); the code moves on to
android_hardware_Camera.cpp

// connect to camera service
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}

With sp<Camera> camera = Camera::connect(cameraId); the code reaches
av/include/camera/Camera.h, with the implementation of course in av/camera/Camera.cpp.

sp<Camera> Camera::connect(int cameraId)
{
    ALOGV("connect");
    sp<Camera> c = new Camera();
    const sp<ICameraService>& cs = getCameraService();
    if (cs != 0) {
        c->mCamera = cs->connect(c, cameraId);
    }
    if (c->mCamera != 0) {
        c->mCamera->asBinder()->linkToDeath(c);
        c->mStatus = NO_ERROR;
    } else {
        c.clear();
    }
    return c;
}

Here getCameraService() is a fairly important piece and deserves a careful look.

// establish binder interface to camera service
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            ALOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    ALOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}

Next comes the proxy class BpCameraService in av/camera/ICameraService.cpp:

// connect to camera service
virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    data.writeInt32(cameraId);
    remote()->transact(BnCameraService::CONNECT, data, &reply);
    return interface_cast<ICamera>(reply.readStrongBinder());
}

This ends up in BnCameraService's connect method, but that is a pure virtual function, and CameraService inherits from BnCameraService, so look at
av/services/camera/libcameraservice/CameraService.h; the actual implementation is of course in av/services/camera/libcameraservice/CameraService.cpp.

sp<ICamera> CameraService::connect(
        const sp<ICameraClient>& cameraClient, int cameraId) {
    int callingPid = getCallingPid();

    LOG1("CameraService::connect E (pid %d, id %d)", callingPid, cameraId);

    if (!mModule) {
        ALOGE("Camera HAL module not loaded");
        return NULL;
    }

    sp<Client> client;
    if (cameraId < 0 || cameraId >= mNumberOfCameras) {
        ALOGE("CameraService::connect X (pid %d) rejected (invalid cameraId %d).",
            callingPid, cameraId);
        return NULL;
    }

    char value[PROPERTY_VALUE_MAX];
    property_get("sys.secpolicy.camera.disabled", value, "0");
    if (strcmp(value, "1") == 0) {
        // Camera is disabled by DevicePolicyManager.
        ALOGI("Camera is disabled. connect X (pid %d) rejected", callingPid);
        return NULL;
    }

    Mutex::Autolock lock(mServiceLock);
    if (mClient[cameraId] != 0) {
        client = mClient[cameraId].promote();
        if (client != 0) {
            if (cameraClient->asBinder() == client->getCameraClient()->asBinder()) {
                LOG1("CameraService::connect X (pid %d) (the same client)",
                     callingPid);
                return client;
            } else {
                ALOGW("CameraService::connect X (pid %d) rejected (existing client).",
                      callingPid);
                return NULL;
            }
        }
        mClient[cameraId].clear();
    }

    if (mBusy[cameraId]) {
        ALOGW("CameraService::connect X (pid %d) rejected"
                " (camera %d is still busy).", callingPid, cameraId);
        return NULL;
    }

    struct camera_info info;
    if (mModule->get_camera_info(cameraId, &info) != OK) {
        ALOGE("Invalid camera id %d", cameraId);
        return NULL;
    }

    int deviceVersion;
    if (mModule->common.module_api_version == CAMERA_MODULE_API_VERSION_2_0) {
        deviceVersion = info.device_version;
    } else {
        deviceVersion = CAMERA_DEVICE_API_VERSION_1_0;
    }

    switch(deviceVersion) {
      case CAMERA_DEVICE_API_VERSION_1_0:
        client = new CameraClient(this, cameraClient, cameraId,
                info.facing, callingPid, getpid());
        break;
      case CAMERA_DEVICE_API_VERSION_2_0:
        client = new Camera2Client(this, cameraClient, cameraId,
                info.facing, callingPid, getpid());
        break;
      default:
        ALOGE("Unknown camera device HAL version: %d", deviceVersion);
        return NULL;
    }

    if (client->initialize(mModule) != OK) {
        return NULL;
    }

    cameraClient->asBinder()->linkToDeath(this);

    mClient[cameraId] = client;
    LOG1("CameraService::connect X (id %d, this pid is %d)", cameraId, getpid());
    return client;
}

Let's take HAL 1.0 as the example here; see
av/services/camera/libcameraservice/CameraClient.h

av/services/camera/libcameraservice/CameraClient.cpp

CameraClient::CameraClient(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        int cameraId, int cameraFacing, int clientPid, int servicePid):
        Client(cameraService, cameraClient,
                cameraId, cameraFacing, clientPid, servicePid)
{
    int callingPid = getCallingPid();
    LOG1("CameraClient::CameraClient E (pid %d, id %d)", callingPid, cameraId);

    mHardware = NULL;
    mMsgEnabled = 0;
    mSurface = 0;
    mPreviewWindow = 0;
    mDestructionStarted = false;

    // Callback is disabled by default
    mPreviewCallbackFlag = CAMERA_FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mPlayShutterSound = true;
    LOG1("CameraClient::CameraClient X (pid %d, id %d)", callingPid, cameraId);
}
status_t CameraClient::initialize(camera_module_t *module) {
    int callingPid = getCallingPid();
    LOG1("CameraClient::initialize E (pid %d, id %d)", callingPid, mCameraId);

    char camera_device_name[10];
    status_t res;
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);

    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(&module->common);
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        mHardware.clear();
        return NO_INIT;
    }

    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)mCameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);

    LOG1("CameraClient::initialize X (pid %d, id %d)", callingPid, mCameraId);
    return OK;
}

Of course the following are also involved:
libhardware/include/hardware/camera.h

libhardware/include/hardware/camera_common.h

av/services/camera/libcameraservice/CameraHardwareInterface.h

status_t initialize(hw_module_t *module)
{
    ALOGI("Opening camera %s", mName.string());
    int rc = module->methods->open(module, mName.string(),
                                   (hw_device_t **)&mDevice);
    if (rc != OK) {
        ALOGE("Could not open camera %s: %d", mName.string(), rc);
        return rc;
    }
    initHalPreviewWindow();
    return rc;
}

This is only an interface; the concrete implementation is up to the vendor.

The above traced a call from the top layers down to the bottom; next let's look at a call that goes from the bottom back up.

From the code we can see that three callbacks are bound towards the driver:

mHardware->setCallbacks(notifyCallback,
        dataCallback,
        dataCallbackTimestamp,
        (void *)mCameraId);

so these three callbacks can get called back into.
Take dataCallback as the example; one of its cases is

// picture callback - compressed picture ready
void CameraClient::handleCompressedPicture(const sp<IMemory>& mem) {
    disableMsgType(CAMERA_MSG_COMPRESSED_IMAGE);

    sp<ICameraClient> c = mCameraClient;
    mLock.unlock();
    if (c != 0) {
        c->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE, mem, NULL);
    }
}

The call goes back to the app through ICameraClient; the ICameraClient here is a BpCameraClient (why it is a Bp is briefly explained later).

// generic data callback from camera service to app with image data
void BpCameraClient::dataCallback(int32_t msgType, const sp<IMemory>& imageData,
                  camera_frame_metadata_t *metadata)
{
    ALOGV("dataCallback");
    Parcel data, reply;
    data.writeInterfaceToken(ICameraClient::getInterfaceDescriptor());
    data.writeInt32(msgType);
    data.writeStrongBinder(imageData->asBinder());
    if (metadata) {
        data.writeInt32(metadata->number_of_faces);
        data.write(metadata->faces, sizeof(camera_face_t) * metadata->number_of_faces);
    }
    remote()->transact(DATA_CALLBACK, data, &reply, IBinder::FLAG_ONEWAY);
}

Then:

status_t BnCameraClient::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case NOTIFY_CALLBACK: {
            ...
        } break;
        case DATA_CALLBACK: {
            ALOGV("DATA_CALLBACK");
            CHECK_INTERFACE(ICameraClient, data, reply);
            int32_t msgType = data.readInt32();
            sp<IMemory> imageData = interface_cast<IMemory>(data.readStrongBinder());
            camera_frame_metadata_t *metadata = NULL;
            if (data.dataAvail() > 0) {
                metadata = new camera_frame_metadata_t;
                metadata->number_of_faces = data.readInt32();
                metadata->faces = (camera_face_t *) data.readInplace(
                        sizeof(camera_face_t) * metadata->number_of_faces);
            }
            dataCallback(msgType, imageData, metadata); // being a pure virtual function, this calls into the implementation in Camera
            if (metadata) delete metadata;
            return NO_ERROR;
        } break;
        case DATA_CALLBACK_TIMESTAMP: {
            ...
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}

And then:

// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}

When the app calls into the service, the ICameraClient crossing the boundary is the BnCameraClient (the client side holds the real object and the service side ends up with the proxy, as you can tell from where it is new'ed);
when the service calls back into the app it goes through the BpCameraClient (the proxy held on the service side, with the real object on the client side).
This Bn-to-Bp conversion is done through

sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());

which completes it.

Here are some points worth noting:

ICameraClient inherits from IInterface.
BnCameraClient inherits from BnInterface<ICameraClient>, and BnInterface in turn inherits from ICameraClient and BBinder.

Camera inherits from BnCameraClient.

ICameraService inherits from IInterface.
BnCameraService inherits from BnInterface<ICameraService>, and BnInterface in turn inherits from ICameraService and BBinder.

ICamera inherits from IInterface.
BnCamera inherits from BnInterface<ICamera>, and BnInterface in turn inherits from ICamera and BBinder.

CameraService inherits from BinderService<CameraService>, BnCameraService and IBinder::DeathRecipient,
where BinderService is mainly the unified helper used to instantiate and publish a service.

CameraService has a nested class Client that inherits from BnCamera;
most of the real work is done in CameraClient (Camera2Client is the path Google provides for driving the new HAL; some vendors, e.g. Qualcomm, implement their HAL this way).
This is the real client: it calls the HW directly, or is called by it.

When sendCommand pushes parameters towards the driver, it first checks whether the command is something the CameraService layer itself can handle (some things never need to reach the driver); see the sketch after this list:

cmd == CAMERA_CMD_SET_DISPLAY_ORIENTATION
cmd == CAMERA_CMD_ENABLE_SHUTTER_SOUND
cmd == CAMERA_CMD_PLAY_RECORDING_SOUND
cmd == CAMERA_CMD_SET_VIDEO_BUFFER_COUNT
cmd == CAMERA_CMD_PING
All of the above are handled by CameraService itself;
everything else is handed down to the driver (mHardware->sendCommand(cmd, arg1, arg2);).
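
In simplified form the filtering looks roughly like this (illustrative; the real CameraClient::sendCommand also validates its arguments, takes the lock, and covers CAMERA_CMD_SET_VIDEO_BUFFER_COUNT as well):

// Simplified sketch of the command filtering described above.
status_t CameraClient::sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) {
    if (cmd == CAMERA_CMD_SET_DISPLAY_ORIENTATION) {
        // handled entirely inside CameraService; never reaches the HAL
        mOrientation = getOrientation(arg1, mCameraFacing == CAMERA_FACING_FRONT);
        return OK;
    } else if (cmd == CAMERA_CMD_ENABLE_SHUTTER_SOUND) {
        mPlayShutterSound = (arg1 != 0);
        return OK;
    } else if (cmd == CAMERA_CMD_PLAY_RECORDING_SOUND || cmd == CAMERA_CMD_PING) {
        // also answered at this layer (play the recording sound / report liveness)
        return OK;
    }

    // everything else goes down to the HAL / driver
    return mHardware->sendCommand(cmd, arg1, arg2);
}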

Finally, some open questions:

// these are initialized in the constructor.
sp<CameraHardwareInterface> mHardware; // cleared after disconnect()
mHardware is cleared in disconnect.

// Make sure disconnect() is done once and once only, whether it is called
// from the user directly, or called by the destructor.
if (mHardware == 0) return;
This prevents disconnect from being carried out a second time by the destructor after it has already been called once.