1. Overview
The Renderer's job is to output audio and video data through the corresponding devices according to a given synchronization policy; it is an indispensable module of any player. NuPlayer's renderer class is Renderer, defined in NuPlayerRenderer.h. Its main responsibilities are:
buffering decoded data;
initializing the audio device and playing audio data;
playing video data;
audio/video synchronization.
First, let's look at where the Renderer sits in the NuPlayer framework:
The Renderer uses the timestamp of each incoming frame to decide whether that frame should be rendered, and performs audio/video synchronization. The code that actually renders to the hardware, however, lives in MediaCodec and ACodec.
2. Buffering Data
Before analyzing the buffering logic, let's look at the data structures NuPlayerRenderer uses to cache data:
struct QueueEntry {
sp<MediaCodecBuffer> mBuffer;  // if non-NULL, holds the actual decoded data
sp<AMessage> mMeta;
sp<AMessage> mNotifyConsumed;  // if NULL, this QueueEntry is the last one (EOS)
size_t mOffset;
status_t mFinalResult;
int32_t mBufferOrdinal;  // ordinal of this entry within its queue
};
-------------------
List<QueueEntry> mAudioQueue;  // queue caching decoded audio data; each element is a QueueEntry
List<QueueEntry> mVideoQueue;  // queue caching decoded video data; each element is a QueueEntry
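Besides normal decoded buffers, end of stream is represented in these same queues: when the decoder signals EOS, an entry is appended whose mBuffer and mNotifyConsumed stay NULL and whose mFinalResult carries the final status. A rough sketch of how such an entry is built, abbreviated from NuPlayer::Renderer::onQueueEOS() (details vary between Android versions):
// Sketch (abbreviated): how an EOS marker ends up in a queue.
QueueEntry entry;
entry.mOffset = 0;
entry.mFinalResult = finalResult;  // e.g. ERROR_END_OF_STREAM; mBuffer and
                                   // mNotifyConsumed are left NULL, marking EOS

if (audio) {
    Mutex::Autolock autoLock(mLock);
    mAudioQueue.push_back(entry);
    postDrainAudioQueue_l();   // make sure the queue is drained down to the EOS marker
} else {
    mVideoQueue.push_back(entry);
    postDrainVideoQueue();
}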
In the NuPlayer architecture, the Renderer sits right after NuPlayerDecoder. The interaction between the two starts in NuPlayer::Decoder::handleAnOutputBuffer(), which calls:
mRenderer->queueBuffer(mIsAudio, buffer, reply);
Note that reply is an AMessage created by NuPlayerDecoder and handed to the Renderer; the Renderer uses it to send information back to NuPlayerDecoder. Also note that NuPlayer::Decoder::handleAnOutputBuffer() does not post this reply message itself:
sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this);
reply->setSize("buffer-ix", index);
reply->setInt32("generation", mBufferGeneration);
(1) Creating the renderer
The NuPlayerRenderer is created in NuPlayer::onStart(), which contains the following code:
- NuPlayer.cpp
sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
++mRendererGeneration;
notify->setInt32("generation", mRendererGeneration);
// create the renderer
mRenderer = AVNuFactory::get()->createRenderer(mAudioSink, mMediaClock, notify, flags);
mRendererLooper = new ALooper;
mRendererLooper->setName("NuPlayerRenderer");
mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
mRendererLooper->registerHandler(mRenderer);
status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);
......
float rate = getFrameRate();
if (rate > 0) {
rate = (rate > mMaxOutputFrameRate) ? mMaxOutputFrameRate : rate;
mRenderer->setVideoFrameRate(rate);
}
if (mVideoDecoder != NULL) {
mVideoDecoder->setRenderer(mRenderer);  // hand the renderer to the video decoder; it will be used later
}
if (mAudioDecoder != NULL) {
mAudioDecoder->setRenderer(mRenderer);  // hand the renderer to the audio decoder; it will be used later
}
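A side note: AVNuFactory is a vendor extension hook; in the stock AOSP sources the same line is simply a direct construction of the renderer, roughly:
mRenderer = new Renderer(mAudioSink, mMediaClock, notify, flags);  // stock AOSP equivalent, for reference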
Here the renderer is created and configured, and then passed to both the video and the audio decoder. The hand-over looks like this:
- NuPlayerDecoderBase.cpp
void NuPlayer::DecoderBase::setRenderer(const sp<Renderer> &renderer) {
sp<AMessage> msg = new AMessage(kWhatSetRenderer, this);
msg->setObject("renderer", renderer);
msg->post();
}
-----------------------
// dispatched through the AHandler message mechanism, in onMessageReceived()
case kWhatSetRenderer:
{
sp<RefBase> obj;
CHECK(msg->findObject("renderer", &obj));
onSetRenderer(static_cast<Renderer *>(obj.get()));
break;
}
DecoderBase is the base class of NuPlayer::Decoder, so onSetRenderer() is handled in NuPlayerDecoder:
- NuPlayerDecoder.cpp
void NuPlayer::Decoder::onSetRenderer(const sp<Renderer> &renderer) {
mRenderer = renderer;
}
This simply stores the renderer that was passed in as mRenderer. The renderer is then used when handling decoded output buffers, in NuPlayer::Decoder::handleAnOutputBuffer():
mRenderer->queueBuffer(mIsAudio, buffer, reply);
(2) Handling data
The NuPlayerRenderer is created before the decoder module is initialized. Once the decoders have been instantiated and started and decoded data is available, a chain of calls ends up in NuPlayer::Renderer::onQueueBuffer(), which stores the decoded data in the cache queues. The flow is as follows:
Let's start the analysis from queueBuffer():
void NuPlayer::Renderer::queueBuffer(
bool audio,
const sp<MediaCodecBuffer> &buffer,
const sp<AMessage> &notifyConsumed) {
sp<AMessage> msg = new AMessage(kWhatQueueBuffer, this);
msg->setInt32("queueGeneration", getQueueGeneration(audio));
msg->setInt32("audio", static_cast<int32_t>(audio));
msg->setBuffer("buffer", buffer);
msg->setMessage("notifyConsumed", notifyConsumed);
msg->post();
}
This posts a kWhatQueueBuffer message. Also note what became of the reply parameter seen earlier: inside NuPlayerRenderer it is now called notifyConsumed.
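One detail worth noting: the queueGeneration set above is what lets the Renderer discard buffers that were queued before a flush. The check runs at the top of onQueueBuffer()/onQueueEOS() and looks roughly like this (abbreviated from AOSP; the exact shape may differ between versions):
bool NuPlayer::Renderer::dropBufferIfStale(
        bool audio, const sp<AMessage> &msg) {
    int32_t queueGeneration;
    CHECK(msg->findInt32("queueGeneration", &queueGeneration));

    if (queueGeneration == getQueueGeneration(audio)) {
        return false;  // generation matches: keep the buffer
    }

    sp<AMessage> notifyConsumed;
    if (msg->findMessage("notifyConsumed", &notifyConsumed)) {
        notifyConsumed->post();  // hand the stale buffer straight back to the decoder
    }
    return true;  // drop it
}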
The message then leads to NuPlayer::Renderer::onQueueBuffer():
void NuPlayer::Renderer::onQueueBuffer(const sp<AMessage> &msg) {
int32_t audio;
CHECK(msg->findInt32("audio", &audio));
...
if (audio) {
mHasAudio = true;  // the data to cache is decoded audio
} else {
mHasVideo = true;  // the data to cache is decoded video
}
if (mHasVideo) {
if (mVideoScheduler == NULL) {
mVideoScheduler = new VideoFrameScheduler();  // used to schedule when video frames are rendered
mVideoScheduler->init();
}
}
sp<RefBase> obj;
CHECK(msg->findObject("buffer", &obj));
// retrieve the decoded data that needs to be cached
sp<MediaCodecBuffer> buffer = static_cast<MediaCodecBuffer *>(obj.get());
...
QueueEntry entry;  // create a queue entry and hand the decoded buffer to it
entry.mBuffer = buffer;
entry.mNotifyConsumed = notifyConsumed;
entry.mOffset = 0;
entry.mFinalResult = OK;
entry.mBufferOrdinal = ++mTotalBuffersQueued;  // ordinal of this entry within its queue
if (audio) { // audio
Mutex::Autolock autoLock(mLock);
mAudioQueue.push_back(entry);  // append to the audio queue
postDrainAudioQueue_l();  // schedule draining (playback) of the audio queue
} else { // video
mVideoQueue.push_back(entry);  // append to the video queue
postDrainVideoQueue();  // schedule draining (playback) of the video queue
}
...
sp<MediaCodecBuffer> firstAudioBuffer = (*mAudioQueue.begin()).mBuffer;
sp<MediaCodecBuffer> firstVideoBuffer = (*mVideoQueue.begin()).mBuffer;
...
int64_t firstAudioTimeUs;
int64_t firstVideoTimeUs;
CHECK(firstAudioBuffer->meta()->findInt64("timeUs", &firstAudioTimeUs));
CHECK(firstVideoBuffer->meta()->findInt64("timeUs", &firstVideoTimeUs));
// difference between the first video frame and the first audio frame in the queues
int64_t diff = firstVideoTimeUs - firstAudioTimeUs;
ALOGV("queueDiff = %.2f secs", diff / 1E6);
if (diff > 100000ll) {
// the first audio frame is more than 0.1 s older than the first video frame;
// it can never be played in sync, so drop it
(*mAudioQueue.begin()).mNotifyConsumed->post();
mAudioQueue.erase(mAudioQueue.begin());
VTRACE_INT("drop-audio", 1);
VTRACE_ASYNC_END("render-audio", (int)firstAudioTimeUs);
return;
}
syncQueuesDone_l();  // queues are in sync now; kick off draining of audio and video
}
NuPlayerRenderer thus maintains two lists, one for audio buffers and one for video buffers, and appends each decoded buffer to the corresponding queue.
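For completeness, syncQueuesDone_l(), called at the end of onQueueBuffer(), simply clears the sync flag and kicks off draining of whatever is already queued; a shortened version (details may differ between versions):
void NuPlayer::Renderer::syncQueuesDone_l() {
    if (!mSyncQueues) {
        return;  // the queues were not being held back for initial syncing
    }
    mSyncQueues = false;

    if (!mAudioQueue.empty()) {
        postDrainAudioQueue_l();
    }
    if (!mVideoQueue.empty()) {
        postDrainVideoQueue();
    }
}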
3. Audio Device Initialization
On Android, audio playback ultimately always goes through an AudioSink object. NuPlayer's AudioSink is created as early as when the player itself is created, and is passed into the NuPlayer framework.
Later, during decoder creation (when NuPlayer::instantiateDecoder() creates the audio decoder), a chain of calls initializes and starts the AudioSink. The call chain is:
==>NuPlayer::instantiateDecoder
==> NuPlayer::determineAudioModeChange
==> NuPlayer::tryOpenAudioSinkForOffload
==> NuPlayer::Renderer::openAudioSink
==> NuPlayer::Renderer::onOpenAudioSink
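The step from openAudioSink() to onOpenAudioSink() is just a synchronous hop onto the renderer's looper thread. A condensed sketch of the wrapper (abbreviated; parameter handling differs slightly between versions):
status_t NuPlayer::Renderer::openAudioSink(
        const sp<AMessage> &format,
        bool offloadOnly,
        bool hasVideo,
        uint32_t flags,
        bool *isOffloaded,
        bool isStreaming) {
    sp<AMessage> msg = new AMessage(kWhatOpenAudioSink, this);
    msg->setMessage("format", format);
    msg->setInt32("offload-only", offloadOnly);
    msg->setInt32("has-video", hasVideo);
    msg->setInt32("flags", flags);
    msg->setInt32("isStreaming", isStreaming);

    sp<AMessage> response;
    status_t err = msg->postAndAwaitResponse(&response);  // blocks until onOpenAudioSink() has run
    // (error and offload-result handling omitted)
    return err;
}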
- NuPlayerRenderer.cpp
status_t NuPlayer::Renderer::onOpenAudioSink(
const sp<AMessage> &format,
bool offloadOnly,
bool hasVideo,
uint32_t flags,
bool isStreaming) {
...
CHECK(format->findInt32("channel-count", &numChannels));//获取声道数
...
CHECK(format->findInt32("sample-rate", &sampleRate));//获取采样率
...
if (!offloadOnly && !offloadingAudio()) {// 非offload模式打开AudioSink
...
audioSinkChanged = true;
mAudioSink->close();
mCurrentOffloadInfo = AUDIO_INFO_INITIALIZER;
...
status_t err = mAudioSink->open(  // open the AudioSink (this creates an AudioTrack)
sampleRate,  // sample rate
numChannels,  // channel count
(audio_channel_mask_t)channelMask,
AVNuUtils::get()->getPCMFormat(format),  // audio format
0 /* bufferCount - unused */,
mUseAudioCallback ? &NuPlayer::Renderer::AudioSinkCallback : NULL,
mUseAudioCallback ? this : NULL,
(audio_output_flags_t)pcmFlags,
NULL,
doNotReconnect,
frameCount);
...
mCurrentPcmInfo = info;
if (!mPaused) { // for preview mode, don't start if paused
mAudioSink->start();  // start the AudioSink
}
}
if (audioSinkChanged) {
onAudioSinkChanged();
}
mAudioTornDown = false;
return OK;
}
Once this function has started the AudioSink, all that is left is to write data into it and the audio will be output.
4. Audio Data Output
Audio output is triggered by postDrainAudioQueue_l(). As we saw in the buffering section, once NuPlayer::Renderer::onQueueBuffer() has cached data in the audio queue, postDrainAudioQueue_l() runs, and the data eventually gets written into the AudioSink for playback. postDrainAudioQueue_l() itself does only some light processing and then, through the AHandler message mechanism, hands control to the kWhatDrainAudioQueue case of NuPlayer::Renderer::onMessageReceived():
case kWhatDrainAudioQueue:
{
...
if (onDrainAudioQueue()) {  // the function that actually writes data into the AudioSink
uint32_t numFramesPlayed;
if (mAudioSink->getPosition(&numFramesPlayed) != OK) {
ALOGE("Error in time stamp query, return from here.\
Fillbuffer is called as part of session recreation");
break;
}
...
// how long the data already written to the AudioSink can keep playing
int64_t delayUs =
mAudioSink->msecsPerFrame()
* numFramesPendingPlayout * 1000ll;
if (mPlaybackRate > 1.0f) {
delayUs /= mPlaybackRate;  // playable duration at the current playback rate
}
// wake up again after half of that time to refill the sink
delayUs /= 2;
...
postDrainAudioQueue_l(delayUs);  // re-arm the drain loop
}
break;
}
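The light processing postDrainAudioQueue_l() does before this case runs is essentially guarding against double scheduling and then posting the message; a condensed sketch (member names follow AOSP and may differ slightly between versions):
void NuPlayer::Renderer::postDrainAudioQueue_l(int64_t delayUs) {
    if (mDrainAudioQueuePending || mSyncQueues || mUseAudioCallback) {
        return;  // a drain is already scheduled, the queues are still being
                 // synced, or audio is pulled via callback instead
    }
    if (mAudioQueue.empty()) {
        return;  // nothing to drain
    }

    mDrainAudioQueuePending = true;
    sp<AMessage> msg = new AMessage(kWhatDrainAudioQueue, this);
    msg->setInt32("drainGeneration", mAudioDrainGeneration);
    msg->post(delayUs);  // ends up in the kWhatDrainAudioQueue case shown above
}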
The actual write happens in onDrainAudioQueue(); let's look at that function:
bool NuPlayer::Renderer::onDrainAudioQueue() {
...
uint32_t prevFramesWritten = mNumFramesWritten;
while (!mAudioQueue.empty()) {  // keep looping as long as the audio queue has data
QueueEntry *entry = &*mAudioQueue.begin();  // take the entry at the head of the queue
...
// write into the AudioSink; this ends up calling AudioTrack::write()
ssize_t written = mAudioSink->write(entry->mBuffer->data() + entry->mOffset,
copy, false /* blocking */);
...
entry->mNotifyConsumed->post();  // notify the decoder that the data has been consumed
mAudioQueue.erase(mAudioQueue.begin());  // remove the played entry from the queue
entry = NULL;
}
...
}
// Decide whether another write needs to be scheduled.
// If this returns true, postDrainAudioQueue_l() runs again.
bool reschedule = !mAudioQueue.empty()
&& (!mPaused
|| prevFramesWritten != mNumFramesWritten); // permit pause to fill buffers
return reschedule;
}
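The bookkeeping elided by the "..." inside the loop is worth spelling out, because a non-blocking AudioSink write may consume only part of a buffer; roughly (abbreviated from AOSP, details vary between versions):
if (written > 0) {
    entry->mOffset += written;                       // remember how far we got in this buffer
    mNumFramesWritten += written / mAudioSink->frameSize();
}
if (entry->mOffset == entry->mBuffer->size()) {      // only a fully written buffer is released
    entry->mNotifyConsumed->post();
    mAudioQueue.erase(mAudioQueue.begin());
    entry = NULL;
}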
mAudioSink itself is created in MediaPlayerService::Client::setDataSource_pre():
- MediaPlayerService.cpp
if (!p->hardwareOutput()) {
mAudioOutput = new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid(),
mPid, mAudioAttributes, mAudioDeviceUpdatedListener);
static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
}
where:
class AudioOutput : public MediaPlayerBase::AudioSink
The AudioTrack is created in MediaPlayerService::AudioOutput::open(); most of the subsequent operations revolve around this AudioTrack.
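AudioOutput is mostly a thin wrapper around that AudioTrack: start(), write(), pause() and so on are forwarded to it. A simplified sketch of the write path (not the literal AOSP code, which also handles callback mode and track switching):
ssize_t MediaPlayerService::AudioOutput::write(
        const void *buffer, size_t size, bool blocking) {
    Mutex::Autolock lock(mLock);
    if (mTrack == NULL) {
        return NO_INIT;  // open() has not created the AudioTrack yet
    }
    return mTrack->write(buffer, size, blocking);  // hand the PCM data to AudioFlinger
}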
5. Video Data Playback
Video output starts at almost the same moment as audio output, that is, once the player has been created and started. The only difference is that audio goes through postDrainAudioQueue_l() while video goes through postDrainVideoQueue().
Let's look at postDrainVideoQueue() first:
void NuPlayer::Renderer::postDrainVideoQueue() {
...
QueueEntry &entry = *mVideoQueue.begin();  // peek at the entry at the head of the queue
sp<AMessage> msg = new AMessage(kWhatDrainVideoQueue, this);
msg->setInt32("drainGeneration", getDrainGeneration(false /* audio */));
...
// A/V sync logic omitted here; it will be covered separately later
// post 2 display refreshes before rendering is due
msg->post(delayUs > twoVsyncsUs ? delayUs - twoVsyncsUs : 0);
mDrainVideoQueuePending = true;
}
kWhatDrainVideoQueue then leads to onDrainVideoQueue():
void NuPlayer::Renderer::onDrainVideoQueue() {
if (mVideoQueue.empty()) {
return;
}
QueueEntry *entry = &*mVideoQueue.begin();  // take the first element from the video queue
if (entry->mBuffer == NULL) {
// EOS
notifyEOS(false /* audio */, entry->mFinalResult);
mVideoQueue.erase(mVideoQueue.begin());
entry = NULL;
setVideoLateByUs(0);
return;
}
// The code above handles EOS.
int64_t nowUs = -1;
int64_t realTimeUs;
if (mFlags & FLAG_REAL_TIME) {
CHECK(entry->mBuffer->meta()->findInt64("timeUs", &realTimeUs));
} else {
int64_t mediaTimeUs;
CHECK(entry->mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
// media timestamp of this frame (time within the track)
nowUs = ALooper::GetNowUs();
realTimeUs = getRealTimeUs(mediaTimeUs, nowUs);  // the real (system clock) time at which this frame should be displayed
}
bool tooLate = false;
if (!mPaused) {
if (nowUs == -1) {
nowUs = ALooper::GetNowUs();
}
setVideoLateByUs(nowUs - realTimeUs);
tooLate = (mVideoLateByUs > 40000);  // if the frame is more than 40000 us late, it will not be rendered
if (tooLate) {
ALOGV("video late by %lld us (%.2f secs)",
(long long)mVideoLateByUs, mVideoLateByUs / 1E6);
} else {
int64_t mediaUs = 0;
mMediaClock->getMediaTime(realTimeUs, &mediaUs);
ALOGV("rendering video at media time %.2f secs",
(mFlags & FLAG_REAL_TIME ? realTimeUs :
mediaUs) / 1E6);
}
} else {
setVideoLateByUs(0);
if (!mVideoSampleReceived && !mHasAudio) {
// This will ensure that the first frame after a flush won't be used as anchor
// when renderer is in paused state, because resume can happen any time after seek.
Mutex::Autolock autoLock(mLock);
clearAnchorTime_l();
}
}
entry->mNotifyConsumed->setInt64("timestampNs", realTimeUs * 1000ll);
// Note: entry->mNotifyConsumed is the reply that came from NuPlayerDecoder;
// the realTimeUs computed above is passed back through it here.
entry->mNotifyConsumed->setInt32("render", !tooLate);
// If the frame is more than 40000 us late, it is not rendered: "render" is set to 0.
entry->mNotifyConsumed->post();
// Posting this sends it back to NuPlayerDecoder: this is the
// sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this) created there.
mVideoQueue.erase(mVideoQueue.begin());
entry = NULL;
mVideoSampleReceived = true;
if (!mPaused) {
if (!mVideoRenderingStarted) {
mVideoRenderingStarted = true;
notifyVideoRenderingStart();
}
Mutex::Autolock autoLock(mLock);
notifyIfMediaRenderingStarted_l();
}
// This block notifies NuPlayer that rendering has started.
}
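The notifications at the end are thin wrappers around the mNotify message that NuPlayer passed in when it created the Renderer; roughly (abbreviated, member names may differ slightly between versions):
void NuPlayer::Renderer::notifyVideoRenderingStart() {
    sp<AMessage> notify = mNotify->dup();
    notify->setInt32("what", kWhatVideoRenderingStart);
    notify->post();  // tells NuPlayer the first video frame has been sent out
}

void NuPlayer::Renderer::notifyIfMediaRenderingStarted_l() {
    if (mVideoRenderingStartGeneration == mVideoDrainGeneration
            && mAudioRenderingStartGeneration == mAudioDrainGeneration) {
        mVideoRenderingStartGeneration = -1;
        mAudioRenderingStartGeneration = -1;

        sp<AMessage> notify = mNotify->dup();
        notify->setInt32("what", kWhatMediaRenderingStart);
        notify->post();  // both audio and video rendering have started
    }
}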
That completes the Renderer's part of the flow. Notice that it only performs audio/video synchronization and decides whether a video frame should be dropped; it does not execute the actual rendering. All it does is mark whether each frame should be rendered:
// if the frame is more than 40000 us late it is not rendered: "render" is set to 0
entry->mNotifyConsumed->setInt32("render", !tooLate);
The real rendering step is executed by the hardware, and it is driven through MediaCodec and ACodec.
So let's continue following the rendering flow. Back in NuPlayer::Decoder::handleAnOutputBuffer(), this reply posts a kWhatRenderBuffer message:
NuPlayer::Decoder::onMessageReceived()
case kWhatRenderBuffer:
{
if (!isStaleReply(msg)) {
onRenderBuffer(msg);
}
break;
}
----------------------------------
void NuPlayer::Decoder::onRenderBuffer(const sp<AMessage> &msg) {
status_t err;
int32_t render;
size_t bufferIx;
int32_t eos;
CHECK(msg->findSize("buffer-ix", &bufferIx));//找到buffer-ix
if (!mIsAudio) {
int64_t timeUs;
sp<ABuffer> buffer = mOutputBuffers[bufferIx];
buffer->meta()->findInt64("timeUs", &timeUs);
if (mCCDecoder != NULL && mCCDecoder->isSelected()) {
mCCDecoder->display(timeUs);//显示字幕
}
}
if (msg->findInt32("render", &render) && render) {//根据NuPlayerRenderer传过来的标记,来判断是否进行渲染
int64_t timestampNs;
CHECK(msg->findInt64("timestampNs", ×tampNs));
err = mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs);
//发送给MediaCodec渲染并且release
} else {
mNumOutputFramesDropped += !mIsAudio;
err = mCodec->releaseOutputBuffer(bufferIx);
//不渲染,直接release
}
if (err != OK) {
ALOGE("failed to release output buffer for %s (err=%d)",
mComponentName.c_str(), err);
handleError(err);
}
if (msg->findInt32("eos", &eos) && eos
&& isDiscontinuityPending()) {
finishHandleDiscontinuity(true /* flushOnTimeChange */);
}
}
Both functions are implemented in MediaCodec.cpp:
status_t MediaCodec::renderOutputBufferAndRelease(size_t index, int64_t timestampNs) {
sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);
msg->setSize("index", index);
msg->setInt32("render", true);
msg->setInt64("timestampNs", timestampNs);
sp<AMessage> response;
return PostAndAwaitResponse(msg, &response);
}
-----------------
status_t MediaCodec::releaseOutputBuffer(size_t index) {
sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);
msg->setSize("index", index);
sp<AMessage> response;
return PostAndAwaitResponse(msg, &response);
}
The only difference between them is that the render-and-release variant sets the "render" flag (and the render timestamp) for the given buffer index. Both post kWhatReleaseOutputBuffer, which is handled here:
void MediaCodec::onMessageReceived(const sp<AMessage> &msg) {
case kWhatReleaseOutputBuffer:
{
sp<AReplyToken> replyID;
CHECK(msg->senderAwaitsResponse(&replyID));
if (!isExecuting()) {
PostReplyWithError(replyID, INVALID_OPERATION);
break;
} else if (mFlags & kFlagStickyError) {
PostReplyWithError(replyID, getStickyError());
break;
}
status_t err = onReleaseOutputBuffer(msg);
PostReplyWithError(replyID, err);
break;
}
From here the focus shifts to MediaCodec::onReleaseOutputBuffer():
status_t MediaCodec::onReleaseOutputBuffer(const sp<AMessage> &msg) {
size_t index;
CHECK(msg->findSize("index", &index));
int32_t render;
if (!msg->findInt32("render", &render)) {
render = 0;
}
if (!isExecuting()) {
return -EINVAL;
}
if (index >= mPortBuffers[kPortIndexOutput].size()) {
return -ERANGE;
}
BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
if (info->mNotify == NULL || !info->mOwnedByClient) {
return -EACCES;
}
// synchronization boundary for getBufferAndFormat
{
Mutex::Autolock al(mBufferLock);
info->mOwnedByClient = false;
}
if (render && info->mData != NULL && info->mData->size() != 0) {  // only when rendering was requested and there is data
info->mNotify->setInt32("render", true);
int64_t mediaTimeUs = -1;
info->mData->meta()->findInt64("timeUs", &mediaTimeUs);
int64_t renderTimeNs = 0;
if (!msg->findInt64("timestampNs", &renderTimeNs)) {
// use media timestamp if client did not request a specific render timestamp
ALOGV("using buffer PTS of %lld", (long long)mediaTimeUs);
renderTimeNs = mediaTimeUs * 1000;
}
info->mNotify->setInt64("timestampNs", renderTimeNs);  // the timestampNs provided by the Renderer
if (mSoftRenderer != NULL) {  // if a software renderer is in use, render the frame here
std::list<FrameRenderTracker::Info> doneFrames = mSoftRenderer->render(
info->mData->data(), info->mData->size(),
mediaTimeUs, renderTimeNs, NULL, info->mFormat);
// if we are running, notify rendered frames
if (!doneFrames.empty() && mState == STARTED && mOnFrameRenderedNotification != NULL) {
sp<AMessage> notify = mOnFrameRenderedNotification->dup();
sp<AMessage> data = new AMessage;
if (CreateFramesRenderedMessage(doneFrames, data)) {
notify->setMessage("data", data);
notify->post();
}
}
}
}
info->mNotify->post();
info->mNotify = NULL;
return OK;
}
info->mNotify here was taken from mPortBuffers[kPortIndexOutput]; it is the reply message that ACodec handed to MediaCodec.
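When this reply is posted, control returns to ACodec (ACodec::BaseState::onOutputBufferDrained()), which finally pushes the graphic buffer to the Surface, or just hands it back to the codec when "render" was not set. A heavily abbreviated, conceptual sketch of that last step (the exact code differs between Android versions):
int32_t render;
if (mCodec->mNativeWindow != NULL
        && msg->findInt32("render", &render) && render != 0
        && info->mData != NULL && info->mData->size() != 0) {
    int64_t timestampNs = 0;
    msg->findInt64("timestampNs", &timestampNs);  // the value set by NuPlayerRenderer

    // Attach the presentation timestamp, then queue the buffer to the Surface;
    // SurfaceFlinger/HWC takes care of the actual display.
    native_window_set_buffers_timestamp(mCodec->mNativeWindow.get(), timestampNs);
    status_t err = mCodec->mNativeWindow->queueBuffer(
            mCodec->mNativeWindow.get(), info->mGraphicBuffer.get(), -1);
    if (err == OK) {
        info->mStatus = BufferInfo::OWNED_BY_NATIVE_WINDOW;
    }
} else {
    // Not rendered: the buffer goes straight back to the codec for reuse.
    info->mStatus = BufferInfo::OWNED_BY_US;
}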