東川印記

一本東川,笑看爭龍斗虎;寰茫兦者,度橫佰昧人生。

Exoplayer学习05 从renderer.render到音视频解码

2021年5月21日星期五



棄我 去者,昨日之日不可留;

亂我心者,今日之日多煩憂。

長風萬里送秋雁,對此可以酣高樓。

蓬萊文章建安骨,中間小謝又清發。

俱懷逸興壯思飛,欲上青天攬明月。

抽刀斷水水更流,舉杯銷愁愁更愁。

人生在世不稱意,明朝散發弄扁舟。

一首诗歌流传千年总是有理由的,比方这首诗,就像我看exoplayer的源码一样。。。。

连续两篇都出现的 doSomeWork,核心就是调用 renderer.render() 方法,本质上是循环驱动各个渲染器(内部的解码器)解码并输出。。。。

private void doSomeWork() throws ExoPlaybackException, IOException {
  long operationStartTimeMs = clock.uptimeMillis(); // 返回自启动以来的毫秒数,不计算深度睡眠所花费的时间。
  updatePeriods();
  。。。
  updatePlaybackPositions();
  boolean renderersEnded = true;
  boolean renderersAllowPlayback = true;
  if (playingPeriodHolder.prepared) {
    。。。
    for (int i = 0; i < renderers.length; i++) {
      Renderer renderer = renderers[i];
      if (!isRendererEnabled(renderer)) {
        continue;
      }
      // TODO: Each renderer should return the maximum delay before which it wishes to be called
      // again. The minimum of these values should then be used as the delay before the next
      // invocation of this method.
      // 每个渲染器应该返回希望再次调用它的最大延迟,这些值中的最小值应用作下一次调用此方法之前的延迟。
      renderer.render(rendererPositionUs, rendererPositionElapsedRealtimeUs); // 看着就很核心
      renderersEnded = renderersEnded && renderer.isEnded();
      。。。
      if (!allowsPlayback) {
        renderer.maybeThrowStreamError();
      }
    }
  } else {
    playingPeriodHolder.mediaPeriod.maybeThrowPrepareError();
  }
  long playingPeriodDurationUs = playingPeriodHolder.info.durationUs;
  boolean finishedRendering =
      renderersEnded
          && playingPeriodHolder.prepared
          && (playingPeriodDurationUs == C.TIME_UNSET
              || playingPeriodDurationUs <= playbackInfo.positionUs);
  if (finishedRendering && pendingPauseAtEndOfPeriod) {
    pendingPauseAtEndOfPeriod = false;
    setPlayWhenReadyInternal(
        /* playWhenReady= */ false,
        playbackInfo.playbackSuppressionReason,
        /* operationAck= */ false,
        Player.PLAY_WHEN_READY_CHANGE_REASON_END_OF_MEDIA_ITEM);
  }
  if (finishedRendering && playingPeriodHolder.info.isFinal) {
    setState(Player.STATE_ENDED);
    stopRenderers(); // 播放完毕,停止渲染器们
  } else if (playbackInfo.playbackState == Player.STATE_BUFFERING
      && shouldTransitionToReadyState(renderersAllowPlayback)) {
    setState(Player.STATE_READY); // 准备就绪
    pendingRecoverableError = null; // Any pending error was successfully recovered from.
    if (shouldPlayWhenReady()) {
      startRenderers(); // 启动渲染器们
    }
  } else if (playbackInfo.playbackState == Player.STATE_READY
      && !(enabledRendererCount == 0 ? isTimelineReady() : renderersAllowPlayback)) {
    isRebuffering = shouldPlayWhenReady();
    setState(Player.STATE_BUFFERING); // 缓冲中
    。。。
    stopRenderers();
  }
  if (playbackInfo.playbackState == Player.STATE_BUFFERING) {
    for (int i = 0; i < renderers.length; i++) {
      if (isRendererEnabled(renderers[i])
          && renderers[i].getStream() == playingPeriodHolder.sampleStreams[i]) {
        renderers[i].maybeThrowStreamError();
      }
    }
    。。。
  }
  。。。
  if ((shouldPlayWhenReady() && playbackInfo.playbackState == Player.STATE_READY)
      || playbackInfo.playbackState == Player.STATE_BUFFERING) {
    sleepingForOffload = !maybeScheduleWakeup(operationStartTimeMs, ACTIVE_INTERVAL_MS);
  } else if (enabledRendererCount != 0 && playbackInfo.playbackState != Player.STATE_ENDED) {
    scheduleNextWork(operationStartTimeMs, IDLE_INTERVAL_MS);
  } else {
    handler.removeMessages(MSG_DO_SOME_WORK);
  }
  。。。
}
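doSomeWork 本身是被 ExoPlayerImplInternal 的 HandlerThread 消息循环反复调度的,结合上面结尾的 scheduleNextWork / MSG_DO_SOME_WORK,这个节奏可以用下面的简化示意来理解(示意而已,间隔常量和细节以实际源码为准):

// 简化示意:播放线程的消息循环驱动 doSomeWork,播放中约每 ACTIVE_INTERVAL_MS(10ms)醒一次,
// 空闲时退化为 IDLE_INTERVAL_MS(1000ms)一次
@Override
public boolean handleMessage(Message msg) {
  switch (msg.what) {
    case MSG_DO_SOME_WORK:
      doSomeWork(); // 每次醒来干一轮活:更新 period、驱动各 renderer.render()、更新状态
      break;
    // ... 其他消息
  }
  return true;
}

private void scheduleNextWork(long thisOperationStartTimeMs, long intervalMs) {
  handler.removeMessages(MSG_DO_SOME_WORK);
  handler.sendEmptyMessageAtTime(MSG_DO_SOME_WORK, thisOperationStartTimeMs + intervalMs);
}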

renderers 的来源,在学习02中提到过:builder 里传入的 renderersFactory,实际调用的是 DefaultRenderersFactory 的 createRenderers() 方法。

方法中,根据配置去build video、audio、text、meta等renderers....
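createRenderers() 的骨架大致如下(简化示意,各 buildXxxRenderers 的参数这里省略为 。。。,以实际源码为准),build 出来的列表最终转成 Renderer[] 返回:

@Override
public Renderer[] createRenderers(
    Handler eventHandler,
    VideoRendererEventListener videoRendererEventListener,
    AudioRendererEventListener audioRendererEventListener,
    TextOutput textRendererOutput,
    MetadataOutput metadataRendererOutput) {
  ArrayList<Renderer> renderersList = new ArrayList<>();
  buildVideoRenderers(。。。, renderersList);        // 里面 new 出 MediaCodecVideoRenderer
  buildAudioRenderers(。。。, renderersList);        // MediaCodecAudioRenderer + AudioSink
  buildTextRenderers(。。。, renderersList);
  buildMetadataRenderers(。。。, renderersList);
  buildCameraMotionRenderers(。。。, renderersList);
  buildMiscellaneousRenderers(。。。, renderersList);
  return renderersList.toArray(new Renderer[0]);
}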

1,默认的视频解码

MediaCodecVideoRenderer videoRenderer =
    new MediaCodecVideoRenderer(
        context,
        mediaCodecSelector,
        allowedVideoJoiningTimeMs,
        enableDecoderFallback,
        eventHandler,
        eventListener,
        MAX_DROPPED_VIDEO_FRAME_COUNT_TO_NOTIFY);

类的继承关系,换一个工具出张类图就很直观;下面直接顺着代码把继承链捋一遍。。。。

默认视频解码器实现类

/** * Decodes and renders video using {@link MediaCodec}. * * <p>This renderer accepts the following messages sent via {@link ExoPlayer#createMessage(Target)} * on the playback thread: * * <ul> * <li>Message with type {@link #MSG_SET_SURFACE} to set the output surface. The message payload * should be the target {@link Surface}, or null. * <li>Message with type {@link #MSG_SET_SCALING_MODE} to set the video scaling mode. The message * payload should be one of the integer scaling modes in {@link C.VideoScalingMode}. Note that * the scaling mode only applies if the {@link Surface} targeted by this renderer is owned by * a {@link android.view.SurfaceView}. * <li>Message with type {@link #MSG_SET_VIDEO_FRAME_METADATA_LISTENER} to set a listener for * metadata associated with frames being rendered. The message payload should be the {@link * VideoFrameMetadataListener}, or null. * </ul> * 使用MediaCodec解码和渲染视频。 * 该渲染器接受通过回放线程上的ExoPlayer.createMessage(com.google.android.exoplayer2.PlayerMessage.Target)发送的以下消息: * 类型为MSG_SET_SURFACE的消息,用于设置输出表面。 消息有效负载应为目标Surface,或者为null。 * 类型为MSG_SET_SCALING_MODE的消息,用于设置视频缩放模式。 消息有效负载应为C.VideoScalingMode中的整数缩放模式之一。 请注意,缩放模式仅在此渲染器定位的Surface属于android.view.SurfaceView的情况下适用。 * 类型为MSG_SET_VIDEO_FRAME_METADATA_LISTENER的消息,用于设置与正在渲染的帧关联的元数据的侦听器。 消息有效负载应为VideoFrameMetadataListener或为null。 */ public class MediaCodecVideoRenderer extends MediaCodecRenderer {}

继承自 MediaCodecRenderer 类

/**
 * An abstract renderer that uses {@link MediaCodec} to decode samples for rendering.
 * 使用{@link MediaCodec}解码要渲染的样本的抽象渲染器。
 */
public abstract class MediaCodecRenderer extends BaseRenderer {}

MediaCodecRenderer 又继承自 BaseRenderer

/**
 * An abstract base class suitable for most {@link Renderer} implementations.
 * 适用于大多数{@link Renderer}实现的抽象基类。
 */
public abstract class BaseRenderer implements Renderer, RendererCapabilities {}

实现了Renderer和RendererCapabilities接口

/** * 呈现从SampleStream读取的媒体。 * 在内部,渲染器的生命周期由拥有的ExoPlayer管理。 * 随着总体播放状态和启用的轨道发生变化,渲染器会通过各种状态进行转换。 有效状态转换如下所示,并标有每次转换期间调用的方法。 * Renders media read from a {@link SampleStream}. * * <p>Internally, a renderer's lifecycle is managed by the owning {@link ExoPlayer}. The renderer is * transitioned through various states as the overall playback state and enabled tracks change. The * valid state transitions are shown below, annotated with the methods that are called during each * transition. * * <p style="align:center"><img src="doc-files/renderer-states.svg" alt="Renderer state * transitions"> */ public interface Renderer extends PlayerMessage.Target {}


/** Defines the capabilities of a {@link Renderer}. */
public interface RendererCapabilities {}


Renderer又继承了PlayerMessage.Target

/**
 * 定义可以由 PlayerMessage.Sender 发送并由 PlayerMessage.Target 接收的播放器消息。
 * Defines a player message which can be sent with a {@link Sender} and received by a {@link
 * Target}.
 */
public final class PlayerMessage {

  /** A target for messages. */
  public interface Target {}

。。。

}
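业务层给 renderer 发消息,走的就是 PlayerMessage,用法大致像下面这样(示意;MSG_SET_SURFACE 这类常量在不同版本里位置不同,老版本在 C 里,新一些的版本挪进了 Renderer,以实际版本为准)。SimpleExoPlayer.setVideoSurface() 内部做的也是类似的事:

// 示意:在播放线程上把输出 Surface 发给视频 renderer
player
    .createMessage(videoRenderer)   // target 就是实现了 PlayerMessage.Target 的 Renderer
    .setType(C.MSG_SET_SURFACE)     // 常量位置随版本不同,仅作示意
    .setPayload(surface)
    .send();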

render方法的定义在Renderer接口中

/** * 增量渲染SampleStream。 * Incrementally renders the {@link SampleStream}. * * <p>If the renderer is in the {@link #STATE_ENABLED} state then each call to this method will do * work toward being ready to render the {@link SampleStream} when the renderer is started. If the * renderer is in the {@link #STATE_STARTED} state then calls to this method will render the * {@link SampleStream} in sync with the specified media positions. * 如果渲染器处于STATE_ENABLED状态,则在启动渲染器时,对此方法的每次调用都将为准备渲染SampleStream做准备。 * 如果渲染器处于STATE_STARTED状态,则对该方法的调用将渲染SampleStream与指定的媒体位置同步。 * * <p>The renderer may also render the very start of the media at the current position (e.g. the * first frame of a video stream) while still in the {@link #STATE_ENABLED} state, unless it's the * initial start of the media after calling {@link #enable(RendererConfiguration, Format[], * SampleStream, long, boolean, boolean, long, long)} with {@code mayRenderStartOfStream} set to * {@code false}. * 渲染器还可以在媒体仍处于STATE_ENABLED状态的情况下,在当前位置(例如视频流的第一帧)渲染媒体的最开始, * 除非它是在调用enable(RendererConfiguration,Format [], * 将mayRenderStartOfStream设置为false的SampleStream,long,boolean,boolean,long,long)。 * * <p>This method should return quickly, and should not block if the renderer is unable to make * useful progress.此方法应快速返回,并且如果渲染器无法取得有用的进展,则不应阻塞。 * * <p>This method may be called when the renderer is in the following states: {@link * #STATE_ENABLED}, {@link #STATE_STARTED}. * 当渲染器处于以下状态时,可以调用此方法:STATE_ENABLED,STATE_STARTED。 * * @param positionUs The current media time in microseconds, measured at the start of the current * iteration of the rendering loop.在渲染循环的当前迭代开始时测量的当前媒体时间(以微秒为单位)。 * @param elapsedRealtimeUs {@link android.os.SystemClock#elapsedRealtime()} in microseconds, * measured at the start of the current iteration of the rendering loop. * {@link android.os.SystemClock#elapsedRealtime()}(以微秒为单位),在渲染循环的当前迭代开始时进行测量。 * @throws ExoPlaybackException If an error occurs. */ void render(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException;

实现于 MediaCodecRenderer类。。。。

@Override
public void render(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException {
  Logger.w(TAG, positionUs, elapsedRealtimeUs); // 0,32253533000 | 0,32255136000 | 146675,32255311000
  if (pendingOutputEndOfStream) {
    Logger.w(TAG, "流结束信号 " + codecDrainAction);
    pendingOutputEndOfStream = false;
    processEndOfStream(); // 处理流结束信号。
  }
  if (pendingPlaybackException != null) {
    ExoPlaybackException playbackException = pendingPlaybackException;
    pendingPlaybackException = null;
    throw playbackException;
  }
  try {
    if (outputStreamEnded) {
      renderToEndOfStream(); // 增量渲染任何剩余的输出 - 无操作
      return;
    }
    if (inputFormat == null && !readToFlagsOnlyBuffer(/* requireFormat= */ true)) {
      // We still don't have a format and can't make progress without one.
      // 我们仍然没有格式,没有格式也无法取得进展。
      return;
    }
    // We have a format.
    maybeInitCodecOrBypass(); // 初始化解码器或直通
    if (bypassEnabled) {
      TraceUtil.beginSection("bypassRender");
      while (bypassRender(positionUs, elapsedRealtimeUs)) {} // 这个是音频直通吧
      TraceUtil.endSection();
    } else if (codec != null) {
      long renderStartTimeMs = SystemClock.elapsedRealtime();
      TraceUtil.beginSection("drainAndFeed"); // 生产与消费
      while (drainOutputBuffer(positionUs, elapsedRealtimeUs)
          && shouldContinueRendering(renderStartTimeMs)) {} // 消耗解码数据
      while (feedInputBuffer() && shouldContinueRendering(renderStartTimeMs)) {} // 填充源数据
      TraceUtil.endSection();
    } else {
      decoderCounters.skippedInputBufferCount += skipSource(positionUs);
      // We need to read any format changes despite not having a codec so that drmSession can be
      // updated, and so that we have the most recent format should the codec be initialized. We
      // may also reach the end of the stream. Note that readSource will not read a sample into a
      // flags-only buffer.
      // 尽管没有编解码器,我们仍需要读取任何格式更改,以便可以更新 drmSession,
      // 并在初始化编解码器时拥有最新的格式。我们也可能到达流的尽头。
      // 请注意,readSource 不会将样本读取到仅标志缓冲区中。
      readToFlagsOnlyBuffer(/* requireFormat= */ false);
    }
    decoderCounters.ensureUpdated();
  } catch (IllegalStateException e) {
    if (isMediaCodecException(e)) {
      throw createRendererException(createDecoderException(e, getCodecInfo()), inputFormat);
    }
    throw e;
  }
}

trace log:

 playWhenReady [eventTime=0.00, mediaPos=0.00, window=0, true, USER_REQUEST]
 timeline [eventTime=0.02, mediaPos=0.00, window=0, periodCount=1, windowCount=1, reason=PLAYLIST_CHANGED
   period [?]
   window [?, seekable=false, dynamic=true]
 ]
 mediaItem [eventTime=0.02, mediaPos=0.00, window=0, reason=PLAYLIST_CHANGED]
 state [eventTime=0.03, mediaPos=0.00, window=0, BUFFERING]
 surfaceSize [eventTime=0.10, mediaPos=0.00, window=0, 1920, 1080]
 loading [eventTime=0.12, mediaPos=0.00, window=0, period=0, true]
 timeline [eventTime=0.12, mediaPos=0.00, window=0, period=0, periodCount=1, windowCount=1, reason=SOURCE_UPDATE
   period [?]
   window [?, seekable=false, dynamic=false]
 ]
 timeline [eventTime=0.62, mediaPos=0.00, window=0, period=0, periodCount=1, windowCount=1, reason=SOURCE_UPDATE
   period [92.49]
   window [92.49, seekable=true, dynamic=false]
 ]
 videoEnabled [eventTime=0.77, mediaPos=0.00, window=0, period=0]
 audioEnabled [eventTime=0.77, mediaPos=0.00, window=0, period=0]
 tracks [eventTime=0.77, mediaPos=0.00, window=0, period=0
   MediaCodecVideoRenderer [
     Group:0, adaptive_supported=N/A [
       [X] Track:0, id=1, mimeType=video/avc, codecs=avc1.640029, res=1440x1080, language=und, supported=YES
     ]
   ]
   MediaCodecAudioRenderer [
     Group:0, adaptive_supported=N/A [
       [X] Track:0, id=2, mimeType=audio/ac3, channels=6, sample_rate=48000, language=en, supported=YES
     ]
   ]
   TextRenderer []
   MetadataRenderer []
   CameraMotionRenderer []
 ]
 downstreamFormat [eventTime=0.78, mediaPos=0.00, window=0, period=0, id=1, mimeType=video/avc, codecs=avc1.640029, res=1440x1080, language=und]
 videoDecoderInitialized [eventTime=0.85, mediaPos=0.00, window=0, period=0, OMX.hisi.video.decoder.avc]
 videoInputFormat [eventTime=0.85, mediaPos=0.00, window=0, period=0, id=1, mimeType=video/avc, codecs=avc1.640029, res=1440x1080, language=und]
 downstreamFormat [eventTime=0.85, mediaPos=0.00, window=0, period=0, id=2, mimeType=audio/ac3, channels=6, sample_rate=48000, language=en]
 audioInputFormat [eventTime=0.85, mediaPos=0.00, window=0, period=0, id=2, mimeType=audio/ac3, channels=6, sample_rate=48000, language=en]
 videoSize [eventTime=0.92, mediaPos=0.00, window=0, period=0, 1440, 1080]
 renderedFirstFrame [eventTime=0.92, mediaPos=0.00, window=0, period=0, Surface(name=null)/@0xbccb2c3]
 state [eventTime=2.40, mediaPos=0.00, window=0, period=0, READY]
 isPlaying [eventTime=2.42, mediaPos=0.00, window=0, period=0, true]
 surfaceSize [eventTime=10.24, mediaPos=7.83, window=0, period=0, 0, 0]
 droppedFrames [eventTime=10.56, mediaPos=7.88, window=0, period=0, 1]
 videoDisabled [eventTime=10.56, mediaPos=7.88, window=0, period=0]
 audioDisabled [eventTime=10.56, mediaPos=7.88, window=0, period=0]
 videoDecoderReleased [eventTime=10.56, mediaPos=7.88, window=0, period=0, OMX.hisi.video.decoder.avc]

默认自带的日志就很全。。。。
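上面这份日志来自 ExoPlayer 自带的 EventLogger(一个 AnalyticsListener 实现),自己调试时挂上即可(示意,trackSelector 即前面创建播放器时用的那个 DefaultTrackSelector):

// 示意:给播放器挂上 EventLogger,就能得到上面那种 playWhenReady/state/tracks 日志
player.addAnalyticsListener(new EventLogger(trackSelector));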

如果有格式,那就调用maybeInitCodecOrBypass()去初始化解码器或者直通,决定数据流向。

后面的 if/else 分支,分别对应直通、解码、还没有解码器三种情况,见下面的简化示意。
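把上面 render() 里的这段分支抠出来,简化示意就是:

// render() 核心分支的简化示意(略去 trace、异常与计数处理)
maybeInitCodecOrBypass();                                        // 决定走直通还是建解码器
if (bypassEnabled) {
  while (bypassRender(positionUs, elapsedRealtimeUs)) {}         // 直通:样本不过 MediaCodec
} else if (codec != null) {
  while (drainOutputBuffer(positionUs, elapsedRealtimeUs)
      && shouldContinueRendering(renderStartTimeMs)) {}          // 消费:取解码输出去渲染
  while (feedInputBuffer() && shouldContinueRendering(renderStartTimeMs)) {} // 生产:喂入待解码数据
} else {
  decoderCounters.skippedInputBufferCount += skipSource(positionUs);         // 还没有解码器:跳样本、只读格式
  readToFlagsOnlyBuffer(/* requireFormat= */ false);
}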

先看maybeInitCodecOrBypass方法

protected final void maybeInitCodecOrBypass() throws ExoPlaybackException { if (codec != null || bypassEnabled || inputFormat == null) { // We have a codec, are bypassing it, or don't have a format to decide how to render. return; } if (sourceDrmSession == null && shouldUseBypass(inputFormat)) { initBypass(inputFormat); return; } setCodecDrmSession(sourceDrmSession); String mimeType = inputFormat.sampleMimeType; Logger.w(TAG,"maybeInitCodecOrBypass",mimeType,codecDrmSession);//video/avc,null if (codecDrmSession != null) { if (mediaCrypto == null) { @Nullable FrameworkMediaCrypto sessionMediaCrypto = getFrameworkMediaCrypto(codecDrmSession); if (sessionMediaCrypto == null) { @Nullable DrmSessionException drmError = codecDrmSession.getError(); if (drmError != null) { // Continue for now. We may be able to avoid failure if a new input format causes the // session to be replaced without it having been used. } else { // The drm session isn't open yet. return; } } else { try { mediaCrypto = new MediaCrypto(sessionMediaCrypto.uuid, sessionMediaCrypto.sessionId); } catch (MediaCryptoException e) { throw createRendererException(e, inputFormat); } mediaCryptoRequiresSecureDecoder = !sessionMediaCrypto.forceAllowInsecureDecoderComponents && mediaCrypto.requiresSecureDecoderComponent(mimeType); } } if (FrameworkMediaCrypto.WORKAROUND_DEVICE_NEEDS_KEYS_TO_CONFIGURE_CODEC) { @DrmSession.State int drmSessionState = codecDrmSession.getState(); if (drmSessionState == DrmSession.STATE_ERROR) { throw createRendererException(codecDrmSession.getError(), inputFormat); } else if (drmSessionState != DrmSession.STATE_OPENED_WITH_KEYS) { // Wait for keys. return; } } } try { maybeInitCodecWithFallback(mediaCrypto, mediaCryptoRequiresSecureDecoder); } catch (DecoderInitializationException e) { throw createRendererException(e, inputFormat); } }

这里做了各种 DRM 判断,用来处理加密内容的情况。

其中 直通的流向

if (sourceDrmSession == null && shouldUseBypass(inputFormat)) {
  initBypass(inputFormat);
  return;
}

其中 shouldUseBypass()在MediaCodecAudioRenderer类中做了重写

@Override
protected boolean shouldUseBypass(Format format) {
  return audioSink.supportsFormat(format);
}

调用的是 AudioSink 接口的 supportsFormat()

/**
 * Returns whether the sink supports a given {@link Format}.
 *
 * @param format The format.
 * @return Whether the sink supports the format.
 */
boolean supportsFormat(Format format);

其实现(DefaultAudioSink)中调用了 getFormatSupport() 方法

@Override @SinkFormatSupport public int getFormatSupport(Format format) { if (MimeTypes.AUDIO_RAW.equals(format.sampleMimeType)) { if (!Util.isEncodingLinearPcm(format.pcmEncoding)) { Log.w(TAG, "Invalid PCM encoding: " + format.pcmEncoding); return SINK_FORMAT_UNSUPPORTED; } if (format.pcmEncoding == C.ENCODING_PCM_16BIT || (enableFloatOutput && format.pcmEncoding == C.ENCODING_PCM_FLOAT)) { return SINK_FORMAT_SUPPORTED_DIRECTLY; } // We can resample all linear PCM encodings to 16-bit integer PCM, which AudioTrack is // guaranteed to support. return SINK_FORMAT_SUPPORTED_WITH_TRANSCODING; } if (enableOffload && !offloadDisabledUntilNextConfiguration && isOffloadedPlaybackSupported(format, audioAttributes)) { return SINK_FORMAT_SUPPORTED_DIRECTLY; } if (isPassthroughPlaybackSupported(format, audioCapabilities)) { Logger.w(TAG,"getFormatSupport",Format.toLogString(format),audioCapabilities.toString(),"直通支持"); return SINK_FORMAT_SUPPORTED_DIRECTLY; } return SINK_FORMAT_UNSUPPORTED; }

只要返回值不是 SINK_FORMAT_UNSUPPORTED,supportsFormat() 就返回 true。
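对应到 DefaultAudioSink,supportsFormat() 的实现大致就是一句话(示意,以实际源码为准):

@Override
public boolean supportsFormat(Format format) {
  return getFormatSupport(format) != SINK_FORMAT_UNSUPPORTED;
}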

从而初始化 bypass(直通)

/**
 * Configures rendering where no codec is used. Called instead of {@link
 * #configureCodec(MediaCodecInfo, MediaCodecAdapter, Format, MediaCrypto, float)} when no codec
 * is used to render.
 * 在不使用编解码器的情况下配置渲染。当不用编解码器来渲染时,
 * 调用的是这个方法而不是 configureCodec(MediaCodecInfo, MediaCodecAdapter, Format, MediaCrypto, float)。
 */
private void initBypass(Format format) {
  disableBypass(); // In case of transition between 2 bypass formats.
  String mimeType = format.sampleMimeType;
  if (!MimeTypes.AUDIO_AAC.equals(mimeType)
      && !MimeTypes.AUDIO_MPEG.equals(mimeType)
      && !MimeTypes.AUDIO_OPUS.equals(mimeType)) {
    // TODO(b/154746451): Batching provokes frame drops in non offload.
    bypassBatchBuffer.setMaxSampleCount(1);
  } else {
    bypassBatchBuffer.setMaxSampleCount(BatchBuffer.DEFAULT_MAX_SAMPLE_COUNT);
  }
  bypassEnabled = true;
}

这样render()方法中就不会再去解码,直接输出

while (bypassRender(positionUs, elapsedRealtimeUs)) {}

不解码的处理逻辑

/** * Processes any pending batch of buffers without using a decoder, and drains a new batch of * buffers from the source. * * 在不使用解码器的情况下处理任何待处理的缓冲区批处理,并从源中排出新一批的缓冲区。 * * 参数: * positionUs –当前媒体时间(以微秒为单位),在渲染循环的当前迭代开始时测量。 * elapsedRealtimeUs – SystemClock.elapsedRealtime(),以微秒为单位,在渲染循环的当前迭代开始时进行测量。 * 返回值: * 是否立即再次调用此方法将取得更大的进展。 * * @param positionUs The current media time in microseconds, measured at the start of the current * iteration of the rendering loop. * @param elapsedRealtimeUs {@link SystemClock#elapsedRealtime()} in microseconds, measured at the * start of the current iteration of the rendering loop. * @return Whether immediately calling this method again will make more progress. * @throws ExoPlaybackException If an error occurred while processing a buffer or handling a * format change. */ private boolean bypassRender(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException { // Process any batched data分批处理数据. checkState(!outputStreamEnded); if (bypassBatchBuffer.hasSamples()) { if (processOutputBuffer( positionUs, elapsedRealtimeUs, /* codec= */ null, bypassBatchBuffer.data, outputIndex, /* bufferFlags= */ 0, bypassBatchBuffer.getSampleCount(), bypassBatchBuffer.getFirstSampleTimeUs(), bypassBatchBuffer.isDecodeOnly(), bypassBatchBuffer.isEndOfStream(), outputFormat)) { // The batch buffer has been fully processed. onProcessedOutputBuffer(bypassBatchBuffer.getLastSampleTimeUs()); bypassBatchBuffer.clear(); } else { // Could not process the whole batch buffer. Try again later. return false; } } // Process end of stream, if reached. if (inputStreamEnded) { outputStreamEnded = true; return false; } if (bypassSampleBufferPending) { Assertions.checkState(bypassBatchBuffer.append(bypassSampleBuffer)); bypassSampleBufferPending = false; } if (bypassDrainAndReinitialize) { if (bypassBatchBuffer.hasSamples()) { // This can only happen if bypassSampleBufferPending was true above. Return true to try and // immediately process the sample, which has now been appended to the batch buffer. return true; } // The new format might require using a codec rather than bypass. disableBypass(); bypassDrainAndReinitialize = false; maybeInitCodecOrBypass(); if (!bypassEnabled) { // We're no longer in bypass mode. return false; } } // Read from the input, appending any sample buffers to the batch buffer. bypassRead(); if (bypassBatchBuffer.hasSamples()) { bypassBatchBuffer.flip(); } // We can make more progress if we have batched data, an EOS, or a re-initialization to process // (note that one or more of the code blocks above will be executed during the next call). return bypassBatchBuffer.hasSamples() || inputStreamEnded || bypassDrainAndReinitialize; }

然后去不断读取流

private void bypassRead() throws ExoPlaybackException { checkState(!inputStreamEnded); FormatHolder formatHolder = getFormatHolder(); bypassSampleBuffer.clear(); while (true) { bypassSampleBuffer.clear(); @SampleStream.ReadDataResult int result = readSource(formatHolder, bypassSampleBuffer, /* formatRequired= */ false); switch (result) { case C.RESULT_FORMAT_READ: onInputFormatChanged(formatHolder); return; case C.RESULT_NOTHING_READ: return; case C.RESULT_BUFFER_READ: if (bypassSampleBuffer.isEndOfStream()) { inputStreamEnded = true; return; } if (waitingForFirstSampleInFormat) { // This is the first buffer in a new format, the output format must be updated. outputFormat = checkNotNull(inputFormat); onOutputFormatChanged(outputFormat, /* mediaFormat= */ null); waitingForFirstSampleInFormat = false; } // Try to append the buffer to the batch buffer. bypassSampleBuffer.flip(); if (!bypassBatchBuffer.append(bypassSampleBuffer)) { bypassSampleBufferPending = true; return; } break; default: throw new IllegalStateException(); } } }

调用了BaseRenderer中的读流方法

/** * Reads from the enabled upstream source. If the upstream source has been read to the end then * {@link C#RESULT_BUFFER_READ} is only returned if {@link #setCurrentStreamFinal()} has been * called. {@link C#RESULT_NOTHING_READ} is returned otherwise. * 从启用的上游源中读取。 如果上游源已读到末尾,则仅在调用setCurrentStreamFinal()的情况下才返回C.RESULT_BUFFER_READ。 否则返回C.RESULT_NOTHING_READ。 * 当渲染器处于以下状态时,可以调用此方法:STATE_ENABLED,STATE_STARTED。 * * <p>This method may be called when the renderer is in the following states: {@link * #STATE_ENABLED}, {@link #STATE_STARTED}. * * @param formatHolder A {@link FormatHolder} to populate in the case of reading a format. * @param buffer A {@link DecoderInputBuffer} to populate in the case of reading a sample or the * end of the stream. If the end of the stream has been reached, the {@link * C#BUFFER_FLAG_END_OF_STREAM} flag will be set on the buffer. * @param formatRequired Whether the caller requires that the format of the stream be read even if * it's not changing. A sample will never be read if set to true, however it is still possible * for the end of stream or nothing to be read. * @return The status of read, one of {@link SampleStream.ReadDataResult}. */ @SampleStream.ReadDataResult protected final int readSource( FormatHolder formatHolder, DecoderInputBuffer buffer, boolean formatRequired) { @SampleStream.ReadDataResult int result = Assertions.checkNotNull(stream).readData(formatHolder, buffer, formatRequired); if (result == C.RESULT_BUFFER_READ) { if (buffer.isEndOfStream()) { readingPositionUs = C.TIME_END_OF_SOURCE; return streamIsFinal ? C.RESULT_BUFFER_READ : C.RESULT_NOTHING_READ; } buffer.timeUs += streamOffsetUs; readingPositionUs = max(readingPositionUs, buffer.timeUs); } else if (result == C.RESULT_FORMAT_READ) { Format format = Assertions.checkNotNull(formatHolder.format); if (format.subsampleOffsetUs != Format.OFFSET_SAMPLE_RELATIVE) { format = format .buildUpon() .setSubsampleOffsetUs(format.subsampleOffsetUs + streamOffsetUs) .build(); formatHolder.format = format; } } return result; }

到了读取流的接口。。。。

/** * A stream of media samples (and associated format information). */ public interface SampleStream { /** Return values of {@link #readData(FormatHolder, DecoderInputBuffer, boolean)}. */ @Documented @Retention(RetentionPolicy.SOURCE) @IntDef({C.RESULT_NOTHING_READ, C.RESULT_FORMAT_READ, C.RESULT_BUFFER_READ}) @interface ReadDataResult {} /** * Returns whether data is available to be read. * <p> * Note: If the stream has ended then a buffer with the end of stream flag can always be read from * {@link #readData(FormatHolder, DecoderInputBuffer, boolean)}. Hence an ended stream is always * ready. * * @return Whether data is available to be read. */ boolean isReady(); /** * Throws an error that's preventing data from being read. Does nothing if no such error exists. * * @throws IOException The underlying error. */ void maybeThrowError() throws IOException; /** * Attempts to read from the stream. * * <p>If the stream has ended then {@link C#BUFFER_FLAG_END_OF_STREAM} flag is set on {@code * buffer} and {@link C#RESULT_BUFFER_READ} is returned. Else if no data is available then {@link * C#RESULT_NOTHING_READ} is returned. Else if the format of the media is changing or if {@code * formatRequired} is set then {@code formatHolder} is populated and {@link C#RESULT_FORMAT_READ} * is returned. Else {@code buffer} is populated and {@link C#RESULT_BUFFER_READ} is returned. * 尝试从流中读取。 * 如果流已结束,则在缓冲区上设置C.BUFFER_FLAG_END_OF_STREAM标志,并返回C.RESULT_BUFFER_READ。 * 否则,如果没有可用数据,则返回C.RESULT_NOTHING_READ。 * 否则,如果媒体的格式正在更改,或者如果设置了formatRequired,则填充formatHolder并返回C.RESULT_FORMAT_READ。 * 填充其他缓冲区,并返回C.RESULT_BUFFER_READ。 * * @param formatHolder A {@link FormatHolder} to populate in the case of reading a format. * @param buffer A {@link DecoderInputBuffer} to populate in the case of reading a sample or the * end of the stream. If the end of the stream has been reached, the {@link * C#BUFFER_FLAG_END_OF_STREAM} flag will be set on the buffer. If a {@link * DecoderInputBuffer#isFlagsOnly() flags-only} buffer is passed, then no {@link * DecoderInputBuffer#data} will be read and the read position of the stream will not change, * but the flags of the buffer will be populated. * @param formatRequired Whether the caller requires that the format of the stream be read even if * it's not changing. A sample will never be read if set to true, however it is still possible * for the end of stream or nothing to be read. * @return The status of read, one of {@link ReadDataResult}. */ @ReadDataResult int readData(FormatHolder formatHolder, DecoderInputBuffer buffer, boolean formatRequired); /** * Attempts to skip to the keyframe before the specified position, or to the end of the stream if * {@code positionUs} is beyond it. * * @param positionUs The specified time. * @return The number of samples that were skipped. */ int skipData(long positionUs); }

读流后,返回结果,并根据结果进行处理

/** A return value for methods where nothing was read. */
public static final int RESULT_NOTHING_READ = -3;

/** A return value for methods where a buffer was read. */
public static final int RESULT_BUFFER_READ = -4;

/** A return value for methods where a format was read. */
public static final int RESULT_FORMAT_READ = -5;

直接看case RESULT_BUFFER_READ

case C.RESULT_BUFFER_READ: if (bypassSampleBuffer.isEndOfStream()) { inputStreamEnded = true; return; } if (waitingForFirstSampleInFormat) { // This is the first buffer in a new format, the output format must be updated.这是新格式的第一个缓冲区,必须更新输出格式。 outputFormat = checkNotNull(inputFormat); onOutputFormatChanged(outputFormat, /* mediaFormat= */ null); waitingForFirstSampleInFormat = false; } // Try to append the buffer to the batch buffer.尝试将缓冲区追加到批处理缓冲区。 bypassSampleBuffer.flip(); if (!bypassBatchBuffer.append(bypassSampleBuffer)) { bypassSampleBufferPending = true; return; } break;

新格式下头一次读到样本时,会回调 onOutputFormatChanged(),更新输出格式

/**
 * Called when one of the output formats changes.
 *
 * <p>The default implementation is a no-op.
 *
 * @param format The input {@link Format} to which future output now corresponds. If the renderer
 *     is in bypass mode, this is also the output format.
 * @param mediaFormat The codec output {@link MediaFormat}, or {@code null} if the renderer is in
 *     bypass mode.
 * @throws ExoPlaybackException Thrown if an error occurs configuring the output.
 */
protected void onOutputFormatChanged(Format format, @Nullable MediaFormat mediaFormat)
    throws ExoPlaybackException {
  // Do nothing.
}

视频 音频分别进行了实现

视频

@Override protected void onOutputFormatChanged(Format format, @Nullable MediaFormat mediaFormat) { @Nullable MediaCodecAdapter codec = getCodec(); if (codec != null) { // Must be applied each time the output format changes. codec.setVideoScalingMode(scalingMode); } if (tunneling) { currentWidth = format.width; currentHeight = format.height; } else { Assertions.checkNotNull(mediaFormat); boolean hasCrop = mediaFormat.containsKey(KEY_CROP_RIGHT) && mediaFormat.containsKey(KEY_CROP_LEFT) && mediaFormat.containsKey(KEY_CROP_BOTTOM) && mediaFormat.containsKey(KEY_CROP_TOP); currentWidth = hasCrop ? mediaFormat.getInteger(KEY_CROP_RIGHT) - mediaFormat.getInteger(KEY_CROP_LEFT) + 1 : mediaFormat.getInteger(MediaFormat.KEY_WIDTH); currentHeight = hasCrop ? mediaFormat.getInteger(KEY_CROP_BOTTOM) - mediaFormat.getInteger(KEY_CROP_TOP) + 1 : mediaFormat.getInteger(MediaFormat.KEY_HEIGHT); } currentPixelWidthHeightRatio = format.pixelWidthHeightRatio; if (Util.SDK_INT >= 21) { // On API level 21 and above the decoder applies the rotation when rendering to the surface. // Hence currentUnappliedRotation should always be 0. For 90 and 270 degree rotations, we need // to flip the width, height and pixel aspect ratio to reflect the rotation that was applied. if (format.rotationDegrees == 90 || format.rotationDegrees == 270) { int rotatedHeight = currentWidth; currentWidth = currentHeight; currentHeight = rotatedHeight; currentPixelWidthHeightRatio = 1 / currentPixelWidthHeightRatio; } } else { // On API level 20 and below the decoder does not apply the rotation. currentUnappliedRotationDegrees = format.rotationDegrees; } frameReleaseHelper.onFormatChanged(format.frameRate); }

音频

@Override protected void onOutputFormatChanged(Format format, @Nullable MediaFormat mediaFormat) throws ExoPlaybackException { Format audioSinkInputFormat; @Nullable int[] channelMap = null; if (decryptOnlyCodecFormat != null) { // Direct playback with a codec for decryption. audioSinkInputFormat = decryptOnlyCodecFormat; } else if (getCodec() == null) { // Direct playback with codec bypass. audioSinkInputFormat = format; } else { @C.PcmEncoding int pcmEncoding; if (MimeTypes.AUDIO_RAW.equals(format.sampleMimeType)) { // For PCM streams, the encoder passes through int samples despite set to float mode. pcmEncoding = format.pcmEncoding; } else if (Util.SDK_INT >= 24 && mediaFormat.containsKey(MediaFormat.KEY_PCM_ENCODING)) { pcmEncoding = mediaFormat.getInteger(MediaFormat.KEY_PCM_ENCODING); } else if (mediaFormat.containsKey(VIVO_BITS_PER_SAMPLE_KEY)) { pcmEncoding = Util.getPcmEncoding(mediaFormat.getInteger(VIVO_BITS_PER_SAMPLE_KEY)); } else { // If the format is anything other than PCM then we assume that the audio decoder will // output 16-bit PCM. pcmEncoding = MimeTypes.AUDIO_RAW.equals(format.sampleMimeType) ? format.pcmEncoding : C.ENCODING_PCM_16BIT; } audioSinkInputFormat = new Format.Builder() .setSampleMimeType(MimeTypes.AUDIO_RAW) .setPcmEncoding(pcmEncoding) .setEncoderDelay(format.encoderDelay) .setEncoderPadding(format.encoderPadding) .setChannelCount(mediaFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT)) .setSampleRate(mediaFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE)) .build(); if (codecNeedsDiscardChannelsWorkaround && audioSinkInputFormat.channelCount == 6 && format.channelCount < 6) { channelMap = new int[format.channelCount]; for (int i = 0; i < format.channelCount; i++) { channelMap[i] = i; } } } try { audioSink.configure(audioSinkInputFormat, /* specifiedBufferSize= */ 0, channelMap); } catch (AudioSink.ConfigurationException e) { throw createRendererException(e, e.format); } }

先看视频:除了根据 MediaFormat 更新宽高、旋转等信息,还通过 frameReleaseHelper 去更新了 surface 的帧率

/**
 * Called when the renderer's output format changes.
 *
 * @param formatFrameRate The format's frame rate, or {@link Format#NO_VALUE} if unknown.
 */
public void onFormatChanged(float formatFrameRate) {
  this.formatFrameRate = formatFrameRate;
  frameRateEstimator.reset();
  updateSurfaceMediaFrameRate();
}

更新 surface 的媒体帧率

/** * Updates the media frame rate that's used to calculate the playback frame rate of the current * {@link #surface}. If the frame rate is updated then {@link #updateSurfacePlaybackFrameRate} is * called to update the surface. * 更新用于计算当前表面的播放帧速率的媒体帧速率。 如果更新了帧速率,则将调用updateSurfacePlaybackFrameRate来更新曲面。 */ private void updateSurfaceMediaFrameRate() { if (Util.SDK_INT < 30 || surface == null) { return; } float candidateFrameRate = frameRateEstimator.isSynced() ? frameRateEstimator.getFrameRate() : formatFrameRate; if (candidateFrameRate == surfaceMediaFrameRate) { return; } // The candidate is different to the current surface media frame rate. Decide whether to update // the surface media frame rate. boolean shouldUpdate; if (candidateFrameRate != Format.NO_VALUE && surfaceMediaFrameRate != Format.NO_VALUE) { boolean candidateIsHighConfidence = frameRateEstimator.isSynced() && frameRateEstimator.getMatchingFrameDurationSumNs() >= MINIMUM_MATCHING_FRAME_DURATION_FOR_HIGH_CONFIDENCE_NS; float minimumChangeForUpdate = candidateIsHighConfidence ? MINIMUM_MEDIA_FRAME_RATE_CHANGE_FOR_UPDATE_HIGH_CONFIDENCE : MINIMUM_MEDIA_FRAME_RATE_CHANGE_FOR_UPDATE_LOW_CONFIDENCE; shouldUpdate = Math.abs(candidateFrameRate - surfaceMediaFrameRate) >= minimumChangeForUpdate; } else if (candidateFrameRate != Format.NO_VALUE) { shouldUpdate = true; } else { shouldUpdate = frameRateEstimator.getFramesWithoutSyncCount() >= MINIMUM_FRAMES_WITHOUT_SYNC_TO_CLEAR_SURFACE_FRAME_RATE; } if (shouldUpdate) { surfaceMediaFrameRate = candidateFrameRate; updateSurfacePlaybackFrameRate(/* isNewSurface= */ false); } }

对 SDK 版本还有要求

/** * Updates the playback frame rate of the current {@link #surface} based on the playback speed, * frame rate of the content, and whether the renderer is started. * 根据回放速度,内容的帧速率以及渲染器是否启动,更新当前表面的回放帧速率。 * * @param isNewSurface Whether the current {@link #surface} is new. */ private void updateSurfacePlaybackFrameRate(boolean isNewSurface) { if (Util.SDK_INT < 30 || surface == null) { return; } float surfacePlaybackFrameRate = 0; if (started && surfaceMediaFrameRate != Format.NO_VALUE) { surfacePlaybackFrameRate = surfaceMediaFrameRate * playbackSpeed; } // We always set the frame-rate if we have a new surface, since we have no way of knowing what // it might have been set to previously. if (!isNewSurface && this.surfacePlaybackFrameRate == surfacePlaybackFrameRate) { return; } this.surfacePlaybackFrameRate = surfacePlaybackFrameRate; setSurfaceFrameRateV30(surface, surfacePlaybackFrameRate); }

看起来是30以上才需要这么做

@RequiresApi(30)
private static void setSurfaceFrameRateV30(Surface surface, float frameRate) {
  int compatibility =
      frameRate == 0
          ? Surface.FRAME_RATE_COMPATIBILITY_DEFAULT
          : Surface.FRAME_RATE_COMPATIBILITY_FIXED_SOURCE;
  try {
    surface.setFrameRate(frameRate, compatibility);
  } catch (IllegalStateException e) {
    Log.e(TAG, "Failed to call Surface.setFrameRate", e);
  }
}

sdk小于30,从一开始就return了。。。。

回到 MediaCodecRenderer.bypassRead()

就开始吐数据了

// Try to append the buffer to the batch buffer. 尝试将缓冲区追加到批处理缓冲区。
bypassSampleBuffer.flip();

翻转数据

/**
 * Flips {@link #data} and {@link #supplementalData} in preparation for being queued to a decoder.
 * 翻转 data 和 supplementalData,为排队送入解码器做准备。
 *
 * @see java.nio.Buffer#flip()
 */
public final void flip() {
  if (data != null) {
    data.flip();
  }
  if (supplementalData != null) {
    supplementalData.flip();
  }
}
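flip() 就是 java.nio.Buffer 的"写转读"语义,一个最小例子:

ByteBuffer buf = ByteBuffer.allocate(8);
buf.put((byte) 1);
buf.put((byte) 2);   // 写模式:position=2, limit=8
buf.flip();          // 读模式:position=0, limit=2,接下来从头读出刚写入的 2 个字节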

然后附加到缓冲区

/**
 * Attempts to append the provided buffer.
 * 尝试附加提供的缓冲区。
 *
 * @param buffer The buffer to try and append.
 * @return Whether the buffer was successfully appended.
 * @throws IllegalArgumentException If the {@code buffer} is encrypted, has supplemental data, or
 *     is an end of stream buffer, none of which are supported.
 */
public boolean append(DecoderInputBuffer buffer) {
  checkArgument(!buffer.isEncrypted());
  checkArgument(!buffer.hasSupplementalData());
  checkArgument(!buffer.isEndOfStream());
  if (!canAppendSampleBuffer(buffer)) {
    return false;
  }
  if (sampleCount++ == 0) {
    timeUs = buffer.timeUs;
    if (buffer.isKeyFrame()) {
      setFlags(C.BUFFER_FLAG_KEY_FRAME);
    }
  }
  if (buffer.isDecodeOnly()) {
    setFlags(C.BUFFER_FLAG_DECODE_ONLY);
  }
  @Nullable ByteBuffer bufferData = buffer.data;
  if (bufferData != null) {
    ensureSpaceForWrite(bufferData.remaining());
    data.put(bufferData);
  }
  lastSampleTimeUs = buffer.timeUs;
  return true;
}

终于,直通输出了。。。。。。。。

当然,如果不是直通。。。。就得解码了。。。。

回到MediaCodecRenderer.maybeInitCodecOrBypass(),忽略直通,继续往下看。。。。

先判断是否有Drm信息,然后去寻找可用的解码器

private void maybeInitCodecWithFallback( MediaCrypto crypto, boolean mediaCryptoRequiresSecureDecoder) throws DecoderInitializationException { if (availableCodecInfos == null) { try { List<MediaCodecInfo> allAvailableCodecInfos = getAvailableCodecInfos(mediaCryptoRequiresSecureDecoder); Logger.w(TAG,"maybeInitCodecWithFallback allAvailableCodecInfos ",allAvailableCodecInfos);//[OMX.hisi.video.decoder.avc, OMX.google.h264.decoder] availableCodecInfos = new ArrayDeque<>(); if (enableDecoderFallback) { availableCodecInfos.addAll(allAvailableCodecInfos); } else if (!allAvailableCodecInfos.isEmpty()) { availableCodecInfos.add(allAvailableCodecInfos.get(0)); } preferredDecoderInitializationException = null; } catch (DecoderQueryException e) { throw new DecoderInitializationException( inputFormat, e, mediaCryptoRequiresSecureDecoder, DecoderInitializationException.DECODER_QUERY_ERROR); } } if (availableCodecInfos.isEmpty()) { throw new DecoderInitializationException( inputFormat, /* cause= */ null, mediaCryptoRequiresSecureDecoder, DecoderInitializationException.NO_SUITABLE_DECODER_ERROR); } while (codec == null) { MediaCodecInfo codecInfo = availableCodecInfos.peekFirst(); Logger.w(TAG,"maybeInitCodecWithFallback codecInfo ",codecInfo);//OMX.hisi.video.decoder.avc if (!shouldInitCodec(codecInfo)) { return; } try { initCodec(codecInfo, crypto); } catch (Exception e) { Log.w(TAG, "Failed to initialize decoder: " + codecInfo, e); // This codec failed to initialize, so fall back to the next codec in the list (if any). We // won't try to use this codec again unless there's a format change or the renderer is // disabled and re-enabled. availableCodecInfos.removeFirst(); DecoderInitializationException exception = new DecoderInitializationException( inputFormat, e, mediaCryptoRequiresSecureDecoder, codecInfo); if (preferredDecoderInitializationException == null) { preferredDecoderInitializationException = exception; } else { preferredDecoderInitializationException = preferredDecoderInitializationException.copyWithFallbackException(exception); } if (availableCodecInfos.isEmpty()) { throw preferredDecoderInitializationException; } } } availableCodecInfos = null; }

循环遍历系统当前可用的解码器,找到后去尝试初始化

private void initCodec(MediaCodecInfo codecInfo, MediaCrypto crypto) throws Exception { long codecInitializingTimestamp; long codecInitializedTimestamp; @Nullable MediaCodecAdapter codecAdapter = null; String codecName = codecInfo.name; float codecOperatingRate = Util.SDK_INT < 23 ? CODEC_OPERATING_RATE_UNSET : getCodecOperatingRateV23(targetPlaybackSpeed, inputFormat, getStreamFormats()); if (codecOperatingRate <= assumedMinimumCodecOperatingRate) { codecOperatingRate = CODEC_OPERATING_RATE_UNSET; } Logger.w(TAG,"initCodec",codecInfo,crypto);//OMX.hisi.video.decoder.avc,null try { codecInitializingTimestamp = SystemClock.elapsedRealtime(); TraceUtil.beginSection("createCodec:" + codecName); MediaCodec codec = MediaCodec.createByCodecName(codecName); if (enableAsynchronousBufferQueueing && Util.SDK_INT >= 23) { codecAdapter = new AsynchronousMediaCodecAdapter.Factory( getTrackType(), forceAsyncQueueingSynchronizationWorkaround, enableSynchronizeCodecInteractionsWithQueueing) .createAdapter(codec); } else { codecAdapter = codecAdapterFactory.createAdapter(codec); } TraceUtil.endSection(); TraceUtil.beginSection("configureCodec 找到了解码器"); configureCodec(codecInfo, codecAdapter, inputFormat, crypto, codecOperatingRate); TraceUtil.endSection(); TraceUtil.beginSection("startCodec 启动解码器"); codecAdapter.start(); TraceUtil.endSection(); codecInitializedTimestamp = SystemClock.elapsedRealtime(); } catch (Exception e) { if (codecAdapter != null) { codecAdapter.release(); } throw e; } this.codec = codecAdapter; this.codecInfo = codecInfo; this.codecOperatingRate = codecOperatingRate; codecInputFormat = inputFormat; codecAdaptationWorkaroundMode = codecAdaptationWorkaroundMode(codecName); codecNeedsDiscardToSpsWorkaround = codecNeedsDiscardToSpsWorkaround(codecName, codecInputFormat); codecNeedsFlushWorkaround = codecNeedsFlushWorkaround(codecName); codecNeedsSosFlushWorkaround = codecNeedsSosFlushWorkaround(codecName); codecNeedsEosFlushWorkaround = codecNeedsEosFlushWorkaround(codecName); codecNeedsEosOutputExceptionWorkaround = codecNeedsEosOutputExceptionWorkaround(codecName); codecNeedsEosBufferTimestampWorkaround = codecNeedsEosBufferTimestampWorkaround(codecName); codecNeedsMonoChannelCountWorkaround = codecNeedsMonoChannelCountWorkaround(codecName, codecInputFormat); codecNeedsEosPropagation = codecNeedsEosPropagationWorkaround(codecInfo) || getCodecNeedsEosPropagation(); if ("c2.android.mp3.decoder".equals(codecInfo.name)) { c2Mp3TimestampTracker = new C2Mp3TimestampTracker(); } if (getState() == STATE_STARTED) { codecHotswapDeadlineMs = SystemClock.elapsedRealtime() + MAX_CODEC_HOTSWAP_TIME_MS; } decoderCounters.decoderInitCount++; long elapsed = codecInitializedTimestamp - codecInitializingTimestamp; onCodecInitialized(codecName, codecInitializedTimestamp, elapsed); }

结尾竟然还做了个回调

/**
 * Called when a {@link MediaCodec} has been created and configured.
 *
 * <p>The default implementation is a no-op.
 *
 * @param name The name of the codec that was initialized.
 * @param initializedTimestampMs {@link SystemClock#elapsedRealtime()} when initialization
 *     finished.
 * @param initializationDurationMs The time taken to initialize the codec in milliseconds.
 */
protected void onCodecInitialized(
    String name, long initializedTimestampMs, long initializationDurationMs) {
  // Do nothing.
}
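MediaCodecVideoRenderer 对它做了重写,大意是把解码器初始化事件分发出去,前面日志里的 videoDecoderInitialized 就来自这条链路(下面只是示意,细节以源码为准):

// 示意:视频 renderer 把解码器初始化事件分发给 AnalyticsListener
@Override
protected void onCodecInitialized(
    String name, long initializedTimestampMs, long initializationDurationMs) {
  eventDispatcher.decoderInitialized(name, initializedTimestampMs, initializationDurationMs);
  // 实际实现里还会根据解码器名字决定是否启用 setOutputSurface 相关的 workaround
}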

找到解码器后,回到 renderer.render()方法,就去循环消耗

while (drainOutputBuffer(positionUs, elapsedRealtimeUs) && shouldContinueRendering(renderStartTimeMs)) {} //消耗解码数据
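这个循环能跑多久,由 shouldContinueRendering() 决定,大意是限制单次 render() 里 drain/feed 的耗时,超时就先让出,等下一轮 doSomeWork 再继续(推测示意,以实际源码为准):

private boolean shouldContinueRendering(long renderStartTimeMs) {
  return renderTimeLimitMs == C.TIME_UNSET
      || SystemClock.elapsedRealtime() - renderStartTimeMs < renderTimeLimitMs;
}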

消耗

/** * @return Whether it may be possible to drain more output data.是否有可能消耗更多的输出数据。 * @throws ExoPlaybackException If an error occurs draining the output buffer. */ private boolean drainOutputBuffer(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException { if (!hasOutputBuffer()) { int outputIndex; if (codecNeedsEosOutputExceptionWorkaround && codecReceivedEos) { try { outputIndex = codec.dequeueOutputBufferIndex(outputBufferInfo); } catch (IllegalStateException e) { processEndOfStream(); if (outputStreamEnded) { // Release the codec, as it's in an error state. releaseCodec(); } return false; } } else { outputIndex = codec.dequeueOutputBufferIndex(outputBufferInfo); } if (outputIndex < 0) { if (outputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED /* (-2) */) { processOutputMediaFormatChanged(); return true; } // MediaCodec.INFO_TRY_AGAIN_LATER (-1) or unknown negative return value. if (codecNeedsEosPropagation && (inputStreamEnded || codecDrainState == DRAIN_STATE_WAIT_END_OF_STREAM)) { processEndOfStream(); } return false; } // We've dequeued a buffer. if (shouldSkipAdaptationWorkaroundOutputBuffer) { shouldSkipAdaptationWorkaroundOutputBuffer = false; codec.releaseOutputBuffer(outputIndex, false); return true; } else if (outputBufferInfo.size == 0 && (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) { // The dequeued buffer indicates the end of the stream. Process it immediately. processEndOfStream(); return false; } this.outputIndex = outputIndex; outputBuffer = codec.getOutputBuffer(outputIndex); // The dequeued buffer is a media buffer. Do some initial setup. // It will be processed by calling processOutputBuffer (possibly multiple times). if (outputBuffer != null) { outputBuffer.position(outputBufferInfo.offset); outputBuffer.limit(outputBufferInfo.offset + outputBufferInfo.size); } if (codecNeedsEosBufferTimestampWorkaround && outputBufferInfo.presentationTimeUs == 0 && (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0 && largestQueuedPresentationTimeUs != C.TIME_UNSET) { outputBufferInfo.presentationTimeUs = largestQueuedPresentationTimeUs; } isDecodeOnlyOutputBuffer = isDecodeOnlyBuffer(outputBufferInfo.presentationTimeUs); isLastOutputBuffer = lastBufferInStreamPresentationTimeUs == outputBufferInfo.presentationTimeUs; updateOutputFormatForTime(outputBufferInfo.presentationTimeUs); } boolean processedOutputBuffer; if (codecNeedsEosOutputExceptionWorkaround && codecReceivedEos) { try { processedOutputBuffer = processOutputBuffer( positionUs, elapsedRealtimeUs, codec, outputBuffer, outputIndex, outputBufferInfo.flags, /* sampleCount= */ 1, outputBufferInfo.presentationTimeUs, isDecodeOnlyOutputBuffer, isLastOutputBuffer, outputFormat); } catch (IllegalStateException e) { processEndOfStream(); if (outputStreamEnded) { // Release the codec, as it's in an error state. releaseCodec(); } return false; } } else { processedOutputBuffer = processOutputBuffer( positionUs, elapsedRealtimeUs, codec, outputBuffer, outputIndex, outputBufferInfo.flags, /* sampleCount= */ 1, outputBufferInfo.presentationTimeUs, isDecodeOnlyOutputBuffer, isLastOutputBuffer, outputFormat); } if (processedOutputBuffer) { onProcessedOutputBuffer(outputBufferInfo.presentationTimeUs); boolean isEndOfStream = (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0; resetOutputBuffer(); if (!isEndOfStream) { return true; } processEndOfStream(); } return false; }

这个操作也是很眼熟了,先拿索引,然后按索引操作

索引

/** * Returns the next available output buffer index from the underlying {@link MediaCodec}. If the * next available output is a MediaFormat change, it will return {@link * MediaCodec#INFO_OUTPUT_FORMAT_CHANGED} and you should call {@link #getOutputFormat()} to get * the format. If there is no available output, this method will return {@link * MediaCodec#INFO_TRY_AGAIN_LATER}. * 从基础MediaCodec返回下一个可用的输出缓冲区索引。 * 如果下一个可用的输出是MediaFormat更改,它将返回MediaCodec.INFO_OUTPUT_FORMAT_CHANGED,并且您应该调用getOutputFormat()以获取格式。 * 如果没有可用的输出,则此方法将返回MediaCodec.INFO_TRY_AGAIN_LATER。 * * @throws IllegalStateException If the underlying {@link MediaCodec} raised an error. */ int dequeueOutputBufferIndex(MediaCodec.BufferInfo bufferInfo);

buffer 出列

/** * Returns the buffer to the {@link MediaCodec}. If the {@link MediaCodec} was configured with an * output surface, setting {@code render} to {@code true} will first send the buffer to the output * surface. The surface will release the buffer back to the codec once it is no longer * used/displayed.将缓冲区返回给MediaCodec。 * 如果MediaCodec配置了输出表面,则将render设置为true会首先将缓冲区发送到输出表面。 一旦不再使用/显示,则表面会将缓冲区释放回编解码器。 * * @see MediaCodec#releaseOutputBuffer(int, boolean) */ void releaseOutputBuffer(int index, boolean render);

通过索引取buffer

outputBuffer = codec.getOutputBuffer(outputIndex); //返回出列输出缓冲区索引的只读ByteBuffer。

设置缓冲区的读取范围(position / limit)

// The dequeued buffer is a media buffer. Do some initial setup.
// It will be processed by calling processOutputBuffer (possibly multiple times).
// 出队的缓冲区是媒体缓冲区。做一些初始设置,之后会通过调用 processOutputBuffer(可能多次)进行处理。
if (outputBuffer != null) {
  outputBuffer.position(outputBufferInfo.offset);
  outputBuffer.limit(outputBufferInfo.offset + outputBufferInfo.size);
}

处理

/** * Processes an output media buffer. * 处理输出媒体缓冲区。 * 当将新的ByteBuffer传递给此方法时,其位置和限制描述了要处理的数据。 * 返回值指示缓冲区是否已满。 如果返回true,则对该方法的下一次调用将接收要处理的新缓冲区。 如果返回false,则将相同的缓冲区传递给下一个调用。 * 此方法的实现可以自由修改缓冲区,并且可以假定在连续调用之间不会从外部修改缓冲区。 因此,实现可以例如修改缓冲区的位置,以跟踪其已处理的数据量。 * 请注意,在调用onPositionReset(long,boolean)之后,对此方法的第一次调用将始终接收要处理的新ByteBuffer。 * * <p>When a new {@link ByteBuffer} is passed to this method its position and limit delineate the * data to be processed. The return value indicates whether the buffer was processed in full. If * true is returned then the next call to this method will receive a new buffer to be processed. * If false is returned then the same buffer will be passed to the next call. An implementation of * this method is free to modify the buffer and can assume that the buffer will not be externally * modified between successive calls. Hence an implementation can, for example, modify the * buffer's position to keep track of how much of the data it has processed. * * <p>Note that the first call to this method following a call to {@link #onPositionReset(long, * boolean)} will always receive a new {@link ByteBuffer} to be processed. * * @param positionUs The current media time in microseconds, measured at the start of the current * iteration of the rendering loop. * @param elapsedRealtimeUs {@link SystemClock#elapsedRealtime()} in microseconds, measured at the * start of the current iteration of the rendering loop. * @param codec The {@link MediaCodecAdapter} instance, or null in bypass mode were no codec is * used. * @param buffer The output buffer to process, or null if the buffer data is not made available to * the application layer (see {@link MediaCodec#getOutputBuffer(int)}). This {@code buffer} * can only be null for video data. Note that the buffer data can still be rendered in this * case by using the {@code bufferIndex}. * @param bufferIndex The index of the output buffer. * @param bufferFlags The flags attached to the output buffer. * @param sampleCount The number of samples extracted from the sample queue in the buffer. This * allows handling multiple samples as a batch for efficiency. * @param bufferPresentationTimeUs The presentation time of the output buffer in microseconds. * @param isDecodeOnlyBuffer Whether the buffer was marked with {@link C#BUFFER_FLAG_DECODE_ONLY} * by the source. * @param isLastBuffer Whether the buffer is the last sample of the current stream. * @param format The {@link Format} associated with the buffer. * @return Whether the output buffer was fully processed (for example, rendered or skipped). * @throws ExoPlaybackException If an error occurs processing the output buffer. */ protected abstract boolean processOutputBuffer( long positionUs, long elapsedRealtimeUs, @Nullable MediaCodecAdapter codec, @Nullable ByteBuffer buffer, int bufferIndex, int bufferFlags, int sampleCount, long bufferPresentationTimeUs, boolean isDecodeOnlyBuffer, boolean isLastBuffer, Format format) throws ExoPlaybackException;

同样的,这个函数由 video和audio分别实现。。。。

看不动了。。不动了。。。动了。。。。

2021年05月20日21:01:16

同样的,先看视频重写

@Override protected boolean processOutputBuffer( long positionUs, long elapsedRealtimeUs, @Nullable MediaCodecAdapter codec, @Nullable ByteBuffer buffer, int bufferIndex, int bufferFlags, int sampleCount, long bufferPresentationTimeUs, boolean isDecodeOnlyBuffer, boolean isLastBuffer, Format format) throws ExoPlaybackException { Assertions.checkNotNull(codec); // Can not render video without codec没有编解码器就无法渲染视频 if (initialPositionUs == C.TIME_UNSET) { //表示未设置或未知的时间或持续时间的特殊常数。 适用于任何时基。 initialPositionUs = positionUs; } if (bufferPresentationTimeUs != lastBufferPresentationTimeUs) { frameReleaseHelper.onNextFrame(bufferPresentationTimeUs);//在跳过,拖放或渲染帧之前,渲染器为每个帧调用。 this.lastBufferPresentationTimeUs = bufferPresentationTimeUs; } long outputStreamOffsetUs = getOutputStreamOffsetUs();//获取相对于媒体的播放位置 long presentationTimeUs = bufferPresentationTimeUs - outputStreamOffsetUs; if (isDecodeOnlyBuffer && !isLastBuffer) { skipOutputBuffer(codec, bufferIndex, presentationTimeUs); return true; } // Note: Use of double rather than float is intentional for accuracy in the calculations below. double playbackSpeed = getPlaybackSpeed(); boolean isStarted = getState() == STATE_STARTED; long elapsedRealtimeNowUs = SystemClock.elapsedRealtime() * 1000; // Calculate how early we are. In other words, the realtime duration that needs to elapse whilst // the renderer is started before the frame should be rendered. A negative value means that // we're already late. long earlyUs = (long) ((bufferPresentationTimeUs - positionUs) / playbackSpeed); if (isStarted) { // Account for the elapsed time since the start of this iteration of the rendering loop. earlyUs -= elapsedRealtimeNowUs - elapsedRealtimeUs; } if (surface == dummySurface) { // Skip frames in sync with playback, so we'll be at the right frame if the mode changes. if (isBufferLate(earlyUs)) { skipOutputBuffer(codec, bufferIndex, presentationTimeUs); updateVideoFrameProcessingOffsetCounters(earlyUs); return true; } return false; } long elapsedSinceLastRenderUs = elapsedRealtimeNowUs - lastRenderRealtimeUs; boolean shouldRenderFirstFrame = !renderedFirstFrameAfterEnable ? (isStarted || mayRenderFirstFrameAfterEnableIfNotStarted) : !renderedFirstFrameAfterReset; // Don't force output until we joined and the position reached the current stream. boolean forceRenderOutputBuffer = joiningDeadlineMs == C.TIME_UNSET && positionUs >= outputStreamOffsetUs && (shouldRenderFirstFrame || (isStarted && shouldForceRenderOutputBuffer(earlyUs, elapsedSinceLastRenderUs))); if (forceRenderOutputBuffer) { long releaseTimeNs = System.nanoTime(); notifyFrameMetadataListener(presentationTimeUs, releaseTimeNs, format); if (Util.SDK_INT >= 21) { renderOutputBufferV21(codec, bufferIndex, presentationTimeUs, releaseTimeNs); } else { renderOutputBuffer(codec, bufferIndex, presentationTimeUs); } updateVideoFrameProcessingOffsetCounters(earlyUs); return true; } if (!isStarted || positionUs == initialPositionUs) { return false; } // Compute the buffer's desired release time in nanoseconds. long systemTimeNs = System.nanoTime(); long unadjustedFrameReleaseTimeNs = systemTimeNs + (earlyUs * 1000); // Apply a timestamp adjustment, if there is one. 
long adjustedReleaseTimeNs = frameReleaseHelper.adjustReleaseTime(unadjustedFrameReleaseTimeNs); earlyUs = (adjustedReleaseTimeNs - systemTimeNs) / 1000; boolean treatDroppedBuffersAsSkipped = joiningDeadlineMs != C.TIME_UNSET; if (shouldDropBuffersToKeyframe(earlyUs, elapsedRealtimeUs, isLastBuffer) && maybeDropBuffersToKeyframe(positionUs, treatDroppedBuffersAsSkipped)) { return false; } else if (shouldDropOutputBuffer(earlyUs, elapsedRealtimeUs, isLastBuffer)) { if (treatDroppedBuffersAsSkipped) { skipOutputBuffer(codec, bufferIndex, presentationTimeUs); } else { dropOutputBuffer(codec, bufferIndex, presentationTimeUs); } updateVideoFrameProcessingOffsetCounters(earlyUs); return true; } if (Util.SDK_INT >= 21) { // Let the underlying framework time the release. if (earlyUs < 50000) { notifyFrameMetadataListener(presentationTimeUs, adjustedReleaseTimeNs, format);//通知监听 renderOutputBufferV21(codec, bufferIndex, presentationTimeUs, adjustedReleaseTimeNs);//使用指定的索引渲染输出缓冲区 updateVideoFrameProcessingOffsetCounters(earlyUs); return true; } } else { // We need to time the release ourselves. if (earlyUs < 30000) { if (earlyUs > 11000) { // We're a little too early to render the frame. Sleep until the frame can be rendered. // Note: The 11ms threshold was chosen fairly arbitrarily. try { // Subtracting 10000 rather than 11000 ensures the sleep time will be at least 1ms. Thread.sleep((earlyUs - 10000) / 1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); return false; } } notifyFrameMetadataListener(presentationTimeUs, adjustedReleaseTimeNs, format); renderOutputBuffer(codec, bufferIndex, presentationTimeUs); updateVideoFrameProcessingOffsetCounters(earlyUs); return true; } } // We're either not playing, or it's not time to render the frame yet. return false; }

按索引渲染输出缓冲区

/**
 * Renders the output buffer with the specified index. This method is only called if the platform
 * API version of the device is 21 or later.
 * 使用指定的索引渲染输出缓冲区。仅当设备的平台 API 版本为 21 或更高版本时,才调用此方法。
 *
 * 参数:
 * codec – 拥有输出缓冲区的编解码器。
 * index – 要渲染的输出缓冲区的索引。
 * presentationTimeUs – 输出缓冲区的显示时间,以微秒为单位。
 * releaseTimeNs – 应显示帧的墙上时钟时间(以纳秒为单位)。
 *
 * @param codec The codec that owns the output buffer.
 * @param index The index of the output buffer to drop.
 * @param presentationTimeUs The presentation time of the output buffer, in microseconds.
 * @param releaseTimeNs The wallclock time at which the frame should be displayed, in nanoseconds.
 */
@RequiresApi(21)
protected void renderOutputBufferV21(
    MediaCodecAdapter codec, int index, long presentationTimeUs, long releaseTimeNs) {
  maybeNotifyVideoSizeChanged();
  TraceUtil.beginSection("releaseOutputBuffer");
  codec.releaseOutputBuffer(index, releaseTimeNs);
  TraceUtil.endSection();
  lastRenderRealtimeUs = SystemClock.elapsedRealtime() * 1000;
  decoderCounters.renderedOutputBufferCount++;
  consecutiveDroppedFrameCount = 0;
  maybeNotifyRenderedFirstFrame();
}

先检查 video 的分辨率、旋转角度等是否发生变化,变了就通知出去

private void maybeNotifyVideoSizeChanged() {
  if ((currentWidth != Format.NO_VALUE || currentHeight != Format.NO_VALUE)
      && (reportedWidth != currentWidth
          || reportedHeight != currentHeight
          || reportedUnappliedRotationDegrees != currentUnappliedRotationDegrees
          || reportedPixelWidthHeightRatio != currentPixelWidthHeightRatio)) {
    eventDispatcher.videoSizeChanged(
        currentWidth, currentHeight, currentUnappliedRotationDegrees, currentPixelWidthHeightRatio);
    reportedWidth = currentWidth;
    reportedHeight = currentHeight;
    reportedUnappliedRotationDegrees = currentUnappliedRotationDegrees;
    reportedPixelWidthHeightRatio = currentPixelWidthHeightRatio;
  }
}

Outputting to the surface:

/** * Updates the output buffer's surface timestamp and sends it to the {@link MediaCodec} to render * it on the output surface. If the {@link MediaCodec} is not configured with an output surface, * this call will simply return the buffer to the {@link MediaCodec}. * 更新输出缓冲区的表面时间戳,并将其发送到MediaCodec,以将其呈现在输出surface上。 * 如果没有为MediaCodec配置输出surface,则此调用将简单地将缓冲区返回给MediaCodec。 * * @see MediaCodec#releaseOutputBuffer(int, long) */ @RequiresApi(21) void releaseOutputBuffer(int index, long renderTimeStampNs);

Once the stream has been handed to the surface, the rest is the surface's job.

This codec is actually a codecAdapter, and it is initialized in either asynchronous or synchronous mode:

codecInitializingTimestamp = SystemClock.elapsedRealtime(); TraceUtil.beginSection("createCodec:" + codecName); MediaCodec codec = MediaCodec.createByCodecName(codecName); if (enableAsynchronousBufferQueueing && Util.SDK_INT >= 23) { codecAdapter = new AsynchronousMediaCodecAdapter.Factory( getTrackType(), forceAsyncQueueingSynchronizationWorkaround, enableSynchronizeCodecInteractionsWithQueueing) .createAdapter(codec); } else { codecAdapter = codecAdapterFactory.createAdapter(codec); } TraceUtil.endSection();

Toggled via enableAsynchronousBufferQueueing.

The synchronous codecAdapterFactory comes from the video or audio renderer subclass implementation.

For video, for example, it is

MediaCodecAdapter.Factory.DEFAULT ->

/** * Abstracts {@link MediaCodec} operations. * 摘要MediaCodec操作。 * MediaCodecAdapter提供了与MediaCodec交互的通用接口,而与MediaCodec的运行模式无关。 * * <p>{@code MediaCodecAdapter} offers a common interface to interact with a {@link MediaCodec} * regardless of the mode the {@link MediaCodec} is operating in. */ public interface MediaCodecAdapter { /** A factory for {@link MediaCodecAdapter} instances. */ interface Factory { /** Default factory used in most cases. */ Factory DEFAULT = new SynchronousMediaCodecAdapter.Factory(); /** Creates an instance wrapping the provided {@link MediaCodec} instance. */ MediaCodecAdapter createAdapter(MediaCodec codec); }

。。。

}

In the end it is still just a new, with the codec passed in:

/** * A {@link MediaCodecAdapter} that operates the underlying {@link MediaCodec} in synchronous mode. */ public final class SynchronousMediaCodecAdapter implements MediaCodecAdapter { /** A factory for {@link SynchronousMediaCodecAdapter} instances. */ public static final class Factory implements MediaCodecAdapter.Factory { @Override public MediaCodecAdapter createAdapter(MediaCodec codec) { return new SynchronousMediaCodecAdapter(codec); } } private final MediaCodec codec; @Nullable private ByteBuffer[] inputByteBuffers; @Nullable private ByteBuffer[] outputByteBuffers; private SynchronousMediaCodecAdapter(MediaCodec mediaCodec) { this.codec = mediaCodec; }

。。。

}
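As a trivial usage sketch (the codec name is a placeholder; in ExoPlayer it comes from MediaCodecSelector, and this is not how the renderer itself is wired up):

// Sketch: wrap a freshly created MediaCodec in the default synchronous adapter.
MediaCodec codec = MediaCodec.createByCodecName(codecName); // may throw IOException
MediaCodecAdapter adapter = MediaCodecAdapter.Factory.DEFAULT.createAdapter(codec);
// configure/start then go through the adapter, followed by the usual feed/drain loop.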

Now the asynchronous one:

codecAdapter = new AsynchronousMediaCodecAdapter.Factory( getTrackType(), forceAsyncQueueingSynchronizationWorkaround, enableSynchronizeCodecInteractionsWithQueueing) .createAdapter(codec);

The factory:

/** * A {@link MediaCodecAdapter} that operates the underlying {@link MediaCodec} in asynchronous mode, * routes {@link MediaCodec.Callback} callbacks on a dedicated thread that is managed internally, * and queues input buffers asynchronously.一个MediaCodecAdapter,它以异步模式操作基础MediaCodec, * 在内部管理的专用线程上路由MediaCodec.Callback回调,并异步对输入缓冲区进行排队。 */ @RequiresApi(23) /* package */ final class AsynchronousMediaCodecAdapter implements MediaCodecAdapter { /** A factory for {@link AsynchronousMediaCodecAdapter} instances. */ public static final class Factory implements MediaCodecAdapter.Factory { private final Supplier<HandlerThread> callbackThreadSupplier; private final Supplier<HandlerThread> queueingThreadSupplier; private final boolean forceQueueingSynchronizationWorkaround; private final boolean synchronizeCodecInteractionsWithQueueing; /** Creates a factory for the specified {@code trackType}. */ public Factory(int trackType) { this( trackType, /* forceQueueingSynchronizationWorkaround= */ false, /* synchronizeCodecInteractionsWithQueueing= */ false); } /** * Creates an factory for {@link AsynchronousMediaCodecAdapter} instances. * 为AsynchronousMediaCodecAdapter实例创建一个工厂。 * * 参数: * trackType – C.TRACK_TYPE_AUDIO或C.TRACK_TYPE_VIDEO之一。 用于相应地标记内螺纹。 * forceQueueingSynchronizationWorkaround –默认情况下是启用队列同步解决方法还是仅对预定义设备启用队列同步解决方法。 * syncnizeCodecInteractionsWithQueueing –适配器是否应将MediaCodec交互与异步缓冲区队列同步。 * 设置为true时,编解码器交互将等待,直到所有等待排队的输入缓冲区都将提交给MediaCodec。 * * @param trackType One of {@link C#TRACK_TYPE_AUDIO} or {@link C#TRACK_TYPE_VIDEO}. Used for * labelling the internal thread accordingly. * @param forceQueueingSynchronizationWorkaround Whether the queueing synchronization workaround * will be enabled by default or only for the predefined devices. * @param synchronizeCodecInteractionsWithQueueing Whether the adapter should synchronize {@link * MediaCodec} interactions with asynchronous buffer queueing. When {@code true}, codec * interactions will wait until all input buffers pending queueing wil be submitted to the * {@link MediaCodec}. */ public Factory( int trackType, boolean forceQueueingSynchronizationWorkaround, boolean synchronizeCodecInteractionsWithQueueing) { this( /* callbackThreadSupplier= */ () -> new HandlerThread(createCallbackThreadLabel(trackType)),//ExoPlayer:MediaCodecAsyncAdapter:Video/Audio /* queueingThreadSupplier= */ () -> new HandlerThread(createQueueingThreadLabel(trackType)),//ExoPlayer:MediaCodecQueueingThread:Video/Audio forceQueueingSynchronizationWorkaround, synchronizeCodecInteractionsWithQueueing); }

@VisibleForTesting /* package */ Factory( Supplier<HandlerThread> callbackThreadSupplier, Supplier<HandlerThread> queueingThreadSupplier, boolean forceQueueingSynchronizationWorkaround, boolean synchronizeCodecInteractionsWithQueueing) { this.callbackThreadSupplier = callbackThreadSupplier; this.queueingThreadSupplier = queueingThreadSupplier; this.forceQueueingSynchronizationWorkaround = forceQueueingSynchronizationWorkaround; this.synchronizeCodecInteractionsWithQueueing = synchronizeCodecInteractionsWithQueueing; }

@Override public AsynchronousMediaCodecAdapter createAdapter(MediaCodec codec) { return new AsynchronousMediaCodecAdapter( codec, callbackThreadSupplier.get(), queueingThreadSupplier.get(), forceQueueingSynchronizationWorkaround, synchronizeCodecInteractionsWithQueueing); }

。。。

}
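At the platform level, the asynchronous mode wrapped by this adapter boils down to MediaCodec.setCallback() plus dedicated HandlerThreads. A simplified sketch of the idea (plain Android API, API 23+; assumes codec, mediaFormat and surface already exist; no error handling; not the ExoPlayer implementation):

// Sketch: raw MediaCodec asynchronous mode.
HandlerThread callbackThread = new HandlerThread("ExoPlayer:MediaCodecAsyncAdapter:Video");
callbackThread.start();
Handler callbackHandler = new Handler(callbackThread.getLooper());

codec.setCallback(new MediaCodec.Callback() {
  @Override public void onInputBufferAvailable(MediaCodec codec, int index) {
    // an input buffer slot is free; queue the next sample here
  }
  @Override public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info) {
    // a decoded buffer is ready; render or release it here
  }
  @Override public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {}
  @Override public void onError(MediaCodec codec, MediaCodec.CodecException e) {}
}, callbackHandler);

codec.configure(mediaFormat, surface, /* crypto= */ null, /* flags= */ 0);
codec.start();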

Supplier comes from the Guava package pulled in via Gradle:

/**
 * A class that can supply objects of a single type; a pre-Java-8 version of {@link
 * java.util.function.Supplier java.util.function.Supplier}. Semantically, this could be a factory,
 * generator, builder, closure, or something else entirely. No guarantees are implied by this
 * interface.
 *
 * <p>The {@link Suppliers} class provides common suppliers and related utilities.
 *
 * <p>See the Guava User Guide article on <a href=
 * "https://github.com/google/guava/wiki/FunctionalExplained">the use of functional types</a>.
 *
 * <h3>For Java 8+ users</h3>
 *
 * <p>This interface is now a legacy type. Use {@code java.util.function.Supplier} (or the
 * appropriate primitive specialization such as {@code IntSupplier}) instead whenever possible.
 * Otherwise, at least reduce <i>explicit</i> dependencies on this type by using lambda expressions
 * or method references instead of classes, leaving your code easier to migrate in the future.
 *
 * <p>To use an existing supplier instance (say, named {@code supplier}) in a context where the
 * <i>other type</i> of supplier is expected, use the method reference {@code supplier::get}. A
 * future version of {@code com.google.common.base.Supplier} will be made to <i>extend</i> {@code
 * java.util.function.Supplier}, making conversion code necessary only in one direction. At that
 * time, this interface will be officially discouraged.
 *
 * @author Harry Heymann
 * @since 2.0
 */
@GwtCompatible
public interface Supplier<T> {
  /**
   * Retrieves an instance of the appropriate type. The returned object may or may not be a new
   * instance, depending on the implementation.
   *
   * @return an instance of the appropriate type
   */
  @CanIgnoreReturnValue
  T get();
}
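What the factory does with it is simply deferring thread creation until the adapter is actually built, for example:

// Sketch: nothing is created at this point...
Supplier<HandlerThread> callbackThreadSupplier =
    () -> new HandlerThread("ExoPlayer:MediaCodecAsyncAdapter:Video");
// ...the HandlerThread only comes into existence when get() is called.
HandlerThread callbackThread = callbackThreadSupplier.get();
callbackThread.start();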


The asynchronous decoding adapter has four states:

@Documented
@Retention(RetentionPolicy.SOURCE)
@IntDef({STATE_CREATED, STATE_CONFIGURED, STATE_STARTED, STATE_SHUT_DOWN})
private @interface State {}

The rest is basically the same, since both implement the same MediaCodecAdapter interface.

Operations on the codec, whether synchronous or asynchronous, all end up calling the platform's handling (see the sketch after this list):

codec.configure(mediaFormat, surface, crypto, flags);

codec.start();

codec.dequeueInputBuffer(0);

index = codec.dequeueOutputBuffer(bufferInfo, 0);

codec.queueInputBuffer(index, offset, size, presentationTimeUs, flags);

codec.queueSecureInputBuffer( index, offset, info.getFrameworkCryptoInfo(), presentationTimeUs, flags);

codec.releaseOutputBuffer(index, render);

codec.release();
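Strung together, those calls form the classic synchronous MediaCodec loop. A minimal, self-contained sketch (readSample and getSampleTimeUs are hypothetical helpers standing in for an extractor; no error handling; not ExoPlayer code):

// Sketch: the bare-bones synchronous MediaCodec loop that the adapters wrap.
MediaCodec codec = MediaCodec.createByCodecName(codecName);
codec.configure(mediaFormat, surface, /* crypto= */ null, /* flags= */ 0);
codec.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean inputDone = false;
boolean outputDone = false;
while (!outputDone) {
  if (!inputDone) {
    int inIndex = codec.dequeueInputBuffer(/* timeoutUs= */ 0);
    if (inIndex >= 0) {
      ByteBuffer inBuffer = codec.getInputBuffer(inIndex);
      int size = readSample(inBuffer); // hypothetical: fill the buffer from the extractor
      if (size < 0) {
        codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
        inputDone = true;
      } else {
        codec.queueInputBuffer(inIndex, 0, size, getSampleTimeUs(), 0);
      }
    }
  }
  int outIndex = codec.dequeueOutputBuffer(info, /* timeoutUs= */ 0);
  if (outIndex >= 0) {
    // With a video decoder and a Surface, render == true pushes the frame to the Surface.
    codec.releaseOutputBuffer(outIndex, /* render= */ true);
    outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
  }
}
codec.stop();
codec.release();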

The platform APIs:

/**
 * Configures a component.
 *
 * @param format The format of the input data (decoder) or the desired
 *               format of the output data (encoder). Passing {@code null}
 *               as {@code format} is equivalent to passing an
 *               {@link MediaFormat#MediaFormat an empty mediaformat}.
 * @param surface Specify a surface on which to render the output of this
 *                decoder. Pass {@code null} as {@code surface} if the
 *                codec does not generate raw video output (e.g. not a video
 *                decoder) and/or if you want to configure the codec for
 *                {@link ByteBuffer} output.
 * @param crypto  Specify a crypto object to facilitate secure decryption
 *                of the media data. Pass {@code null} as {@code crypto} for
 *                non-secure codecs.
 *                Please note that {@link MediaCodec} does NOT take ownership
 *                of the {@link MediaCrypto} object; it is the application's
 *                responsibility to properly cleanup the {@link MediaCrypto} object
 *                when not in use.
 * @param flags   Specify {@link #CONFIGURE_FLAG_ENCODE} to configure the
 *                component as an encoder.
 * @throws IllegalArgumentException if the surface has been released (or is invalid),
 * or the format is unacceptable (e.g. missing a mandatory key),
 * or the flags are not set properly
 * (e.g. missing {@link #CONFIGURE_FLAG_ENCODE} for an encoder).
 * @throws IllegalStateException if not in the Uninitialized state.
 * @throws CryptoException upon DRM error.
 * @throws CodecException upon codec error.
 */
public void configure(
        @Nullable MediaFormat format,
        @Nullable Surface surface, @Nullable MediaCrypto crypto,
        @ConfigureFlag int flags) {
    configure(format, surface, crypto, null, flags);
}
/**
 * Returns the index of an input buffer to be filled with valid data
 * or -1 if no such buffer is currently available.
 * This method will return immediately if timeoutUs == 0, wait indefinitely
 * for the availability of an input buffer if timeoutUs &lt; 0 or wait up
 * to "timeoutUs" microseconds if timeoutUs &gt; 0.
 * @param timeoutUs The timeout in microseconds, a negative timeout indicates "infinite".
 * @throws IllegalStateException if not in the Executing state,
 *         or codec is configured in asynchronous mode.
 * @throws MediaCodec.CodecException upon codec error.
 */
public final int dequeueInputBuffer(long timeoutUs) {
    synchronized (mBufferLock) {
        if (mBufferMode == BUFFER_MODE_BLOCK) {
            throw new IncompatibleWithBlockModelException("dequeueInputBuffer() "
                    + "is not compatible with CONFIGURE_FLAG_USE_BLOCK_MODEL. "
                    + "Please use MediaCodec.Callback objectes to get input buffer slots.");
        }
    }
    int res = native_dequeueInputBuffer(timeoutUs);
    if (res >= 0) {
        synchronized(mBufferLock) {
            validateInputByteBuffer(mCachedInputBuffers, res);
        }
    }
    return res;
}
private native final int native_dequeueInputBuffer(long timeoutUs);
/**
 * After filling a range of the input buffer at the specified index
 * submit it to the component. Once an input buffer is queued to
 * the codec, it MUST NOT be used until it is later retrieved by
 * {@link #getInputBuffer} in response to a {@link #dequeueInputBuffer}
 * return value or a {@link Callback#onInputBufferAvailable}
 * callback.
 * <p>
 * Many decoders require the actual compressed data stream to be
 * preceded by "codec specific data", i.e. setup data used to initialize
 * the codec such as PPS/SPS in the case of AVC video or code tables
 * in the case of vorbis audio.
 * The class {@link android.media.MediaExtractor} provides codec
 * specific data as part of
 * the returned track format in entries named "csd-0", "csd-1" ...
 * <p>
 * These buffers can be submitted directly after {@link #start} or
 * {@link #flush} by specifying the flag {@link
 * #BUFFER_FLAG_CODEC_CONFIG}.  However, if you configure the
 * codec with a {@link MediaFormat} containing these keys, they
 * will be automatically submitted by MediaCodec directly after
 * start.  Therefore, the use of {@link
 * #BUFFER_FLAG_CODEC_CONFIG} flag is discouraged and is
 * recommended only for advanced users.
 * <p>
 * To indicate that this is the final piece of input data (or rather that
 * no more input data follows unless the decoder is subsequently flushed)
 * specify the flag {@link #BUFFER_FLAG_END_OF_STREAM}.
 * <p class=note>
 * <strong>Note:</strong> Prior to {@link android.os.Build.VERSION_CODES#M},
 * {@code presentationTimeUs} was not propagated to the frame timestamp of (rendered)
 * Surface output buffers, and the resulting frame timestamp was undefined.
 * Use {@link #releaseOutputBuffer(int, long)} to ensure a specific frame timestamp is set.
 * Similarly, since frame timestamps can be used by the destination surface for rendering
 * synchronization, <strong>care must be taken to normalize presentationTimeUs so as to not be
 * mistaken for a system time. (See {@linkplain #releaseOutputBuffer(int, long)
 * SurfaceView specifics}).</strong>
 *
 * @param index The index of a client-owned input buffer previously returned
 *              in a call to {@link #dequeueInputBuffer}.
 * @param offset The byte offset into the input buffer at which the data starts.
 * @param size The number of bytes of valid input data.
 * @param presentationTimeUs The presentation timestamp in microseconds for this
 *                           buffer. This is normally the media time at which this
 *                           buffer should be presented (rendered). When using an output
 *                           surface, this will be propagated as the {@link
 *                           SurfaceTexture#getTimestamp timestamp} for the frame (after
 *                           conversion to nanoseconds).
 * @param flags A bitmask of flags
 *              {@link #BUFFER_FLAG_CODEC_CONFIG} and {@link #BUFFER_FLAG_END_OF_STREAM}.
 *              While not prohibited, most codecs do not use the
 *              {@link #BUFFER_FLAG_KEY_FRAME} flag for input buffers.
 * @throws IllegalStateException if not in the Executing state.
 * @throws MediaCodec.CodecException upon codec error.
 * @throws CryptoException if a crypto object has been specified in
 *         {@link #configure}
 */
public final void queueInputBuffer(
        int index,
        int offset, int size, long presentationTimeUs, int flags)
    throws CryptoException {
    synchronized(mBufferLock) {
        if (mBufferMode == BUFFER_MODE_BLOCK) {
            throw new IncompatibleWithBlockModelException("queueInputBuffer() "
                    + "is not compatible with CONFIGURE_FLAG_USE_BLOCK_MODEL. "
                    + "Please use getQueueRequest() to queue buffers");
        }
        invalidateByteBuffer(mCachedInputBuffers, index);
        mDequeuedInputBuffers.remove(index);
    }
    try {
        native_queueInputBuffer(
                index, offset, size, presentationTimeUs, flags);
    } catch (CryptoException | IllegalStateException e) {
        revalidateByteBuffer(mCachedInputBuffers, index);
        throw e;
    }
}

private native final void native_queueInputBuffer(
        int index,
        int offset, int size, long presentationTimeUs, int flags)
    throws CryptoException;

And with that, the stream gets consumed..... in other words, the drain side....

After a drain pass, still inside the same while loop, a time check decides whether to keep looping:

private boolean shouldContinueRendering(long renderStartTimeMs) {
  return renderTimeLimitMs == C.TIME_UNSET
      || SystemClock.elapsedRealtime() - renderStartTimeMs < renderTimeLimitMs;
}
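Putting drain and feed together, the structure inside MediaCodecRenderer.render() is roughly the following (simplified sketch, not the exact source):

// Sketch: one render() pass keeps draining output and feeding input until
// there is nothing left to do or the time budget runs out.
long renderStartTimeMs = SystemClock.elapsedRealtime();
while (drainOutputBuffer(positionUs, elapsedRealtimeUs)
    && shouldContinueRendering(renderStartTimeMs)) {}
while (feedInputBuffer() && shouldContinueRendering(renderStartTimeMs)) {}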

Correspondingly, on the input side, feeding data uses the same time check to decide whether to continue; the time limit is the same as for draining.

/** * @return Whether it may be possible to feed more input data.是否有可能提供更多的输入数据。 * @throws ExoPlaybackException If an error occurs feeding the input buffer. */ private boolean feedInputBuffer() throws ExoPlaybackException { if (codec == null || codecDrainState == DRAIN_STATE_WAIT_END_OF_STREAM || inputStreamEnded) { return false; } if (inputIndex < 0) { inputIndex = codec.dequeueInputBufferIndex(); if (inputIndex < 0) { return false; } buffer.data = codec.getInputBuffer(inputIndex); buffer.clear(); } if (codecDrainState == DRAIN_STATE_SIGNAL_END_OF_STREAM) { // We need to re-initialize the codec. Send an end of stream signal to the existing codec so // that it outputs any remaining buffers before we release it. if (codecNeedsEosPropagation) { // Do nothing. } else { codecReceivedEos = true; codec.queueInputBuffer(inputIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM); resetInputBuffer(); } codecDrainState = DRAIN_STATE_WAIT_END_OF_STREAM; return false; } if (codecNeedsAdaptationWorkaroundBuffer) { codecNeedsAdaptationWorkaroundBuffer = false; buffer.data.put(ADAPTATION_WORKAROUND_BUFFER); codec.queueInputBuffer(inputIndex, 0, ADAPTATION_WORKAROUND_BUFFER.length, 0, 0); resetInputBuffer(); codecReceivedBuffers = true; return true; } // For adaptive reconfiguration, decoders expect all reconfiguration data to be supplied at // the start of the buffer that also contains the first frame in the new format. if (codecReconfigurationState == RECONFIGURATION_STATE_WRITE_PENDING) { for (int i = 0; i < codecInputFormat.initializationData.size(); i++) { byte[] data = codecInputFormat.initializationData.get(i); buffer.data.put(data); } codecReconfigurationState = RECONFIGURATION_STATE_QUEUE_PENDING; } int adaptiveReconfigurationBytes = buffer.data.position(); FormatHolder formatHolder = getFormatHolder(); @SampleStream.ReadDataResult int result = readSource(formatHolder, buffer, /* formatRequired= */ false); if (hasReadStreamToEnd()) { // Notify output queue of the last buffer's timestamp. lastBufferInStreamPresentationTimeUs = largestQueuedPresentationTimeUs; } if (result == C.RESULT_NOTHING_READ) { return false; } if (result == C.RESULT_FORMAT_READ) { if (codecReconfigurationState == RECONFIGURATION_STATE_QUEUE_PENDING) { // We received two formats in a row. Clear the current buffer of any reconfiguration data // associated with the first format. buffer.clear(); codecReconfigurationState = RECONFIGURATION_STATE_WRITE_PENDING; } onInputFormatChanged(formatHolder); return true; } // We've read a buffer.读取到了buffer if (buffer.isEndOfStream()) { if (codecReconfigurationState == RECONFIGURATION_STATE_QUEUE_PENDING) { // We received a new format immediately before the end of the stream. We need to clear // the corresponding reconfiguration data from the current buffer, but re-write it into // a subsequent buffer if there are any (for example, if the user seeks backwards). buffer.clear(); codecReconfigurationState = RECONFIGURATION_STATE_WRITE_PENDING; } inputStreamEnded = true; if (!codecReceivedBuffers) { processEndOfStream(); return false; } try { if (codecNeedsEosPropagation) { // Do nothing. 
} else { codecReceivedEos = true; codec.queueInputBuffer( inputIndex, /* offset= */ 0, /* size= */ 0, /* presentationTimeUs= */ 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM); resetInputBuffer(); } } catch (CryptoException e) { throw createRendererException(e, inputFormat); } return false; } // This logic is required for cases where the decoder needs to be flushed or re-instantiated // during normal consumption of samples from the source (i.e., without a corresponding // Renderer.enable or Renderer.resetPosition call). This is necessary for certain legacy and // workaround behaviors, for example when switching the output Surface on API levels prior to // the introduction of MediaCodec.setOutputSurface. if (!codecReceivedBuffers && !buffer.isKeyFrame()) { buffer.clear(); if (codecReconfigurationState == RECONFIGURATION_STATE_QUEUE_PENDING) { // The buffer we just cleared contained reconfiguration data. We need to re-write this data // into a subsequent buffer (if there is one). codecReconfigurationState = RECONFIGURATION_STATE_WRITE_PENDING; } return true; } boolean bufferEncrypted = buffer.isEncrypted(); if (bufferEncrypted) { buffer.cryptoInfo.increaseClearDataFirstSubSampleBy(adaptiveReconfigurationBytes); } if (codecNeedsDiscardToSpsWorkaround && !bufferEncrypted) { NalUnitUtil.discardToSps(buffer.data); if (buffer.data.position() == 0) { return true; } codecNeedsDiscardToSpsWorkaround = false; } long presentationTimeUs = buffer.timeUs; if (c2Mp3TimestampTracker != null) { presentationTimeUs = c2Mp3TimestampTracker.updateAndGetPresentationTimeUs(inputFormat, buffer); } if (buffer.isDecodeOnly()) { decodeOnlyPresentationTimestamps.add(presentationTimeUs); } if (waitingForFirstSampleInFormat) { formatQueue.add(presentationTimeUs, inputFormat); waitingForFirstSampleInFormat = false; } // TODO(b/158483277): Find the root cause of why a gap is introduced in MP3 playback when using // presentationTimeUs from the c2Mp3TimestampTracker. if (c2Mp3TimestampTracker != null) { largestQueuedPresentationTimeUs = max(largestQueuedPresentationTimeUs, buffer.timeUs); } else { largestQueuedPresentationTimeUs = max(largestQueuedPresentationTimeUs, presentationTimeUs); } buffer.flip(); if (buffer.hasSupplementalData()) { handleInputBufferSupplementalData(buffer); } onQueueInputBuffer(buffer); try { if (bufferEncrypted) { codec.queueSecureInputBuffer( inputIndex, /* offset= */ 0, buffer.cryptoInfo, presentationTimeUs, /* flags= */ 0); } else { codec.queueInputBuffer( inputIndex, /* offset= */ 0, buffer.data.limit(), presentationTimeUs, /* flags= */ 0); } } catch (CryptoException e) { throw createRendererException(e, inputFormat); } resetInputBuffer(); codecReceivedBuffers = true; codecReconfigurationState = RECONFIGURATION_STATE_NONE; decoderCounters.inputBufferCount++; return true; }

The data is handed to the codec adapter:

/** * Submit an input buffer for decoding. * * <p>The {@code index} must be an input buffer index that has been obtained from a previous call * to {@link #dequeueInputBufferIndex()}. * * @see MediaCodec#queueInputBuffer */ void queueInputBuffer(int index, int offset, int size, long presentationTimeUs, int flags);

which calls the platform API:

@Override
public void queueInputBuffer(
    int index, int offset, int size, long presentationTimeUs, int flags) {
  codec.queueInputBuffer(index, offset, size, presentationTimeUs, flags);
}

And that's how video gets consumed....

2, Default audio decoding

Audio is mostly the same as video, and even inherits the same code; see the MediaCodecRenderer class diagram.

As in the default video decoding, two places are clearly left to the subclass to implement: onOutputFormatChanged(), called when the output format first becomes known, and processOutputBuffer(), which performs the actual output.

2.1) Output format change: onOutputFormatChanged()

Based on the format, this determines how output happens, e.g. passthrough output:

/** * Called when one of the output formats changes. * * <p>The default implementation is a no-op. * * @param format The input {@link Format} to which future output now corresponds. If the renderer * is in bypass mode, this is also the output format. * @param mediaFormat The codec output {@link MediaFormat}, or {@code null} if the renderer is in * bypass mode. * @throws ExoPlaybackException Thrown if an error occurs configuring the output. */ protected void onOutputFormatChanged(Format format, @Nullable MediaFormat mediaFormat) throws ExoPlaybackException { // Do nothing. }

The audio implementation:

@Override protected void onOutputFormatChanged(Format format, @Nullable MediaFormat mediaFormat) throws ExoPlaybackException { Format audioSinkInputFormat; @Nullable int[] channelMap = null; if (decryptOnlyCodecFormat != null) { // Direct playback with a codec for decryption.使用编解码器直接播放以解密。//仅在直通和分载中用于DRM解密的编解码器。 audioSinkInputFormat = decryptOnlyCodecFormat; } else if (getCodec() == null) { // Direct playback with codec bypass.使用编解码器旁路直接播放。 audioSinkInputFormat = format; } else { @C.PcmEncoding int pcmEncoding; if (MimeTypes.AUDIO_RAW.equals(format.sampleMimeType)) { //audio/raw // For PCM streams, the encoder passes through int samples despite set to float mode. pcmEncoding = format.pcmEncoding; } else if (Util.SDK_INT >= 24 && mediaFormat.containsKey(MediaFormat.KEY_PCM_ENCODING)) { //pcm-encoding pcmEncoding = mediaFormat.getInteger(MediaFormat.KEY_PCM_ENCODING); } else if (mediaFormat.containsKey(VIVO_BITS_PER_SAMPLE_KEY)) { //v-bits-per-sample pcmEncoding = Util.getPcmEncoding(mediaFormat.getInteger(VIVO_BITS_PER_SAMPLE_KEY)); } else { // If the format is anything other than PCM then we assume that the audio decoder will // output 16-bit PCM.如果格式不是PCM,则我们假设音频解码器将输出16位PCM。 pcmEncoding = MimeTypes.AUDIO_RAW.equals(format.sampleMimeType) ? format.pcmEncoding : C.ENCODING_PCM_16BIT; } audioSinkInputFormat = new Format.Builder() .setSampleMimeType(MimeTypes.AUDIO_RAW) .setPcmEncoding(pcmEncoding) .setEncoderDelay(format.encoderDelay) .setEncoderPadding(format.encoderPadding) .setChannelCount(mediaFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT)) .setSampleRate(mediaFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE)) .build(); if (codecNeedsDiscardChannelsWorkaround && audioSinkInputFormat.channelCount == 6 && format.channelCount < 6) { channelMap = new int[format.channelCount]; for (int i = 0; i < format.channelCount; i++) { channelMap[i] = i; } } } try { Logger.w(TAG,"onOutputFormatChanged",audioSinkInputFormat,channelMap,format,decryptOnlyCodecFormat,mediaFormat);//Format(2, null, null, audio/ac3, null, -1, en, [-1, -1, -1.0], [6, 48000]),=format audioSink.configure(audioSinkInputFormat, /* specifiedBufferSize= */ 0, channelMap); } catch (AudioSink.ConfigurationException e) { throw createRendererException(e, e.format); } }

Based on the current environment, this is passed on to audioSink.configure():

/** * Configures (or reconfigures) the sink. * 配置(或重新配置)接收器。 * * 参数: * inputFormat –输入缓冲区中提供的音频数据的格式。 * 指定缓冲区大小–回放缓冲区的特定大小(以字节为单位),或0表示合适的缓冲区大小。 * outputChannels –从输入到输出通道的映射,如果处理PCM输入,则作为预处理步骤应用于此接收器的输入。 * 指定null可使输入保持不变。 否则,索引i处的元素指定在预处理输入缓冲区时映射到输出通道i的输入通道的索引。 * 应用映射后,音频数据将具有outputChannels.length通道。 * * @param inputFormat The format of audio data provided in the input buffers. * @param specifiedBufferSize A specific size for the playback buffer in bytes, or 0 to infer a * suitable buffer size. * @param outputChannels A mapping from input to output channels that is applied to this sink's * input as a preprocessing step, if handling PCM input. Specify {@code null} to leave the * input unchanged. Otherwise, the element at index {@code i} specifies index of the input * channel to map to output channel {@code i} when preprocessing input buffers. After the map * is applied the audio data will have {@code outputChannels.length} channels. * @throws ConfigurationException If an error occurs configuring the sink. */ void configure(Format inputFormat, int specifiedBufferSize, @Nullable int[] outputChannels) throws ConfigurationException;

The implementation in the default DefaultAudioSink:

@Override public void configure(Format inputFormat, int specifiedBufferSize, @Nullable int[] outputChannels) throws ConfigurationException { int inputPcmFrameSize; @Nullable AudioProcessor[] availableAudioProcessors; @OutputMode int outputMode; @C.Encoding int outputEncoding; int outputSampleRate; int outputChannelConfig; int outputPcmFrameSize; Logger.w(TAG,"configure方法",inputFormat.toString(),specifiedBufferSize,outputChannels);//Format(2, null, null, audio/ac3, null, -1, en, [-1, -1, -1.0], [6, 48000]),0,null if (MimeTypes.AUDIO_RAW.equals(inputFormat.sampleMimeType)) { Assertions.checkArgument(Util.isEncodingLinearPcm(inputFormat.pcmEncoding)); inputPcmFrameSize = Util.getPcmFrameSize(inputFormat.pcmEncoding, inputFormat.channelCount); availableAudioProcessors = shouldUseFloatOutput(inputFormat.pcmEncoding) ? toFloatPcmAvailableAudioProcessors : toIntPcmAvailableAudioProcessors; trimmingAudioProcessor.setTrimFrameCount( inputFormat.encoderDelay, inputFormat.encoderPadding); if (Util.SDK_INT < 21 && inputFormat.channelCount == 8 && outputChannels == null) { // AudioTrack doesn't support 8 channel output before Android L. Discard the last two (side) // channels to give a 6 channel stream that is supported. outputChannels = new int[6]; for (int i = 0; i < outputChannels.length; i++) { outputChannels[i] = i; } } channelMappingAudioProcessor.setChannelMap(outputChannels); AudioProcessor.AudioFormat outputFormat = new AudioProcessor.AudioFormat( inputFormat.sampleRate, inputFormat.channelCount, inputFormat.pcmEncoding); for (AudioProcessor audioProcessor : availableAudioProcessors) { try { AudioProcessor.AudioFormat nextFormat = audioProcessor.configure(outputFormat); if (audioProcessor.isActive()) { outputFormat = nextFormat; } } catch (UnhandledAudioFormatException e) { throw new ConfigurationException(e, inputFormat); } } outputMode = OUTPUT_MODE_PCM; outputEncoding = outputFormat.encoding; outputSampleRate = outputFormat.sampleRate; outputChannelConfig = Util.getAudioTrackChannelConfig(outputFormat.channelCount); outputPcmFrameSize = Util.getPcmFrameSize(outputEncoding, outputFormat.channelCount); } else { inputPcmFrameSize = C.LENGTH_UNSET; availableAudioProcessors = new AudioProcessor[0]; outputSampleRate = inputFormat.sampleRate; outputPcmFrameSize = C.LENGTH_UNSET; Logger.w(TAG,"configure方法x2",enableOffload,isOffloadedPlaybackSupported(inputFormat, audioAttributes));//false,false if (enableOffload && isOffloadedPlaybackSupported(inputFormat, audioAttributes)) { outputMode = OUTPUT_MODE_OFFLOAD; outputEncoding = MimeTypes.getEncoding( Assertions.checkNotNull(inputFormat.sampleMimeType), inputFormat.codecs); outputChannelConfig = Util.getAudioTrackChannelConfig(inputFormat.channelCount); } else { outputMode = OUTPUT_MODE_PASSTHROUGH;//直通输出 @Nullable Pair<Integer, Integer> encodingAndChannelConfig = getEncodingAndChannelConfigForPassthrough(inputFormat, audioCapabilities); Logger.w("pass though x2",encodingAndChannelConfig);//Pair{5 252} if (encodingAndChannelConfig == null) { throw new ConfigurationException( "Unable to configure passthrough for: " + inputFormat, inputFormat); } outputEncoding = encodingAndChannelConfig.first;//5 outputChannelConfig = encodingAndChannelConfig.second;//252 } } if (outputEncoding == C.ENCODING_INVALID) { throw new ConfigurationException( "Invalid output encoding (mode=" + outputMode + ") for: " + inputFormat, inputFormat); } if (outputChannelConfig == AudioFormat.CHANNEL_INVALID) { throw new ConfigurationException( "Invalid output channel config (mode=" 
+ outputMode + ") for: " + inputFormat, inputFormat); } offloadDisabledUntilNextConfiguration = false; Configuration pendingConfiguration = new Configuration( inputFormat, inputPcmFrameSize, outputMode, outputPcmFrameSize, outputSampleRate, outputChannelConfig, outputEncoding, specifiedBufferSize, enableAudioTrackPlaybackParams, availableAudioProcessors); if (isAudioTrackInitialized()) { this.pendingConfiguration = pendingConfiguration; } else { configuration = pendingConfiguration; } }

So, in order, the checks go through raw/PCM decoded output -> audio offload -> and finally passthrough output.

In other words, passthrough output is selected automatically....
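Boiled down, the selection in configure() is: PCM when the input is raw audio, otherwise offload when it is enabled and supported, otherwise passthrough. Roughly (condensed from the code above):

// Condensed sketch of DefaultAudioSink.configure()'s output mode selection.
@OutputMode int outputMode;
if (MimeTypes.AUDIO_RAW.equals(inputFormat.sampleMimeType)) {
  outputMode = OUTPUT_MODE_PCM; // decoded PCM, goes through the audio processors
} else if (enableOffload && isOffloadedPlaybackSupported(inputFormat, audioAttributes)) {
  outputMode = OUTPUT_MODE_OFFLOAD; // compressed audio handed to the platform for offload
} else {
  outputMode = OUTPUT_MODE_PASSTHROUGH; // compressed audio written straight to the AudioTrack
}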

Odd: I changed the HDMI and SPDIF output options in the system audio settings back and forth between decode and passthrough while outputting AC3, yet the output mode here was always passthrough....

@Documented
@Retention(RetentionPolicy.SOURCE)
@IntDef({OUTPUT_MODE_PCM, OUTPUT_MODE_OFFLOAD, OUTPUT_MODE_PASSTHROUGH})
private @interface OutputMode {}

Even though the system audio settings were set to software decoding, for a supported format the outputMode here was still passthrough; in actual testing it was software-decoded though, so it presumably gets corrected again somewhere later.

public Configuration( Format inputFormat, int inputPcmFrameSize, @OutputMode int outputMode, int outputPcmFrameSize, int outputSampleRate, int outputChannelConfig, int outputEncoding, int specifiedBufferSize, boolean enableAudioTrackPlaybackParams, AudioProcessor[] availableAudioProcessors) { //Format(2, null, null, audio/ac3, null, -1, en, [-1, -1, -1.0], [6, 48000]),-1,2,-1,48000,252,5,0,false,[Lcom.google.android.exoplayer2.audio.AudioProcessor;@bccb2c3 - ac3 5.1测试 //Format(null, null, null, audio/raw, null, -1, null, [-1, -1, -1.0], [2, 44100]),4,0,4,44100,12,2,0,false,[Lcom.google.android.exoplayer2.audio.AudioProcessor;@b3cbfbe - 4k Istanbul VP9 + Vorbis //Format(2, null, null, audio/raw, null, -1, en, [-1, -1, -1.0], [6, 96000]),18,0,12,96000,252,2,0,false,[Lcom.google.android.exoplayer2.audio.AudioProcessor;@d6c9e53 - lpcm 5.1 //:Format(null, null, null, audio/raw, null, -1, null, [-1, -1, -1.0], [6, 48000]),12,0,12,48000,252,2,0,false,[Lcom.google.android.exoplayer2.audio.AudioProcessor;@636559d - aac Logger.w(TAG,"Configuration构造",inputFormat,inputPcmFrameSize,outputMode,outputPcmFrameSize,outputSampleRate,outputChannelConfig,outputEncoding,specifiedBufferSize,enableAudioTrackPlaybackParams,availableAudioProcessors); this.inputFormat = inputFormat; this.inputPcmFrameSize = inputPcmFrameSize; this.outputMode = outputMode; this.outputPcmFrameSize = outputPcmFrameSize; this.outputSampleRate = outputSampleRate; this.outputChannelConfig = outputChannelConfig; this.outputEncoding = outputEncoding; this.availableAudioProcessors = availableAudioProcessors; // Call computeBufferSize() last as it 取决于其他配置值depends on the other configuration values. this.bufferSize = computeBufferSize(specifiedBufferSize, enableAudioTrackPlaybackParams); }

I captured AC3, Vorbis, LPCM and AAC streams; except for AC3, which passthrough supports, everything else went through decoded output.

After new-ing the Configuration, isAudioTrackInitialized() is checked; if there is no AudioTrack yet, the configuration is kept for later, when the AudioTrack gets created....


2.2) Processing the output: processOutputBuffer()

@Override protected boolean processOutputBuffer( long positionUs, long elapsedRealtimeUs, @Nullable MediaCodecAdapter codec, @Nullable ByteBuffer buffer, int bufferIndex, int bufferFlags, int sampleCount, long bufferPresentationTimeUs, boolean isDecodeOnlyBuffer, boolean isLastBuffer, Format format) throws ExoPlaybackException { checkNotNull(buffer); if (decryptOnlyCodecFormat != null && (bufferFlags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) { // Discard output buffers from the passthrough (raw) decoder containing codec specific data. checkNotNull(codec).releaseOutputBuffer(bufferIndex, false); return true; } if (isDecodeOnlyBuffer) { if (codec != null) { codec.releaseOutputBuffer(bufferIndex, false); } decoderCounters.skippedOutputBufferCount += sampleCount; audioSink.handleDiscontinuity(); return true; } boolean fullyConsumed;//完全消耗 try { fullyConsumed = audioSink.handleBuffer(buffer, bufferPresentationTimeUs, sampleCount); } catch (InitializationException e) { throw createRendererException(e, e.format, e.isRecoverable); } catch (WriteException e) { throw createRendererException(e, format, e.isRecoverable); } if (fullyConsumed) { if (codec != null) { codec.releaseOutputBuffer(bufferIndex, false); } decoderCounters.renderedOutputBufferCount += sampleCount; return true; } return false; }

The AudioSink interface method used to consume the data:

/** * Attempts to process data from a {@link ByteBuffer}, starting from its current position and * ending at its limit (exclusive). The position of the {@link ByteBuffer} is advanced by the * number of bytes that were handled. {@link Listener#onPositionDiscontinuity()} will be called if * {@code presentationTimeUs} is discontinuous with the last buffer handled since the last reset. * 尝试处理ByteBuffer的数据,从其当前位置开始,直到其限制(不包括限制)。 * ByteBuffer的位置提前处理的字节数。 如果presentationTimeUs与自上次重置以来处理的最后一个缓冲区不连续,则将调用AudioSink.Listener.onPositionDiscontinuity()。 * 返回数据是否已全部处理。 如果未对数据进行完整处理,则必须将相同的ByteBuffer提供给后续调用,直到完全消耗完为止, * 除非是对flush()(或configure(Format,int,int [])的中间调用) 导致水槽被冲洗)。 * * <p>Returns whether the data was handled in full. If the data was not handled in full then the * same {@link ByteBuffer} must be provided to subsequent calls until it has been fully consumed, * except in the case of an intervening call to {@link #flush()} (or to {@link #configure(Format, * int, int[])} that causes the sink to be flushed). * * @param buffer The buffer containing audio data. * @param presentationTimeUs The presentation timestamp of the buffer in microseconds. * @param encodedAccessUnitCount The number of encoded access units in the buffer, or 1 if the * buffer contains PCM audio. This allows batching multiple encoded access units in one * buffer. * @return Whether the buffer was handled fully. * @throws InitializationException If an error occurs initializing the sink. * @throws WriteException If an error occurs writing the audio data. */ boolean handleBuffer(ByteBuffer buffer, long presentationTimeUs, int encodedAccessUnitCount) throws InitializationException, WriteException;

The default implementation:

@Override @SuppressWarnings("ReferenceEquality") public boolean handleBuffer( ByteBuffer buffer, long presentationTimeUs, int encodedAccessUnitCount) throws InitializationException, WriteException { Assertions.checkArgument(inputBuffer == null || buffer == inputBuffer); if (pendingConfiguration != null) { if (!drainToEndOfStream()) { // There's still pending data in audio processors to write to the track. return false; } else if (!pendingConfiguration.canReuseAudioTrack(configuration)) { playPendingData(); if (hasPendingData()) { // We're waiting for playout on the current audio track to finish. return false; } flush(); } else { // The current audio track can be reused for the new configuration. configuration = pendingConfiguration; pendingConfiguration = null; if (isOffloadedPlayback(audioTrack)) { audioTrack.setOffloadEndOfStream(); audioTrack.setOffloadDelayPadding( configuration.inputFormat.encoderDelay, configuration.inputFormat.encoderPadding); isWaitingForOffloadEndOfStreamHandled = true; } } // Re-apply playback parameters. applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs); } if (!isAudioTrackInitialized()) { try { initializeAudioTrack(); //初始化 audioTrack } catch (InitializationException e) { if (e.isRecoverable) { throw e; // Do not delay the exception if it can be recovered at higher level. } initializationExceptionPendingExceptionHolder.throwExceptionIfDeadlineIsReached(e); return false; } } initializationExceptionPendingExceptionHolder.clear(); if (startMediaTimeUsNeedsInit) { startMediaTimeUs = max(0, presentationTimeUs); startMediaTimeUsNeedsSync = false; startMediaTimeUsNeedsInit = false; if (enableAudioTrackPlaybackParams && Util.SDK_INT >= 23) { setAudioTrackPlaybackParametersV23(audioTrackPlaybackParameters); } applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs); if (playing) { play();//audioTrack开始播放 } } if (!audioTrackPositionTracker.mayHandleBuffer(getWrittenFrames())) { return false; } if (inputBuffer == null) { // We are seeing this buffer for the first time. Assertions.checkArgument(buffer.order() == ByteOrder.LITTLE_ENDIAN); if (!buffer.hasRemaining()) { // The buffer is empty. return true; } if (configuration.outputMode != OUTPUT_MODE_PCM && framesPerEncodedSample == 0) { // If this is the first encoded sample, calculate the sample size in frames. framesPerEncodedSample = getFramesPerEncodedSample(configuration.outputEncoding, buffer); if (framesPerEncodedSample == 0) { // We still don't know the number of frames per sample, so drop the buffer. // For TrueHD this can occur after some seek operations, as not every sample starts with // a syncframe header. If we chunked samples together so the extracted samples always // started with a syncframe header, the chunks would be too large. return true; } } if (afterDrainParameters != null) { if (!drainToEndOfStream()) { // Don't process any more input until draining completes. return false; } applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs); afterDrainParameters = null; } // Check that presentationTimeUs is consistent with the expected value. 
long expectedPresentationTimeUs = startMediaTimeUs + configuration.inputFramesToDurationUs( getSubmittedFrames() - trimmingAudioProcessor.getTrimmedFrameCount()); if (!startMediaTimeUsNeedsSync && Math.abs(expectedPresentationTimeUs - presentationTimeUs) > 200000) { Log.e( TAG, "Discontinuity detected [expected " + expectedPresentationTimeUs + ", got " + presentationTimeUs + "]"); startMediaTimeUsNeedsSync = true; } if (startMediaTimeUsNeedsSync) { if (!drainToEndOfStream()) { // Don't update timing until pending AudioProcessor buffers are completely drained. return false; } // Adjust startMediaTimeUs to be consistent with the current buffer's start time and the // number of bytes submitted. long adjustmentUs = presentationTimeUs - expectedPresentationTimeUs; startMediaTimeUs += adjustmentUs; startMediaTimeUsNeedsSync = false; // Re-apply playback parameters because the startMediaTimeUs changed. applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs); if (listener != null && adjustmentUs != 0) { listener.onPositionDiscontinuity(); } } if (configuration.outputMode == OUTPUT_MODE_PCM) { submittedPcmBytes += buffer.remaining(); } else { submittedEncodedFrames += framesPerEncodedSample * encodedAccessUnitCount; } inputBuffer = buffer; inputBufferAccessUnitCount = encodedAccessUnitCount; } processBuffers(presentationTimeUs);//调用02篇的build时创建的音频处理器处理 if (!inputBuffer.hasRemaining()) { inputBuffer = null; inputBufferAccessUnitCount = 0; return true; } if (audioTrackPositionTracker.isStalled(getWrittenFrames())) { Log.w(TAG, "Resetting stalled audio track"); flush(); return true; } return false; }

If nothing has been initialized yet, the AudioTrack is initialized first:

private void initializeAudioTrack() throws InitializationException { // If we're asynchronously releasing a previous audio track then we block until it has been // released. This guarantees that we cannot end up in a state where we have multiple audio // track instances. Without this guarantee it would be possible, in extreme cases, to exhaust // the shared memory that's available for audio track buffers. This would in turn cause the // initialization of the audio track to fail.如果我们异步释放先前的音轨,则我们将阻止它直到其被释放为止。 // 这保证了我们不会最终陷入拥有多个音轨实例的状态。 没有这种保证,在极端情况下,可能会耗尽可用于音轨缓冲区的共享内存。 这继而将导致音频轨道的初始化失败。 releasingConditionVariable.block(); audioTrack = buildAudioTrack(); if (isOffloadedPlayback(audioTrack)) { registerStreamEventCallbackV29(audioTrack); audioTrack.setOffloadDelayPadding( configuration.inputFormat.encoderDelay, configuration.inputFormat.encoderPadding); } audioSessionId = audioTrack.getAudioSessionId(); audioTrackPositionTracker.setAudioTrack( audioTrack, /* isPassthrough= */ configuration.outputMode == OUTPUT_MODE_PASSTHROUGH,//直通2 configuration.outputEncoding, configuration.outputPcmFrameSize, configuration.bufferSize); setVolumeInternal();//音量 if (auxEffectInfo.effectId != AuxEffectInfo.NO_AUX_EFFECT_ID) { audioTrack.attachAuxEffect(auxEffectInfo.effectId); audioTrack.setAuxEffectSendLevel(auxEffectInfo.sendLevel); } startMediaTimeUsNeedsInit = true; }

Initializing the AudioTrack starts by building one, using the configuration generated just before:

private AudioTrack buildAudioTrack() throws InitializationException { try { return Assertions.checkNotNull(configuration)//拿config了 .buildAudioTrack(tunneling, audioAttributes, audioSessionId); } catch (InitializationException e) { maybeDisableOffload(); if (listener != null) { listener.onAudioSinkError(e); } throw e; } }

Android's notoriety for version fragmentation isn't undeserved:

private AudioTrack createAudioTrack( boolean tunneling, AudioAttributes audioAttributes, int audioSessionId) { if (Util.SDK_INT >= 29) { return createAudioTrackV29(tunneling, audioAttributes, audioSessionId); } else if (Util.SDK_INT >= 21) { return createAudioTrackV21(tunneling, audioAttributes, audioSessionId); } else { return createAudioTrackV9(audioAttributes, audioSessionId); } } @RequiresApi(29) private AudioTrack createAudioTrackV29( boolean tunneling, AudioAttributes audioAttributes, int audioSessionId) { AudioFormat audioFormat = getAudioFormat(outputSampleRate, outputChannelConfig, outputEncoding); android.media.AudioAttributes audioTrackAttributes = getAudioTrackAttributesV21(audioAttributes, tunneling); return new AudioTrack.Builder() .setAudioAttributes(audioTrackAttributes) .setAudioFormat(audioFormat) .setTransferMode(AudioTrack.MODE_STREAM) .setBufferSizeInBytes(bufferSize) .setSessionId(audioSessionId) .setOffloadedPlayback(outputMode == OUTPUT_MODE_OFFLOAD) .build(); }

The API level on my device isn't that high, though....

@RequiresApi(21) private AudioTrack createAudioTrackV21( boolean tunneling, AudioAttributes audioAttributes, int audioSessionId) { //false,AudioAttributes{contentType=0, flags=0, usage=1, allowedCapturePolicy=1, audioAttributesV21=null},241,48000,252,5,40000 Logger.w(TAG,"createAudioTrackV21",tunneling,audioAttributes,audioSessionId,outputSampleRate,outputChannelConfig,outputEncoding,bufferSize); return new AudioTrack( getAudioTrackAttributesV21(audioAttributes, tunneling), getAudioFormat(outputSampleRate, outputChannelConfig, outputEncoding), bufferSize, AudioTrack.MODE_STREAM, audioSessionId); }

Then the SDK's AudioTrack API is called:

/**
 * Class constructor with {@link AudioAttributes} and {@link AudioFormat}.
 * @param attributes a non-null {@link AudioAttributes} instance.
 * @param format a non-null {@link AudioFormat} instance describing the format of the data
 *     that will be played through this AudioTrack. See {@link AudioFormat.Builder} for
 *     configuring the audio format parameters such as encoding, channel mask and sample rate.
 * @param bufferSizeInBytes the total size (in bytes) of the internal buffer where audio data is
 *   read from for playback. This should be a nonzero multiple of the frame size in bytes.
 *   <p> If the track's creation mode is {@link #MODE_STATIC},
 *   this is the maximum length sample, or audio clip, that can be played by this instance.
 *   <p> If the track's creation mode is {@link #MODE_STREAM},
 *   this should be the desired buffer size
 *   for the <code>AudioTrack</code> to satisfy the application's
 *   latency requirements.
 *   If <code>bufferSizeInBytes</code> is less than the
 *   minimum buffer size for the output sink, it is increased to the minimum
 *   buffer size.
 *   The method {@link #getBufferSizeInFrames()} returns the
 *   actual size in frames of the buffer created, which
 *   determines the minimum frequency to write
 *   to the streaming <code>AudioTrack</code> to avoid underrun.
 *   See {@link #getMinBufferSize(int, int, int)} to determine the estimated minimum buffer size
 *   for an AudioTrack instance in streaming mode.
 * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}.
 * @param sessionId ID of audio session the AudioTrack must be attached to, or
 *   {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction
 *   time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before
 *   construction.
 * @throws IllegalArgumentException
 */
public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int mode, int sessionId)
                throws IllegalArgumentException {
    this(attributes, format, bufferSizeInBytes, mode, sessionId, false /*offload*/,
            ENCAPSULATION_MODE_NONE, null /* tunerConfiguration */);
}
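For reference, the bufferSizeInBytes handed to this constructor is commonly derived from AudioTrack.getMinBufferSize(). A stand-alone sketch using the plain Android API (PCM 16-bit stereo assumed; this is not ExoPlayer's exact buffer-size computation):

// Sketch: sizing and creating a streaming AudioTrack on API 21+.
int sampleRate = 48_000;
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
int encoding = AudioFormat.ENCODING_PCM_16BIT;
int minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);

AudioTrack audioTrack = new AudioTrack(
    new android.media.AudioAttributes.Builder()
        .setUsage(android.media.AudioAttributes.USAGE_MEDIA)
        .setContentType(android.media.AudioAttributes.CONTENT_TYPE_MUSIC)
        .build(),
    new AudioFormat.Builder()
        .setSampleRate(sampleRate)
        .setChannelMask(channelConfig)
        .setEncoding(encoding)
        .build(),
    /* bufferSizeInBytes= */ minBufferSize * 4, // some headroom over the minimum
    AudioTrack.MODE_STREAM,
    AudioManager.AUDIO_SESSION_ID_GENERATE);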

The created AudioTrack is then handed to a position tracker....

/** * Wraps an {@link AudioTrack}, exposing a position based on {@link * AudioTrack#getPlaybackHeadPosition()} and {@link AudioTrack#getTimestamp(AudioTimestamp)}. * * <p>Call {@link #setAudioTrack(AudioTrack, boolean, int, int, int)} to set the audio track to * wrap. Call {@link #mayHandleBuffer(long)} if there is input data to write to the track. If it * returns false, the audio track position is stabilizing and no data may be written. Call {@link * #start()} immediately before calling {@link AudioTrack#play()}. Call {@link #pause()} when * pausing the track. Call {@link #handleEndOfStream(long)} when no more data will be written to the * track. When the audio track will no longer be used, call {@link #reset()}. * 包裹一个AudioTrack,基于AudioTrack.getPlaybackHeadPosition()和AudioTrack.getTimestamp(AudioTimestamp)公开一个位置。 * 调用setAudioTrack(AudioTrack,boolean,int,int,int)设置要包装的音轨。 * 如果有输入数据要写入轨道,则调用mayHandleBuffer(long)。 如果返回假,则表明音频轨道的位置稳定并且不能写入任何数据。 * 在调用AudioTrack.play()之前立即调用start()。 暂停曲目时,请调用pause()。 * 当没有更多数据写入轨道时,请调用handleEndOfStream(long)。 当不再使用音轨时,请调用reset()。 */ /* package */ final class AudioTrackPositionTracker {}

Handing it the AudioTrack:

/** * Sets the {@link AudioTrack} to wrap. Subsequent method calls on this instance relate to this * track's position, until the next call to {@link #reset()}. * 设置要环绕的AudioTrack。 在此实例上的后续方法调用与该轨道的位置有关,直到下一次对reset()的调用为止。 * * 参数: * audioTrack –要包装的音频轨道。 * isPassthrough –是否使用直通模式。 * outputEncoding –音轨的编码。 * outputPcmFrameSize –对于PCM输出编码,为帧大小。 否则将忽略该值。 * bufferSize –音频轨道缓冲区的大小(以字节为单位)。 * * @param audioTrack The audio track to wrap. * @param isPassthrough Whether passthrough mode is being used. * @param outputEncoding The encoding of the audio track. * @param outputPcmFrameSize For PCM output encodings, the frame size. The value is ignored * otherwise. * @param bufferSize The audio track buffer size in bytes. */ public void setAudioTrack( AudioTrack audioTrack, boolean isPassthrough, @C.Encoding int outputEncoding, int outputPcmFrameSize, int bufferSize) { Logger.w("AudioTrackPositionTracker.setAudiTrack",audioTrack,isPassthrough,outputEncoding,outputPcmFrameSize,bufferSize); this.audioTrack = audioTrack; this.outputPcmFrameSize = outputPcmFrameSize; this.bufferSize = bufferSize; audioTimestampPoller = new AudioTimestampPoller(audioTrack); outputSampleRate = audioTrack.getSampleRate(); needsPassthroughWorkarounds = isPassthrough && needsPassthroughWorkarounds(outputEncoding); isOutputPcm = Util.isEncodingLinearPcm(outputEncoding); bufferSizeUs = isOutputPcm ? framesToDurationUs(bufferSize / outputPcmFrameSize) : C.TIME_UNSET; lastRawPlaybackHeadPosition = 0; rawPlaybackHeadWrapCount = 0; passthroughWorkaroundPauseOffset = 0; hasData = false; stopTimestampUs = C.TIME_UNSET; forceResetWorkaroundTimeMs = C.TIME_UNSET; lastLatencySampleTimeUs = 0; latencyUs = 0; audioTrackPlaybackSpeed = 1f; }

Setting the volume....

private void setVolumeInternal() {
  if (!isAudioTrackInitialized()) {
    // Do nothing.
  } else if (Util.SDK_INT >= 21) {
    setVolumeInternalV21(audioTrack, volume);
  } else {
    setVolumeInternalV3(audioTrack, volume);
  }
}

Split by API level yet again:

@RequiresApi(21)
private static void setVolumeInternalV21(AudioTrack audioTrack, float volume) {
  audioTrack.setVolume(volume);
}

After that there is also the aux-effect setup.....

Initialization complete.

Then it checks whether this is the first time it is seeing this buffer....

Then it processes the buffers and writes the stream to the AudioTrack:

private void processBuffers(long avSyncPresentationTimeUs) throws WriteException { int count = activeAudioProcessors.length; int index = count; Logger.w(TAG,"processBuffers",avSyncPresentationTimeUs,count);//0,0 | 32000,0 | 64000,0|96000,0 while (index >= 0) { ByteBuffer input = index > 0 ? outputBuffers[index - 1] : (inputBuffer != null ? inputBuffer : AudioProcessor.EMPTY_BUFFER); if (index == count) { writeBuffer(input, avSyncPresentationTimeUs); } else { AudioProcessor audioProcessor = activeAudioProcessors[index]; if (index > drainingAudioProcessorIndex) { audioProcessor.queueInput(input); } ByteBuffer output = audioProcessor.getOutput(); outputBuffers[index] = output; if (output.hasRemaining()) { // Handle the output as input to the next audio processor or the AudioTrack. index++; continue; } } if (input.hasRemaining()) { // The input wasn't consumed and no output was produced, so give up for now. return; } // Get more input from upstream. index--; } }
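Conceptually the loop just pushes data through the active audio processors and finally into writeBuffer(). A toy sketch of that pipeline idea (hypothetical minimal interface, single forward pass only; the real code walks the chain index back and forth until every intermediate buffer is drained):

// Toy sketch: pipe a buffer through a chain of processors, then write the result.
interface SimpleProcessor {
  void queueInput(ByteBuffer input); // hand data to the processor
  ByteBuffer getOutput();            // fetch whatever it has produced so far
}

static void pipeThroughChain(ByteBuffer input, SimpleProcessor[] chain, AudioTrack track) {
  ByteBuffer current = input;
  for (SimpleProcessor processor : chain) {
    processor.queueInput(current);
    current = processor.getOutput();
  }
  track.write(current, current.remaining(), AudioTrack.WRITE_NON_BLOCKING);
}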

After processing, the data is written:

@SuppressWarnings("ReferenceEquality") private void writeBuffer(ByteBuffer buffer, long avSyncPresentationTimeUs) throws WriteException { if (!buffer.hasRemaining()) { return; } if (outputBuffer != null) { Assertions.checkArgument(outputBuffer == buffer); } else { outputBuffer = buffer; if (Util.SDK_INT < 21) { int bytesRemaining = buffer.remaining(); if (preV21OutputBuffer == null || preV21OutputBuffer.length < bytesRemaining) { preV21OutputBuffer = new byte[bytesRemaining]; } int originalPosition = buffer.position(); buffer.get(preV21OutputBuffer, 0, bytesRemaining); buffer.position(originalPosition); preV21OutputBufferOffset = 0; } } int bytesRemaining = buffer.remaining(); int bytesWrittenOrError = 0; // Error if negative if (Util.SDK_INT < 21) { // outputMode == OUTPUT_MODE_PCM. // Work out how many bytes we can write without the risk of blocking. int bytesToWrite = audioTrackPositionTracker.getAvailableBufferSize(writtenPcmBytes); if (bytesToWrite > 0) { bytesToWrite = min(bytesRemaining, bytesToWrite); bytesWrittenOrError = audioTrack.write(preV21OutputBuffer, preV21OutputBufferOffset, bytesToWrite); if (bytesWrittenOrError > 0) { // No error preV21OutputBufferOffset += bytesWrittenOrError; buffer.position(buffer.position() + bytesWrittenOrError); } } } else if (tunneling) { Assertions.checkState(avSyncPresentationTimeUs != C.TIME_UNSET); bytesWrittenOrError = writeNonBlockingWithAvSyncV21( audioTrack, buffer, bytesRemaining, avSyncPresentationTimeUs); } else { bytesWrittenOrError = writeNonBlockingV21(audioTrack, buffer, bytesRemaining); } lastFeedElapsedRealtimeMs = SystemClock.elapsedRealtime(); if (bytesWrittenOrError < 0) { int error = bytesWrittenOrError; boolean isRecoverable = isAudioTrackDeadObject(error); if (isRecoverable) { maybeDisableOffload(); } WriteException e = new WriteException(error, configuration.inputFormat, isRecoverable); if (listener != null) { listener.onAudioSinkError(e); } if (e.isRecoverable) { throw e; // Do not delay the exception if it can be recovered at higher level. } writeExceptionPendingExceptionHolder.throwExceptionIfDeadlineIsReached(e); return; } writeExceptionPendingExceptionHolder.clear(); int bytesWritten = bytesWrittenOrError; if (isOffloadedPlayback(audioTrack)) { // After calling AudioTrack.setOffloadEndOfStream, the AudioTrack internally stops and // restarts during which AudioTrack.write will return 0. This situation must be detected to // prevent reporting the buffer as full even though it is not which could lead ExoPlayer to // sleep forever waiting for a onDataRequest that will never come. if (writtenEncodedFrames > 0) { isWaitingForOffloadEndOfStreamHandled = false; } // Consider the offload buffer as full if the AudioTrack is playing and AudioTrack.write could // not write all the data provided to it. This relies on the assumption that AudioTrack.write // always writes as much as possible. if (playing && listener != null && bytesWritten < bytesRemaining && !isWaitingForOffloadEndOfStreamHandled) { long pendingDurationMs = audioTrackPositionTracker.getPendingBufferDurationMs(writtenEncodedFrames); listener.onOffloadBufferFull(pendingDurationMs); } } if (configuration.outputMode == OUTPUT_MODE_PCM) { writtenPcmBytes += bytesWritten; } if (bytesWritten == bytesRemaining) { if (configuration.outputMode != OUTPUT_MODE_PCM) { // When playing non-PCM, the inputBuffer is never processed, thus the last inputBuffer // must be the current input buffer. 
Assertions.checkState(buffer == inputBuffer); writtenEncodedFrames += framesPerEncodedSample * inputBufferAccessUnitCount; } outputBuffer = null; } }

Which calls AudioTrack's write method:

@RequiresApi(21)
private static int writeNonBlockingV21(AudioTrack audioTrack, ByteBuffer buffer, int size) {
  return audioTrack.write(buffer, size, AudioTrack.WRITE_NON_BLOCKING);
}
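A non-blocking write may consume only part of the buffer, which is exactly why handleBuffer() keeps returning false until everything has been accepted. A stand-alone sketch of that retry pattern using the plain Android API (not ExoPlayer code; a real player returns and retries on the next render() pass instead of sleeping):

// Sketch: keep offering the remaining bytes until the track has accepted them all.
static void writeFully(AudioTrack track, ByteBuffer buffer) throws InterruptedException {
  while (buffer.hasRemaining()) {
    int written = track.write(buffer, buffer.remaining(), AudioTrack.WRITE_NON_BLOCKING);
    if (written < 0) {
      throw new IllegalStateException("AudioTrack.write failed: " + written);
    }
    if (written == 0) {
      Thread.sleep(10); // track buffer full; back off briefly before retrying
    }
    // On success, write(ByteBuffer, ...) advances the buffer position itself.
  }
}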

Then AudioTrack calls into native code to do the output....

/**
 * Writes the audio data to the audio sink for playback (streaming mode),
 * or copies audio data for later playback (static buffer mode).
 * The audioData in ByteBuffer should match the format specified in the AudioTrack constructor.
 * <p>
 * In streaming mode, the blocking behavior depends on the write mode.  If the write mode is
 * {@link #WRITE_BLOCKING}, the write will normally block until all the data has been enqueued
 * for playback, and will return a full transfer count.  However, if the write mode is
 * {@link #WRITE_NON_BLOCKING}, or the track is stopped or paused on entry, or another thread
 * interrupts the write by calling stop or pause, or an I/O error
 * occurs during the write, then the write may return a short transfer count.
 * <p>
 * In static buffer mode, copies the data to the buffer starting at offset 0,
 * and the write mode is ignored.
 * Note that the actual playback of this data might occur after this function returns.
 *
 * @param audioData the buffer that holds the data to write, starting at the position reported
 *     by <code>audioData.position()</code>.
 *     <BR>Note that upon return, the buffer position (<code>audioData.position()</code>) will
 *     have been advanced to reflect the amount of data that was successfully written to
 *     the AudioTrack.
 * @param sizeInBytes number of bytes to write.  It is recommended but not enforced
 *     that the number of bytes requested be a multiple of the frame size (sample size in
 *     bytes multiplied by the channel count).
 *     <BR>Note this may differ from <code>audioData.remaining()</code>, but cannot exceed it.
 * @param writeMode one of {@link #WRITE_BLOCKING}, {@link #WRITE_NON_BLOCKING}. It has no
 *     effect in static mode.
 *     <BR>With {@link #WRITE_BLOCKING}, the write will block until all data has been written
 *         to the audio sink.
 *     <BR>With {@link #WRITE_NON_BLOCKING}, the write will return immediately after
 *     queuing as much audio data for playback as possible without blocking.
 * @return zero or the positive number of bytes that were written, or one of the following
 *    error codes.
 * <ul>
 * <li>{@link #ERROR_INVALID_OPERATION} if the track isn't properly initialized</li>
 * <li>{@link #ERROR_BAD_VALUE} if the parameters don't resolve to valid data and indexes</li>
 * <li>{@link #ERROR_DEAD_OBJECT} if the AudioTrack is not valid anymore and
 *    needs to be recreated. The dead object error code is not returned if some data was
 *    successfully transferred. In this case, the error is returned at the next write()</li>
 * <li>{@link #ERROR} in case of other error</li>
 * </ul>
 */
public int write(@NonNull ByteBuffer audioData, int sizeInBytes,
        @WriteMode int writeMode) {

    if (mState == STATE_UNINITIALIZED) {
        Log.e(TAG, "AudioTrack.write() called in invalid state STATE_UNINITIALIZED");
        return ERROR_INVALID_OPERATION;
    }

    if ((writeMode != WRITE_BLOCKING) && (writeMode != WRITE_NON_BLOCKING)) {
        Log.e(TAG, "AudioTrack.write() called with invalid blocking mode");
        return ERROR_BAD_VALUE;
    }

    if ( (audioData == null) || (sizeInBytes < 0) || (sizeInBytes > audioData.remaining())) {
        Log.e(TAG, "AudioTrack.write() called with invalid size (" + sizeInBytes + ") value");
        return ERROR_BAD_VALUE;
    }

    if (!blockUntilOffloadDrain(writeMode)) {
        return 0;
    }

    int ret = 0;
    if (audioData.isDirect()) {
        ret = native_write_native_bytes(audioData,
                audioData.position(), sizeInBytes, mAudioFormat,
                writeMode == WRITE_BLOCKING);
    } else {
        ret = native_write_byte(NioUtils.unsafeArray(audioData),
                NioUtils.unsafeArrayOffset(audioData) + audioData.position(),
                sizeInBytes, mAudioFormat,
                writeMode == WRITE_BLOCKING);
    }

    if ((mDataLoadMode == MODE_STATIC)
            && (mState == STATE_NO_STATIC_DATA)
            && (ret > 0)) {
        // benign race with respect to other APIs that read mState
        mState = STATE_INITIALIZED;
    }

    if (ret > 0) {
        audioData.position(audioData.position() + ret);
    }

    return ret;
}

And that's how the audio stream gets consumed....

2021年05月21日17:07:22

--
senRsl
2021年05月20日15:15:24
