
ExoPlayer Study 02: Creating SimpleExoPlayer

Monday, May 17, 2021



From the previous post, we already know SimpleExoPlayer's relationships:

SimpleExoPlayer

  - extends BasePlayer (which implements Player)
  - implements ExoPlayer (which extends Player)
  - implements Player.AudioComponent
  - implements Player.VideoComponent
  - implements Player.TextComponent
  - implements Player.MetadataComponent
  - implements Player.DeviceComponent

These relationships establish Player's importance, and its leadership role, within the organization....

but

Slogans are productivity: the revolutionary program has been drawn up and the lighthouse stands ahead; realizing it requires mobilizing the masses.

ExoPlayerImpl

    - extends BasePlayer (which implements Player)

    - implements ExoPlayer (which extends Player)

So ExoPlayerImpl and the BasePlayer it extends are the two that ultimately carry out the mass line.

1. Playback operations in the demo

demos/main/src/main/java/com/google/android/exoplayer2/demo/PlayerActivity.java

player = new SimpleExoPlayer.Builder(/* context= */ this).build();
player.setPlayWhenReady(startAutoPlay);
playerView.setPlayer(player);

player.setMediaItems(mediaItems, /* resetPosition= */ !haveStartPosition);
player.prepare();

First, create the player (a SimpleExoPlayer) through the Builder;

then enable auto-play;

point the player at the playerView layout;

hand the player its content as a list of MediaItems (a sketch of building such a list follows);

and tell the player to prepare.

With that, once loading finishes, playback starts automatically.
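
For reference, a minimal sketch of how such a mediaItems list could be built (the URL is a placeholder, not from the demo; MediaItem.fromUri and MediaItem.Builder are standard ExoPlayer APIs):

// Assumed placeholder URL; the demo builds its items from the selected sample.
List<MediaItem> mediaItems = new ArrayList<>();
mediaItems.add(MediaItem.fromUri("https://example.com/video.mp4"));
// Or, with explicit configuration:
MediaItem item =
    new MediaItem.Builder()
        .setUri("https://example.com/video.mp4")
        .setMimeType(MimeTypes.VIDEO_MP4)
        .build();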

2. Creating the player

All of the Builder constructors eventually funnel into this one:

/**
 * Creates a builder with the specified custom components.
 *
 * <p>Note that this constructor is only useful to try and ensure that ExoPlayer's default
 * components can be removed by ProGuard or R8.
 *
 * @param context A {@link Context}.
 * @param renderersFactory A factory for creating {@link Renderer Renderers} to be used by the
 *     player.
 * @param trackSelector A {@link TrackSelector}.
 * @param mediaSourceFactory A {@link MediaSourceFactory}.
 * @param loadControl A {@link LoadControl}.
 * @param bandwidthMeter A {@link BandwidthMeter}.
 * @param analyticsCollector An {@link AnalyticsCollector}.
 */
public Builder(
    Context context,
    RenderersFactory renderersFactory,
    TrackSelector trackSelector,
    MediaSourceFactory mediaSourceFactory,
    LoadControl loadControl,
    BandwidthMeter bandwidthMeter,
    AnalyticsCollector analyticsCollector) {
  this.context = context;
  this.renderersFactory = renderersFactory;
  this.trackSelector = trackSelector;
  this.mediaSourceFactory = mediaSourceFactory;
  this.loadControl = loadControl;
  this.bandwidthMeter = bandwidthMeter;
  this.analyticsCollector = analyticsCollector;
  looper = Util.getCurrentOrMainLooper();
  audioAttributes = AudioAttributes.DEFAULT;
  wakeMode = C.WAKE_MODE_NONE;
  videoScalingMode = C.VIDEO_SCALING_MODE_DEFAULT;
  useLazyPreparation = true;
  seekParameters = SeekParameters.DEFAULT;
  livePlaybackSpeedControl = new DefaultLivePlaybackSpeedControl.Builder().build();
  clock = Clock.DEFAULT;
  releaseTimeoutMs = ExoPlayer.DEFAULT_RELEASE_TIMEOUT_MS;
  detachSurfaceTimeoutMs = DEFAULT_DETACH_SURFACE_TIMEOUT_MS;
}

When building, you can pass in custom versions of several of these components, such as RenderersFactory, MediaSourceFactory, LoadControl, BandwidthMeter, and so on....
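
For example, a hedged sketch of plugging components in explicitly (these setters exist on SimpleExoPlayer.Builder; the components used here are just the defaults, passed by hand):

SimpleExoPlayer player =
    new SimpleExoPlayer.Builder(context, new DefaultRenderersFactory(context))
        .setTrackSelector(new DefaultTrackSelector(context))
        .setLoadControl(new DefaultLoadControl())
        .build();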

After the configuration is done, build() runs; it is the Builder's final method:

/**
 * Builds a {@link SimpleExoPlayer} instance.
 *
 * @throws IllegalStateException If this method has already been called.
 */
public SimpleExoPlayer build() {
  Assertions.checkState(!buildCalled);
  buildCalled = true;
  return new SimpleExoPlayer(/* builder= */ this);
}

In the end, it simply news up a SimpleExoPlayer.

/** @param builder The {@link Builder} to obtain all construction parameters. */
protected SimpleExoPlayer(Builder builder) {
  applicationContext = builder.context.getApplicationContext();
  analyticsCollector = builder.analyticsCollector;
  priorityTaskManager = builder.priorityTaskManager;
  audioAttributes = builder.audioAttributes;
  videoScalingMode = builder.videoScalingMode;
  skipSilenceEnabled = builder.skipSilenceEnabled;
  detachSurfaceTimeoutMs = builder.detachSurfaceTimeoutMs;
  componentListener = new ComponentListener();
  videoListeners = new CopyOnWriteArraySet<>();
  audioListeners = new CopyOnWriteArraySet<>();
  textOutputs = new CopyOnWriteArraySet<>();
  metadataOutputs = new CopyOnWriteArraySet<>();
  deviceListeners = new CopyOnWriteArraySet<>();
  Handler eventHandler = new Handler(builder.looper);
  renderers =
      builder.renderersFactory.createRenderers(
          eventHandler,
          componentListener,
          componentListener,
          componentListener,
          componentListener);

  // Set initial values.
  audioVolume = 1;
  if (Util.SDK_INT < 21) {
    audioSessionId = initializeKeepSessionIdAudioTrack(C.AUDIO_SESSION_ID_UNSET);
  } else {
    audioSessionId = C.generateAudioSessionIdV21(applicationContext);
  }
  currentCues = Collections.emptyList();
  throwsWhenUsingWrongThread = true;

  // Build the player and associated objects.
  player =
      new ExoPlayerImpl(
          renderers,
          builder.trackSelector,
          builder.mediaSourceFactory,
          builder.loadControl,
          builder.bandwidthMeter,
          analyticsCollector,
          builder.useLazyPreparation,
          builder.seekParameters,
          builder.livePlaybackSpeedControl,
          builder.releaseTimeoutMs,
          builder.pauseAtEndOfMediaItems,
          builder.clock,
          builder.looper,
          /* wrappingPlayer= */ this);
  player.addListener(componentListener);
  audioBecomingNoisyManager =
      new AudioBecomingNoisyManager(builder.context, eventHandler, componentListener);
  audioBecomingNoisyManager.setEnabled(builder.handleAudioBecomingNoisy);
  audioFocusManager = new AudioFocusManager(builder.context, eventHandler, componentListener);
  audioFocusManager.setAudioAttributes(builder.handleAudioFocus ? audioAttributes : null);
  streamVolumeManager = new StreamVolumeManager(builder.context, eventHandler, componentListener);
  streamVolumeManager.setStreamType(Util.getStreamTypeForAudioUsage(audioAttributes.usage));
  wakeLockManager = new WakeLockManager(builder.context);
  wakeLockManager.setEnabled(builder.wakeMode != C.WAKE_MODE_NONE);
  wifiLockManager = new WifiLockManager(builder.context);
  wifiLockManager.setEnabled(builder.wakeMode == C.WAKE_MODE_NETWORK);
  deviceInfo = createDeviceInfo(streamVolumeManager);
  sendRendererMessage(C.TRACK_TYPE_AUDIO, Renderer.MSG_SET_AUDIO_SESSION_ID, audioSessionId);
  sendRendererMessage(C.TRACK_TYPE_VIDEO, Renderer.MSG_SET_AUDIO_SESSION_ID, audioSessionId);
  sendRendererMessage(C.TRACK_TYPE_AUDIO, Renderer.MSG_SET_AUDIO_ATTRIBUTES, audioAttributes);
  sendRendererMessage(C.TRACK_TYPE_VIDEO, Renderer.MSG_SET_SCALING_MODE, videoScalingMode);
  sendRendererMessage(
      C.TRACK_TYPE_AUDIO, Renderer.MSG_SET_SKIP_SILENCE_ENABLED, skipSilenceEnabled);
}

First, renderers are created via builder.renderersFactory; then the renderers, together with the rest of the builder's parameters, are handed to ExoPlayerImpl, and the actual player is created.

From the previous post, we know ExoPlayerImpl is the one walking the mass line:

final class ExoPlayerImpl extends BasePlayer implements ExoPlayer {}

The `new ExoPlayerImpl(...)` inside SimpleExoPlayer, in turn, ends up creating an ExoPlayerImplInternal:

/**
 * Constructs an instance. Must be called from a thread that has an associated {@link Looper}.
 *
 * @param renderers The {@link Renderer}s.
 * @param trackSelector The {@link TrackSelector}.
 * @param mediaSourceFactory The {@link MediaSourceFactory}.
 * @param loadControl The {@link LoadControl}.
 * @param bandwidthMeter The {@link BandwidthMeter}.
 * @param analyticsCollector The {@link AnalyticsCollector}.
 * @param useLazyPreparation Whether playlist items are prepared lazily. If false, all manifest
 *     loads and other initial preparation steps happen immediately. If true, these initial
 *     preparations are triggered only when the player starts buffering the media.
 * @param seekParameters The {@link SeekParameters}.
 * @param livePlaybackSpeedControl The {@link LivePlaybackSpeedControl}.
 * @param releaseTimeoutMs The timeout for calls to {@link #release()} in milliseconds.
 * @param pauseAtEndOfMediaItems Whether to pause playback at the end of each media item.
 * @param clock The {@link Clock}.
 * @param applicationLooper The {@link Looper} that must be used for all calls to the player and
 *     which is used to call listeners on.
 * @param wrappingPlayer The {@link Player} wrapping this one if applicable. This player instance
 *     should be used for all externally visible callbacks.
 */
@SuppressLint("HandlerLeak")
public ExoPlayerImpl(
    Renderer[] renderers,
    TrackSelector trackSelector,
    MediaSourceFactory mediaSourceFactory,
    LoadControl loadControl,
    BandwidthMeter bandwidthMeter,
    @Nullable AnalyticsCollector analyticsCollector,
    boolean useLazyPreparation,
    SeekParameters seekParameters,
    LivePlaybackSpeedControl livePlaybackSpeedControl,
    long releaseTimeoutMs,
    boolean pauseAtEndOfMediaItems,
    Clock clock,
    Looper applicationLooper,
    @Nullable Player wrappingPlayer) {
  Log.i(
      TAG,
      "Init "
          + Integer.toHexString(System.identityHashCode(this))
          + " ["
          + ExoPlayerLibraryInfo.VERSION_SLASHY
          + "] ["
          + Util.DEVICE_DEBUG_INFO
          + "]");
  checkState(renderers.length > 0);
  this.renderers = checkNotNull(renderers);
  this.trackSelector = checkNotNull(trackSelector);
  this.mediaSourceFactory = mediaSourceFactory;
  this.bandwidthMeter = bandwidthMeter;
  this.analyticsCollector = analyticsCollector;
  this.useLazyPreparation = useLazyPreparation;
  this.seekParameters = seekParameters;
  this.pauseAtEndOfMediaItems = pauseAtEndOfMediaItems;
  this.applicationLooper = applicationLooper;
  this.clock = clock;
  repeatMode = Player.REPEAT_MODE_OFF;
  Player playerForListeners = wrappingPlayer != null ? wrappingPlayer : this;
  listeners =
      new ListenerSet<>(
          applicationLooper,
          clock,
          Player.Events::new,
          (listener, eventFlags) -> listener.onEvents(playerForListeners, eventFlags));
  mediaSourceHolderSnapshots = new ArrayList<>();
  shuffleOrder = new ShuffleOrder.DefaultShuffleOrder(/* length= */ 0);
  emptyTrackSelectorResult =
      new TrackSelectorResult(
          new RendererConfiguration[renderers.length],
          new ExoTrackSelection[renderers.length],
          /* info= */ null);
  period = new Timeline.Period();
  maskingWindowIndex = C.INDEX_UNSET;
  playbackInfoUpdateHandler = clock.createHandler(applicationLooper, /* callback= */ null);
  playbackInfoUpdateListener =
      playbackInfoUpdate ->
          playbackInfoUpdateHandler.post(() -> handlePlaybackInfo(playbackInfoUpdate));
  playbackInfo = PlaybackInfo.createDummy(emptyTrackSelectorResult);
  if (analyticsCollector != null) {
    analyticsCollector.setPlayer(playerForListeners, applicationLooper);
    addListener(analyticsCollector);
    bandwidthMeter.addEventListener(new Handler(applicationLooper), analyticsCollector);
  }
  internalPlayer =
      new ExoPlayerImplInternal(
          renderers,
          trackSelector,
          emptyTrackSelectorResult,
          loadControl,
          bandwidthMeter,
          repeatMode,
          shuffleModeEnabled,
          analyticsCollector,
          seekParameters,
          livePlaybackSpeedControl,
          releaseTimeoutMs,
          pauseAtEndOfMediaItems,
          applicationLooper,
          clock,
          playbackInfoUpdateListener);
}

This is also where the Timeline.Period() call shows up.

/** Implements the internal behavior of {@link ExoPlayerImpl}. */
/* package */ final class ExoPlayerImplInternal
    implements Handler.Callback,
        MediaPeriod.Callback,
        TrackSelector.InvalidationListener,
        MediaSourceList.MediaSourceListInfoRefreshListener,
        PlaybackParametersListener,
        PlayerMessage.Sender {}

The comment tells us this is the internal implementation behind ExoPlayerImpl.

Keep following the `new` chain:

public ExoPlayerImplInternal(
    Renderer[] renderers,
    TrackSelector trackSelector,
    TrackSelectorResult emptyTrackSelectorResult,
    LoadControl loadControl,
    BandwidthMeter bandwidthMeter,
    @Player.RepeatMode int repeatMode,
    boolean shuffleModeEnabled,
    @Nullable AnalyticsCollector analyticsCollector,
    SeekParameters seekParameters,
    LivePlaybackSpeedControl livePlaybackSpeedControl,
    long releaseTimeoutMs,
    boolean pauseAtEndOfWindow,
    Looper applicationLooper,
    Clock clock,
    PlaybackInfoUpdateListener playbackInfoUpdateListener) {
  this.playbackInfoUpdateListener = playbackInfoUpdateListener;
  this.renderers = renderers;
  this.trackSelector = trackSelector;
  this.emptyTrackSelectorResult = emptyTrackSelectorResult;
  this.loadControl = loadControl;
  this.bandwidthMeter = bandwidthMeter;
  this.repeatMode = repeatMode;
  this.shuffleModeEnabled = shuffleModeEnabled;
  this.seekParameters = seekParameters;
  this.livePlaybackSpeedControl = livePlaybackSpeedControl;
  this.releaseTimeoutMs = releaseTimeoutMs;
  this.setForegroundModeTimeoutMs = releaseTimeoutMs;
  this.pauseAtEndOfWindow = pauseAtEndOfWindow;
  this.clock = clock;
  backBufferDurationUs = loadControl.getBackBufferDurationUs();
  retainBackBufferFromKeyframe = loadControl.retainBackBufferFromKeyframe();
  playbackInfo = PlaybackInfo.createDummy(emptyTrackSelectorResult);
  playbackInfoUpdate = new PlaybackInfoUpdate(playbackInfo);
  rendererCapabilities = new RendererCapabilities[renderers.length];
  for (int i = 0; i < renderers.length; i++) {
    renderers[i].setIndex(i);
    rendererCapabilities[i] = renderers[i].getCapabilities();
  }
  mediaClock = new DefaultMediaClock(this, clock);
  pendingMessages = new ArrayList<>();
  window = new Timeline.Window();
  period = new Timeline.Period();
  trackSelector.init(/* listener= */ this, bandwidthMeter);
  deliverPendingMessageAtStartPositionRequired = true;
  Handler eventHandler = new Handler(applicationLooper);
  queue = new MediaPeriodQueue(analyticsCollector, eventHandler);
  mediaSourceList = new MediaSourceList(/* listener= */ this, analyticsCollector, eventHandler);
  // Note: The documentation for Process.THREAD_PRIORITY_AUDIO that states "Applications can
  // not normally change to this priority" is incorrect.
  internalPlaybackThread = new HandlerThread("ExoPlayer:Playback", Process.THREAD_PRIORITY_AUDIO);
  internalPlaybackThread.start();
  playbackLooper = internalPlaybackThread.getLooper();
  handler = clock.createHandler(playbackLooper, this);
}

Here LoadControl starts to play a part, and two key components get created: PlaybackInfo and DefaultMediaClock.

It also starts the internalPlaybackThread (a HandlerThread, which is a Thread),

and takes the playbackLooper (a Looper) and handler (a HandlerWrapper) from it.

Feels strangely familiar....

Surely ThreadLocal isn't about to make an appearance.....

So creating the player ultimately boils down to ExoPlayerImplInternal creating its internalPlaybackThread.
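
As an aside, the pattern in miniature (generic Android APIs, not ExoPlayer code):

// A dedicated playback thread with its own Looper, plus a Handler to post
// work onto it: the same shape as internalPlaybackThread above.
HandlerThread playbackThread =
    new HandlerThread("ExoPlayer:Playback", Process.THREAD_PRIORITY_AUDIO);
playbackThread.start();
Looper playbackLooper = playbackThread.getLooper();
Handler handler =
    new Handler(
        playbackLooper,
        msg -> {
          // Runs on the playback thread.
          return true;
        });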

3. Message passing after the player is created

The class implements a pile of Callbacks; the first is the familiar Handler flow:

// Handler.Callback implementation.
@Override
public boolean handleMessage(Message msg) {
  try {
    switch (msg.what) {
      case MSG_PREPARE:
        prepareInternal();
        break;
      case MSG_SET_PLAY_WHEN_READY:
        setPlayWhenReadyInternal(
            /* playWhenReady= */ msg.arg1 != 0,
            /* playbackSuppressionReason= */ msg.arg2,
            /* operationAck= */ true,
            Player.PLAY_WHEN_READY_CHANGE_REASON_USER_REQUEST);
        break;
      case MSG_SET_REPEAT_MODE:
        setRepeatModeInternal(msg.arg1);
        break;
      case MSG_SET_SHUFFLE_ENABLED:
        setShuffleModeEnabledInternal(msg.arg1 != 0);
        break;
      case MSG_DO_SOME_WORK:
        doSomeWork();
        break;
      case MSG_SEEK_TO:
        seekToInternal((SeekPosition) msg.obj);
        break;
      case MSG_SET_PLAYBACK_PARAMETERS:
        setPlaybackParametersInternal((PlaybackParameters) msg.obj);
        break;
      case MSG_SET_SEEK_PARAMETERS:
        setSeekParametersInternal((SeekParameters) msg.obj);
        break;
      case MSG_SET_FOREGROUND_MODE:
        setForegroundModeInternal(
            /* foregroundMode= */ msg.arg1 != 0, /* processedFlag= */ (AtomicBoolean) msg.obj);
        break;
      case MSG_STOP:
        stopInternal(/* forceResetRenderers= */ false, /* acknowledgeStop= */ true);
        break;
      case MSG_PERIOD_PREPARED:
        handlePeriodPrepared((MediaPeriod) msg.obj);
        break;
      case MSG_SOURCE_CONTINUE_LOADING_REQUESTED:
        handleContinueLoadingRequested((MediaPeriod) msg.obj);
        break;
      case MSG_TRACK_SELECTION_INVALIDATED:
        reselectTracksInternal();
        break;
      case MSG_PLAYBACK_PARAMETERS_CHANGED_INTERNAL:
        handlePlaybackParameters((PlaybackParameters) msg.obj, /* acknowledgeCommand= */ false);
        break;
      case MSG_SEND_MESSAGE:
        sendMessageInternal((PlayerMessage) msg.obj);
        break;
      case MSG_SEND_MESSAGE_TO_TARGET_THREAD:
        sendMessageToTargetThread((PlayerMessage) msg.obj);
        break;
      case MSG_SET_MEDIA_SOURCES:
        setMediaItemsInternal((MediaSourceListUpdateMessage) msg.obj);
        break;
      case MSG_ADD_MEDIA_SOURCES:
        addMediaItemsInternal((MediaSourceListUpdateMessage) msg.obj, msg.arg1);
        break;
      case MSG_MOVE_MEDIA_SOURCES:
        moveMediaItemsInternal((MoveMediaItemsMessage) msg.obj);
        break;
      case MSG_REMOVE_MEDIA_SOURCES:
        removeMediaItemsInternal(msg.arg1, msg.arg2, (ShuffleOrder) msg.obj);
        break;
      case MSG_SET_SHUFFLE_ORDER:
        setShuffleOrderInternal((ShuffleOrder) msg.obj);
        break;
      case MSG_PLAYLIST_UPDATE_REQUESTED:
        mediaSourceListUpdateRequestedInternal();
        break;
      case MSG_SET_PAUSE_AT_END_OF_WINDOW:
        setPauseAtEndOfWindowInternal(msg.arg1 != 0);
        break;
      case MSG_SET_OFFLOAD_SCHEDULING_ENABLED:
        setOffloadSchedulingEnabledInternal(msg.arg1 == 1);
        break;
      case MSG_ATTEMPT_ERROR_RECOVERY:
        attemptErrorRecovery((ExoPlaybackException) msg.obj);
        break;
      case MSG_RELEASE:
        releaseInternal();
        // Return immediately to not send playback info updates after release.
        return true;
      default:
        return false;
    }
    maybeNotifyPlaybackInfoChanged();
  } catch (ExoPlaybackException e) {
    if (e.type == ExoPlaybackException.TYPE_RENDERER) {
      @Nullable MediaPeriodHolder readingPeriod = queue.getReadingPeriod();
      if (readingPeriod != null) {
        // We can assume that all renderer errors happen in the context of the reading period. See
        // [internal: b/150584930#comment4] for exceptions that aren't covered by this assumption.
        e = e.copyWithMediaPeriodId(readingPeriod.info.id);
      }
    }
    if (e.isRecoverable && pendingRecoverableError == null) {
      Log.w(TAG, "Recoverable playback error", e);
      pendingRecoverableError = e;
      Message message = handler.obtainMessage(MSG_ATTEMPT_ERROR_RECOVERY, e);
      // Given that the player is now in an unhandled exception state, the error needs to be
      // recovered or the player stopped before any other message is handled.
      message.getTarget().sendMessageAtFrontOfQueue(message);
    } else {
      if (pendingRecoverableError != null) {
        e.addSuppressed(pendingRecoverableError);
        pendingRecoverableError = null;
      }
      Log.e(TAG, "Playback error", e);
      stopInternal(/* forceResetRenderers= */ true, /* acknowledgeStop= */ false);
      playbackInfo = playbackInfo.copyWithPlaybackError(e);
    }
    maybeNotifyPlaybackInfoChanged();
  } catch (IOException e) {
    ExoPlaybackException error = ExoPlaybackException.createForSource(e);
    @Nullable MediaPeriodHolder playingPeriod = queue.getPlayingPeriod();
    if (playingPeriod != null) {
      // We ensure that all IOException throwing methods are only executed for the playing period.
      error = error.copyWithMediaPeriodId(playingPeriod.info.id);
    }
    Log.e(TAG, "Playback error", error);
    stopInternal(/* forceResetRenderers= */ false, /* acknowledgeStop= */ false);
    playbackInfo = playbackInfo.copyWithPlaybackError(error);
    maybeNotifyPlaybackInfoChanged();
  } catch (RuntimeException e) {
    ExoPlaybackException error = ExoPlaybackException.createForUnexpected(e);
    Log.e(TAG, "Playback error", error);
    stopInternal(/* forceResetRenderers= */ true, /* acknowledgeStop= */ false);
    playbackInfo = playbackInfo.copyWithPlaybackError(error);
    maybeNotifyPlaybackInfoChanged();
  }
  return true;
}

That much logic inside the catch blocks? Google, you're not playing by the rules....

So it looks like every player operation arrives here as a message....
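
A hedged sketch of that pattern (MSG_PREPARE and prepareInternal are the names from the code above; the wiring is simplified):

// A public API call on the app thread...
public void prepare() {
  // ...just posts a message to the playback thread's Handler.
  handler.obtainMessage(MSG_PREPARE).sendToTarget();
}

// handleMessage(), running on the playback Looper, then dispatches:
//   case MSG_PREPARE: prepareInternal(); break;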

4. The Renderers that operate downward

Now for the Renderers that were passed downward earlier.

In the ExoPlayerImplInternal constructor, the renderers array is the very first parameter,

and, correspondingly, a rendererCapabilities array is generated from it:

rendererCapabilities = new RendererCapabilities[renderers.length];
for (int i = 0; i < renderers.length; i++) {
  renderers[i].setIndex(i);
  rendererCapabilities[i] = renderers[i].getCapabilities();
}

RendererCapabilities is an interface:

/** Defines the capabilities of a {@link Renderer}. */
public interface RendererCapabilities {}
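
A hedged sketch of querying those capabilities (supportsFormat can throw ExoPlaybackException; someFormat is a placeholder):

void dumpCapabilities(Renderer[] renderers, Format someFormat) throws ExoPlaybackException {
  for (Renderer renderer : renderers) {
    RendererCapabilities caps = renderer.getCapabilities();
    int trackType = caps.getTrackType();                // e.g. C.TRACK_TYPE_VIDEO
    int supportFlags = caps.supportsFormat(someFormat); // bitfield of support flags
  }
}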

Just as easily guessed, Renderer is an interface too:

/**
 * Renders media read from a {@link SampleStream}.
 *
 * <p>Internally, a renderer's lifecycle is managed by the owning {@link ExoPlayer}. The renderer
 * is transitioned through various states as the overall playback state and enabled tracks change.
 * The valid state transitions are shown below, annotated with the methods that are called during
 * each transition.
 *
 * <p style="align:center"><img src="doc-files/renderer-states.svg" alt="Renderer state
 * transitions">
 */
public interface Renderer extends PlayerMessage.Target {}

And what it extends is, of course, yet another interface....

/**
 * Defines a player message which can be sent with a {@link Sender} and received by a {@link
 * Target}.
 */
public final class PlayerMessage {

  /** A target for messages. */
  public interface Target {}
}

ExoPlayerImplInternal is where the renderers actually get called,

for example, started and stopped:

private void startRenderers() throws ExoPlaybackException {
  isRebuffering = false;
  mediaClock.start();
  for (Renderer renderer : renderers) {
    Logger.w(TAG, renderer, "enabled: " + isRendererEnabled(renderer)); // author's added log
    if (isRendererEnabled(renderer)) {
      renderer.start();
    }
  }
}

private void stopRenderers() throws ExoPlaybackException {
  mediaClock.stop();
  for (Renderer renderer : renderers) {
    if (isRendererEnabled(renderer)) {
      ensureStopped(renderer);
    }
  }
}

Yet, surprisingly, very little is done with rendererCapabilities....

Back to SimpleExoPlayer, to see how the Renderers were created:

renderers =
    builder.renderersFactory.createRenderers(
        eventHandler, componentListener, componentListener, componentListener, componentListener);

So many design patterns....

/** Builds {@link Renderer} instances for use by a {@link SimpleExoPlayer}. */
public interface RenderersFactory {

  /**
   * Builds the {@link Renderer} instances for a {@link SimpleExoPlayer}.
   *
   * @param eventHandler A handler to use when invoking event listeners and outputs.
   * @param videoRendererEventListener An event listener for video renderers.
   * @param audioRendererEventListener An event listener for audio renderers.
   * @param textRendererOutput An output for text renderers.
   * @param metadataRendererOutput An output for metadata renderers.
   * @return The {@link Renderer instances}.
   */
  Renderer[] createRenderers(
      Handler eventHandler,
      VideoRendererEventListener videoRendererEventListener,
      AudioRendererEventListener audioRendererEventListener,
      TextOutput textRendererOutput,
      MetadataOutput metadataRendererOutput);
}

This goes back to the very first SimpleExoPlayer.Builder call: nothing was passed in manually, so the default gets used:

/**
 * Creates a builder with a custom {@link RenderersFactory}.
 *
 * <p>See {@link #Builder(Context)} for a list of default values.
 *
 * @param context A {@link Context}.
 * @param renderersFactory A factory for creating {@link Renderer Renderers} to be used by the
 *     player.
 */
public Builder(Context context, RenderersFactory renderersFactory) {
  this(context, renderersFactory, new DefaultExtractorsFactory());
}

The default ExtractorsFactory:

/**
 * An {@link ExtractorsFactory} that provides an array of extractors for the following formats:
 *
 * <ul>
 *   <li>MP4, including M4A ({@link Mp4Extractor})
 *   <li>fMP4 ({@link FragmentedMp4Extractor})
 *   <li>Matroska and WebM ({@link MatroskaExtractor})
 *   <li>Ogg Vorbis/FLAC ({@link OggExtractor}
 *   <li>MP3 ({@link Mp3Extractor})
 *   <li>AAC ({@link AdtsExtractor})
 *   <li>MPEG TS ({@link TsExtractor})
 *   <li>MPEG PS ({@link PsExtractor})
 *   <li>FLV ({@link FlvExtractor})
 *   <li>WAV ({@link WavExtractor})
 *   <li>AC3 ({@link Ac3Extractor})
 *   <li>AC4 ({@link Ac4Extractor})
 *   <li>AMR ({@link AmrExtractor})
 *   <li>FLAC
 *       <ul>
 *         <li>If available, the FLAC extension's {@code
 *             com.google.android.exoplayer2.ext.flac.FlacExtractor} is used.
 *         <li>Otherwise, the core {@link FlacExtractor} is used. Note that Android devices do not
 *             generally include a FLAC decoder before API 27. This can be worked around by using
 *             the FLAC extension or the FFmpeg extension.
 *       </ul>
 *   <li>JPEG ({@link JpegExtractor})
 * </ul>
 */
public final class DefaultExtractorsFactory implements ExtractorsFactory {}

As the last installment said, an Extractor extracts the container format and track information; so evidently a different extractor is used for each format.

The extractor factory is, of course, another interface....

/** Factory for arrays of {@link Extractor} instances. */
public interface ExtractorsFactory {

  /**
   * Extractor factory that returns an empty list of extractors. Can be used whenever {@link
   * Extractor Extractors} are not required.
   */
  ExtractorsFactory EMPTY = () -> new Extractor[] {};

  /** Returns an array of new {@link Extractor} instances. */
  Extractor[] createExtractors();

  /**
   * Returns an array of new {@link Extractor} instances.
   *
   * @param uri The {@link Uri} of the media to extract.
   * @param responseHeaders The response headers of the media to extract, or an empty map if there
   *     are none. The map lookup should be case-insensitive.
   * @return The {@link Extractor} instances.
   */
  default Extractor[] createExtractors(Uri uri, Map<String, List<String>> responseHeaders) {
    return createExtractors();
  }
}

Wrong lead; we were after Renderers....

Back to SimpleExoPlayer's creation of the RenderersFactory:

/**
 * Creates a builder.
 *
 * <p>Use {@link #Builder(Context, RenderersFactory)}, {@link #Builder(Context,
 * RenderersFactory)} or {@link #Builder(Context, RenderersFactory, ExtractorsFactory)} instead,
 * if you intend to provide a custom {@link RenderersFactory} or a custom {@link
 * ExtractorsFactory}. This is to ensure that ProGuard or R8 can remove ExoPlayer's {@link
 * DefaultRenderersFactory} and {@link DefaultExtractorsFactory} from the APK.
 *
 * <p>The builder uses the following default values:
 *
 * <ul>
 *   <li>{@link RenderersFactory}: {@link DefaultRenderersFactory}
 *   <li>{@link TrackSelector}: {@link DefaultTrackSelector}
 *   <li>{@link MediaSourceFactory}: {@link DefaultMediaSourceFactory}
 *   <li>{@link LoadControl}: {@link DefaultLoadControl}
 *   <li>{@link BandwidthMeter}: {@link DefaultBandwidthMeter#getSingletonInstance(Context)}
 *   <li>{@link LivePlaybackSpeedControl}: {@link DefaultLivePlaybackSpeedControl}
 *   <li>{@link Looper}: The {@link Looper} associated with the current thread, or the {@link
 *       Looper} of the application's main thread if the current thread doesn't have a {@link
 *       Looper}
 *   <li>{@link AnalyticsCollector}: {@link AnalyticsCollector} with {@link Clock#DEFAULT}
 *   <li>{@link PriorityTaskManager}: {@code null} (not used)
 *   <li>{@link AudioAttributes}: {@link AudioAttributes#DEFAULT}, not handling audio focus
 *   <li>{@link C.WakeMode}: {@link C#WAKE_MODE_NONE}
 *   <li>{@code handleAudioBecomingNoisy}: {@code false}
 *   <li>{@code skipSilenceEnabled}: {@code false}
 *   <li>{@link C.VideoScalingMode}: {@link C#VIDEO_SCALING_MODE_DEFAULT}
 *   <li>{@code useLazyPreparation}: {@code true}
 *   <li>{@link SeekParameters}: {@link SeekParameters#DEFAULT}
 *   <li>{@code releaseTimeoutMs}: {@link ExoPlayer#DEFAULT_RELEASE_TIMEOUT_MS}
 *   <li>{@code detachSurfaceTimeoutMs}: {@link #DEFAULT_DETACH_SURFACE_TIMEOUT_MS}
 *   <li>{@code pauseAtEndOfMediaItems}: {@code false}
 *   <li>{@link Clock}: {@link Clock#DEFAULT}
 * </ul>
 *
 * @param context A {@link Context}.
 */
public Builder(Context context) {
  this(context, new DefaultRenderersFactory(context), new DefaultExtractorsFactory());
}

Also a default implementation:

/** Default {@link RenderersFactory} implementation. */
public class DefaultRenderersFactory implements RenderersFactory {}

It implements the raw factory interface, RenderersFactory, quoted earlier.


Inside DefaultRenderersFactory there's a familiar face:

/**
 * Modes for using extension renderers. One of {@link #EXTENSION_RENDERER_MODE_OFF}, {@link
 * #EXTENSION_RENDERER_MODE_ON} or {@link #EXTENSION_RENDERER_MODE_PREFER}.
 */
@Documented
@Retention(RetentionPolicy.SOURCE)
@IntDef({EXTENSION_RENDERER_MODE_OFF, EXTENSION_RENDERER_MODE_ON, EXTENSION_RENDERER_MODE_PREFER})
public @interface ExtensionRendererMode {}

This fake enum (an @IntDef) controls which decoding libraries get used.
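
For example (setExtensionRendererMode is a real DefaultRenderersFactory method):

// Prefer extension decoders (FFmpeg, VP9, ... if bundled into the app) over
// the platform MediaCodec-based ones.
DefaultRenderersFactory renderersFactory =
    new DefaultRenderersFactory(context)
        .setExtensionRendererMode(DefaultRenderersFactory.EXTENSION_RENDERER_MODE_PREFER);
SimpleExoPlayer player =
    new SimpleExoPlayer.Builder(context, renderersFactory).build();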

And then the new-ing starts again:

/** @param context A {@link Context}. */
public DefaultRenderersFactory(Context context) {
  this.context = context;
  // Extension renderers are off by default.
  extensionRendererMode = EXTENSION_RENDERER_MODE_OFF;
  // Default maximum duration for which a video renderer can attempt to seamlessly join an
  // ongoing playback.
  allowedVideoJoiningTimeMs = DEFAULT_ALLOWED_VIDEO_JOINING_TIME_MS;
  // Default decoder selector.
  mediaCodecSelector = MediaCodecSelector.DEFAULT;
}

No surprise: the decoder selector is yet another interface:

/** Selector of {@link MediaCodec} instances. */
public interface MediaCodecSelector {

  /**
   * Default implementation of {@link MediaCodecSelector}, which returns the preferred decoder for
   * the given format.
   */
  MediaCodecSelector DEFAULT = MediaCodecUtil::getDecoderInfos;

  /**
   * Returns a list of decoders that can decode media in the specified MIME type, in priority
   * order.
   *
   * @param mimeType The MIME type for which a decoder is required.
   * @param requiresSecureDecoder Whether a secure decoder is required.
   * @param requiresTunnelingDecoder Whether a tunneling decoder is required.
   * @return An unmodifiable list of {@link MediaCodecInfo}s corresponding to decoders. May be
   *     empty.
   * @throws DecoderQueryException Thrown if there was an error querying decoders.
   */
  List<MediaCodecInfo> getDecoderInfos(
      String mimeType, boolean requiresSecureDecoder, boolean requiresTunnelingDecoder)
      throws DecoderQueryException;
}

MediaCodecUtil supplies the default implementation:

/**
 * Returns all {@link MediaCodecInfo}s for the given mime type, in the order given by {@link
 * MediaCodecList}.
 *
 * @param mimeType The MIME type.
 * @param secure Whether the decoder is required to support secure decryption. Always pass false
 *     unless secure decryption really is required.
 * @param tunneling Whether the decoder is required to support tunneling. Always pass false unless
 *     tunneling really is required.
 * @return An unmodifiable list of all {@link MediaCodecInfo}s for the given mime type, in the
 *     order given by {@link MediaCodecList}.
 * @throws DecoderQueryException If there was an error querying the available decoders.
 */
public static synchronized List<MediaCodecInfo> getDecoderInfos(
    String mimeType, boolean secure, boolean tunneling) throws DecoderQueryException {
  CodecKey key = new CodecKey(mimeType, secure, tunneling);
  @Nullable List<MediaCodecInfo> cachedDecoderInfos = decoderInfosCache.get(key);
  if (cachedDecoderInfos != null) {
    return cachedDecoderInfos;
  }
  MediaCodecListCompat mediaCodecList =
      Util.SDK_INT >= 21
          ? new MediaCodecListCompatV21(secure, tunneling)
          : new MediaCodecListCompatV16();
  ArrayList<MediaCodecInfo> decoderInfos = getDecoderInfosInternal(key, mediaCodecList);
  if (secure && decoderInfos.isEmpty() && 21 <= Util.SDK_INT && Util.SDK_INT <= 23) {
    // Some devices don't list secure decoders on API level 21 [Internal: b/18678462]. Try the
    // legacy path. We also try this path on API levels 22 and 23 as a defensive measure.
    mediaCodecList = new MediaCodecListCompatV16();
    decoderInfos = getDecoderInfosInternal(key, mediaCodecList);
    if (!decoderInfos.isEmpty()) {
      Log.w(
          TAG,
          "MediaCodecList API didn't list secure decoder for: "
              + mimeType
              + ". Assuming: "
              + decoderInfos.get(0).name);
    }
  }
  applyWorkarounds(mimeType, decoderInfos);
  List<MediaCodecInfo> unmodifiableDecoderInfos = Collections.unmodifiableList(decoderInfos);
  decoderInfosCache.put(key, unmodifiableDecoderInfos);
  return unmodifiableDecoderInfos;
}

....

In short: during SimpleExoPlayer.Builder, a DefaultRenderersFactory gets passed in; when that factory is created it picks up the default decoder selector, and the selector returns the matching decoders for a given MIME type.
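
A hedged sketch of a custom selector (the decoder name being filtered out is hypothetical):

MediaCodecSelector selector =
    (mimeType, requiresSecureDecoder, requiresTunnelingDecoder) -> {
      List<MediaCodecInfo> infos =
          MediaCodecUtil.getDecoderInfos(
              mimeType, requiresSecureDecoder, requiresTunnelingDecoder);
      List<MediaCodecInfo> filtered = new ArrayList<>();
      for (MediaCodecInfo info : infos) {
        if (!info.name.startsWith("OMX.broken.vendor")) { // hypothetical decoder name
          filtered.add(info);
        }
      }
      return filtered;
    };
// Plugged in via DefaultRenderersFactory.setMediaCodecSelector(selector).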

Then back in SimpleExoPlayer: after the default renderers factory is created, and just before new ExoPlayerImpl(), the renderers are created with that default factory, builder.renderersFactory:

renderers =
    builder.renderersFactory.createRenderers(
        eventHandler, componentListener, componentListener, componentListener, componentListener);

which internally is just DefaultRenderersFactory.createRenderers().

The parameters passed here are interesting....

The first one:

Handler eventHandler = new Handler(builder.looper);

The next four are all one and the same object:

componentListener = new ComponentListener();

even though what's needed isn't a single type:

/**
 * Builds the {@link Renderer} instances for a {@link SimpleExoPlayer}.
 *
 * @param eventHandler A handler to use when invoking event listeners and outputs.
 * @param videoRendererEventListener An event listener for video renderers.
 * @param audioRendererEventListener An event listener for audio renderers.
 * @param textRendererOutput An output for text renderers.
 * @param metadataRendererOutput An output for metadata renderers.
 * @return The {@link Renderer instances}.
 */
Renderer[] createRenderers(
    Handler eventHandler,
    VideoRendererEventListener videoRendererEventListener,
    AudioRendererEventListener audioRendererEventListener,
    TextOutput textRendererOutput,
    MetadataOutput metadataRendererOutput);

componentListener simply implements all of them:

private final class ComponentListener
    implements VideoRendererEventListener,
        AudioRendererEventListener,
        TextOutput,
        MetadataOutput,
        SurfaceHolder.Callback,
        TextureView.SurfaceTextureListener,
        AudioFocusManager.PlayerControl,
        AudioBecomingNoisyManager.EventListener,
        StreamVolumeManager.Listener,
        Player.EventListener {}
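
The pattern in miniature, assuming (as in recent ExoPlayer versions) the renderer listener interfaces provide default no-op methods so only the interesting callbacks need overriding:

// One private object implements every callback interface, so the outer class
// can hand the same reference to every parameter slot.
final class MuxedListener
    implements VideoRendererEventListener, AudioRendererEventListener, TextOutput, MetadataOutput {
  @Override
  public void onVideoEnabled(DecoderCounters counters) { /* video renderer came up */ }

  @Override
  public void onAudioEnabled(DecoderCounters counters) { /* audio renderer came up */ }

  @Override
  public void onCues(List<Cue> cues) { /* subtitle cues */ }

  @Override
  public void onMetadata(Metadata metadata) { /* e.g. ID3 frames */ }
}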

Back to the call, where DefaultRenderersFactory creates the Renderer[]:

@Override
public Renderer[] createRenderers(
    Handler eventHandler,
    VideoRendererEventListener videoRendererEventListener,
    AudioRendererEventListener audioRendererEventListener,
    TextOutput textRendererOutput,
    MetadataOutput metadataRendererOutput) {
  ArrayList<Renderer> renderersList = new ArrayList<>();
  buildVideoRenderers(
      context,
      extensionRendererMode,
      mediaCodecSelector,
      enableDecoderFallback,
      eventHandler,
      videoRendererEventListener,
      allowedVideoJoiningTimeMs,
      renderersList);
  @Nullable
  AudioSink audioSink =
      buildAudioSink(context, enableFloatOutput, enableAudioTrackPlaybackParams, enableOffload);
  if (audioSink != null) {
    buildAudioRenderers(
        context,
        extensionRendererMode,
        mediaCodecSelector,
        enableDecoderFallback,
        audioSink,
        eventHandler,
        audioRendererEventListener,
        renderersList);
  }
  buildTextRenderers(
      context, textRendererOutput, eventHandler.getLooper(), extensionRendererMode, renderersList);
  buildMetadataRenderers(
      context,
      metadataRendererOutput,
      eventHandler.getLooper(),
      extensionRendererMode,
      renderersList);
  buildCameraMotionRenderers(context, extensionRendererMode, renderersList);
  buildMiscellaneousRenderers(context, eventHandler, extensionRendererMode, renderersList);
  return renderersList.toArray(new Renderer[0]);
}

More familiar new faces.

This is where the Renderers actually start being created.

4.1) Creating the video Renderers

First, buildVideoRenderers creates the video renderers:

/**
 * Builds video renderers for use by the player.
 *
 * @param context The {@link Context} associated with the player.
 * @param extensionRendererMode The extension renderer mode.
 * @param mediaCodecSelector A decoder selector.
 * @param enableDecoderFallback Whether to enable fallback to lower-priority decoders if decoder
 *     initialization fails. This may result in using a decoder that is slower/less efficient than
 *     the primary decoder.
 * @param eventHandler A handler associated with the main thread's looper.
 * @param eventListener An event listener.
 * @param allowedVideoJoiningTimeMs The maximum duration for which video renderers can attempt to
 *     seamlessly join an ongoing playback, in milliseconds.
 * @param out An array to which the built renderers should be appended.
 */
protected void buildVideoRenderers(
    Context context,
    @ExtensionRendererMode int extensionRendererMode,
    MediaCodecSelector mediaCodecSelector,
    boolean enableDecoderFallback,
    Handler eventHandler,
    VideoRendererEventListener eventListener,
    long allowedVideoJoiningTimeMs,
    ArrayList<Renderer> out) {
  MediaCodecVideoRenderer videoRenderer =
      new MediaCodecVideoRenderer(
          context,
          mediaCodecSelector,
          allowedVideoJoiningTimeMs,
          enableDecoderFallback,
          eventHandler,
          eventListener,
          MAX_DROPPED_VIDEO_FRAME_COUNT_TO_NOTIFY);
  videoRenderer.experimentalSetAsynchronousBufferQueueingEnabled(enableAsyncQueueing);
  videoRenderer.experimentalSetForceAsyncQueueingSynchronizationWorkaround(
      forceAsyncQueueingSynchronizationWorkaround);
  videoRenderer.experimentalSetSynchronizeCodecInteractionsWithQueueingEnabled(
      enableSynchronizeCodecInteractionsWithQueueing);
  out.add(videoRenderer);

  if (extensionRendererMode == EXTENSION_RENDERER_MODE_OFF) {
    return;
  }
  int extensionRendererIndex = out.size();
  if (extensionRendererMode == EXTENSION_RENDERER_MODE_PREFER) {
    extensionRendererIndex--;
  }

  try {
    // Full class names used for constructor args so the LINT rule triggers if any of them move.
    // LINT.IfChange
    Class<?> clazz = Class.forName("com.google.android.exoplayer2.ext.vp9.LibvpxVideoRenderer");
    Constructor<?> constructor =
        clazz.getConstructor(
            long.class,
            android.os.Handler.class,
            com.google.android.exoplayer2.video.VideoRendererEventListener.class,
            int.class);
    // LINT.ThenChange(../../../../../../../proguard-rules.txt)
    Renderer renderer =
        (Renderer)
            constructor.newInstance(
                allowedVideoJoiningTimeMs,
                eventHandler,
                eventListener,
                MAX_DROPPED_VIDEO_FRAME_COUNT_TO_NOTIFY);
    out.add(extensionRendererIndex++, renderer);
    Log.i(TAG, "Loaded LibvpxVideoRenderer.");
  } catch (ClassNotFoundException e) {
    // Expected if the app was built without the extension.
  } catch (Exception e) {
    // The extension is present, but instantiation failed.
    throw new RuntimeException("Error instantiating VP9 extension", e);
  }

  try {
    // Full class names used for constructor args so the LINT rule triggers if any of them move.
    // LINT.IfChange
    Class<?> clazz = Class.forName("com.google.android.exoplayer2.ext.av1.Libgav1VideoRenderer");
    Constructor<?> constructor =
        clazz.getConstructor(
            long.class,
            android.os.Handler.class,
            com.google.android.exoplayer2.video.VideoRendererEventListener.class,
            int.class);
    // LINT.ThenChange(../../../../../../../proguard-rules.txt)
    Renderer renderer =
        (Renderer)
            constructor.newInstance(
                allowedVideoJoiningTimeMs,
                eventHandler,
                eventListener,
                MAX_DROPPED_VIDEO_FRAME_COUNT_TO_NOTIFY);
    out.add(extensionRendererIndex++, renderer);
    Log.i(TAG, "Loaded Libgav1VideoRenderer.");
  } catch (ClassNotFoundException e) {
    // Expected if the app was built without the extension.
  } catch (Exception e) {
    // The extension is present, but instantiation failed.
    throw new RuntimeException("Error instantiating AV1 extension", e);
  }
}

For video, a MediaCodecVideoRenderer gets added first.

Then it checks whether extension decoders (ffmpeg and the like) are enabled, and keeps adding:

if (extensionRendererMode == EXTENSION_RENDERER_MODE_OFF) {
  return;
}
int extensionRendererIndex = out.size(); // only one renderer added so far, so this is 1
if (extensionRendererMode == EXTENSION_RENDERER_MODE_PREFER) {
  extensionRendererIndex--; // prefer-extension configured: becomes 0
}

Renderers added later are inserted ahead of the default decoder, accumulating in order:

try {
  // Full class names used for constructor args so the LINT rule triggers if any of them move.
  // LINT.IfChange
  Class<?> clazz = Class.forName("com.google.android.exoplayer2.ext.vp9.LibvpxVideoRenderer");
  Constructor<?> constructor =
      clazz.getConstructor(
          long.class,
          android.os.Handler.class,
          com.google.android.exoplayer2.video.VideoRendererEventListener.class,
          int.class);
  // LINT.ThenChange(../../../../../../../proguard-rules.txt)
  Renderer renderer =
      (Renderer)
          constructor.newInstance(
              allowedVideoJoiningTimeMs,
              eventHandler,
              eventListener,
              MAX_DROPPED_VIDEO_FRAME_COUNT_TO_NOTIFY);
  out.add(extensionRendererIndex++, renderer);
  Log.i(TAG, "Loaded LibvpxVideoRenderer.");
} catch (ClassNotFoundException e) {
  // Expected if the app was built without the extension.
} catch (Exception e) {
  // The extension is present, but instantiation failed.
  throw new RuntimeException("Error instantiating VP9 extension", e);
}

And that's the video Renderers done.

4.2) Creating the audio Renderers

First, an AudioSink is created:

@Nullable
AudioSink audioSink =
    buildAudioSink(context, enableFloatOutput, enableAudioTrackPlaybackParams, enableOffload);

The method, in the same class:

/**
 * Builds an {@link AudioSink} to which the audio renderers will output.
 *
 * @param context The {@link Context} associated with the player.
 * @param enableFloatOutput Whether to enable use of floating point audio output, if available.
 * @param enableAudioTrackPlaybackParams Whether to enable setting playback speed using {@link
 *     android.media.AudioTrack#setPlaybackParams(PlaybackParams)}, if supported.
 * @param enableOffload Whether to enable use of audio offload for supported formats, if
 *     available. (Audio offload: see the table in the previous post.)
 * @return The {@link AudioSink} to which the audio renderers will output. May be {@code null} if
 *     no audio renderers are required. If {@code null} is returned then {@link
 *     #buildAudioRenderers} will not be called.
 */
@Nullable
protected AudioSink buildAudioSink(
    Context context,
    boolean enableFloatOutput,
    boolean enableAudioTrackPlaybackParams,
    boolean enableOffload) {
  return new DefaultAudioSink(
      AudioCapabilities.getCapabilities(context),
      new DefaultAudioProcessorChain(),
      enableFloatOutput,
      enableAudioTrackPlaybackParams,
      enableOffload);
}

Doesn't look simple at all....

Back from yard time; resuming the slacking.

First up, AudioSink. To nobody's surprise, it's another interface.

Note: that's sink, not silk.

/**
 * A sink that consumes audio data.
 *
 * <p>Before starting playback, specify the input audio format by calling {@link #configure(Format,
 * int, int[])}.
 *
 * <p>Call {@link #handleBuffer(ByteBuffer, long, int)} to write data, and {@link
 * #handleDiscontinuity()} when the data being fed is discontinuous. Call {@link #play()} to start
 * playing the written data.
 *
 * <p>Call {@link #configure(Format, int, int[])} whenever the input format changes. The sink will
 * be reinitialized on the next call to {@link #handleBuffer(ByteBuffer, long, int)}.
 *
 * <p>Call {@link #flush()} to prepare the sink to receive audio data from a new playback position.
 *
 * <p>Call {@link #playToEndOfStream()} repeatedly to play out all data when no more input buffers
 * will be provided via {@link #handleBuffer(ByteBuffer, long, int)} until the next {@link
 * #flush()}. Call {@link #reset()} when the instance is no longer required.
 *
 * <p>The implementation may be backed by a platform {@link AudioTrack}. In this case, {@link
 * #setAudioSessionId(int)}, {@link #setAudioAttributes(AudioAttributes)}, {@link
 * #enableTunnelingV21()} and {@link #disableTunneling()} may be called before writing data to the
 * sink. These methods may also be called after writing data to the sink, in which case it will be
 * reinitialized as required. For implementations that are not based on platform {@link
 * AudioTrack}s, calling methods relating to audio sessions, audio attributes, and tunneling may
 * have no effect.
 */
public interface AudioSink {}

The first sentence states its position plainly: a sink that consumes audio data.

The enum inside it will probably come up:

/**
 * The level of support the sink provides for a format. One of {@link
 * #SINK_FORMAT_SUPPORTED_DIRECTLY}, {@link #SINK_FORMAT_SUPPORTED_WITH_TRANSCODING} or {@link
 * #SINK_FORMAT_UNSUPPORTED}.
 */
@Documented
@Retention(RetentionPolicy.SOURCE)
@IntDef({
  SINK_FORMAT_SUPPORTED_DIRECTLY,
  SINK_FORMAT_SUPPORTED_WITH_TRANSCODING,
  SINK_FORMAT_UNSUPPORTED
})
@interface SinkFormatSupport {}

/** The sink supports the format directly, without the need for internal transcoding. */
int SINK_FORMAT_SUPPORTED_DIRECTLY = 2;

/**
 * The sink supports the format, but needs to transcode it internally to do so. Internal
 * transcoding may result in lower quality and higher CPU load in some cases.
 */
int SINK_FORMAT_SUPPORTED_WITH_TRANSCODING = 1;

/** The sink does not support the format. */
int SINK_FORMAT_UNSUPPORTED = 0;

It can also report the audio formats it supports:

/**
 * Returns whether the sink supports a given {@link Format}.
 *
 * @param format The format.
 * @return Whether the sink supports the format.
 */
boolean supportsFormat(Format format);

/**
 * Returns the level of support that the sink provides for a given {@link Format}.
 *
 * @param format The format.
 * @return The level of support provided.
 */
@SinkFormatSupport
int getFormatSupport(Format format);

Likewise, the basic audio operations:

/** Starts or resumes consuming audio if initialized. */
void play();

/**
 * Sets the playback volume.
 *
 * @param volume A volume in the range [0.0, 1.0].
 */
void setVolume(float volume);

/** Pauses playback. */
void pause();

/**
 * Flushes the sink, after which it is ready to receive buffers from a new playback position.
 *
 * <p>The audio session may remain active until {@link #reset()} is called.
 */
void flush();

/** Resets the renderer, releasing any resources that it currently holds. */
void reset();

Plus a few familiar-looking operations:

/** Sets the auxiliary effect. */
void setAuxEffectInfo(AuxEffectInfo auxEffectInfo);

/**
 * Enables tunneling, if possible. The sink is reset if tunneling was previously disabled.
 * Enabling tunneling is only possible if the sink is based on a platform {@link AudioTrack}, and
 * requires platform API version 21 onwards.
 *
 * @throws IllegalStateException Thrown if enabling tunneling on platform API version &lt; 21.
 */
void enableTunnelingV21();
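
Putting the interface together, a hedged sketch of the call order the javadoc describes (buffer handling and error handling are heavily simplified; checked exceptions are just propagated):

void pushAudio(AudioSink sink, Format inputFormat, ByteBuffer buffer, long presentationTimeUs)
    throws Exception {
  sink.configure(inputFormat, /* specifiedBufferSize= */ 0, /* outputChannels= */ null);
  sink.play();
  // handleBuffer returns whether the full buffer was consumed; real code waits
  // and retries instead of spinning.
  while (!sink.handleBuffer(buffer, presentationTimeUs, /* encodedAccessUnitCount= */ 1)) {}
  sink.playToEndOfStream();
  sink.reset(); // when the sink is no longer needed
}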

Now for the default implementation....

The value returned at creation time:

return new DefaultAudioSink(
    AudioCapabilities.getCapabilities(context),
    new DefaultAudioProcessorChain(),
    enableFloatOutput,
    enableAudioTrackPlaybackParams,
    enableOffload);

Two notable objects are passed in: an AudioCapabilities and a DefaultAudioProcessorChain.

4.2.1) Creating AudioCapabilities

The first is an old friend from the passthrough issues on the tracker:

/** Represents the set of audio formats that a device is capable of playing. */
public final class AudioCapabilities {}

The call that fetches the audio formats the current device supports:

/**
 * Returns the current audio capabilities for the device.
 *
 * @param context A context for obtaining the current audio capabilities.
 * @return The current audio capabilities for the device.
 */
@SuppressWarnings("InlinedApi")
public static AudioCapabilities getCapabilities(Context context) {
  Intent intent =
      context.registerReceiver(
          /* receiver= */ null, new IntentFilter(AudioManager.ACTION_HDMI_AUDIO_PLUG));
  return getCapabilities(context, intent);
}

Hold on. It registers a broadcast receiver???? Google sure knows how to have fun....

And now we're inside the Android SDK....

/**
 * Broadcast Action: A sticky broadcast indicating an HDMI cable was plugged or unplugged.
 *
 * The intent will have the following extra values: {@link #EXTRA_AUDIO_PLUG_STATE},
 * {@link #EXTRA_MAX_CHANNEL_COUNT}, {@link #EXTRA_ENCODINGS}.
 * <p>It can only be received by explicitly registering for it with
 * {@link Context#registerReceiver(BroadcastReceiver, IntentFilter)}.
 */
@SdkConstant(SdkConstantType.BROADCAST_INTENT_ACTION)
public static final String ACTION_HDMI_AUDIO_PLUG =
        "android.media.action.HDMI_AUDIO_PLUG";

In other words, HDMI plugging and unplugging can be listened to....
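
For completeness, a sketch using plain Android APIs: because the broadcast is sticky, the registerReceiver(null, filter) call above returns the last broadcast Intent without registering anything; to observe plug/unplug updates, register a real receiver:

IntentFilter filter = new IntentFilter(AudioManager.ACTION_HDMI_AUDIO_PLUG);
BroadcastReceiver receiver =
    new BroadcastReceiver() {
      @Override
      public void onReceive(Context context, Intent intent) {
        int plugState = intent.getIntExtra(AudioManager.EXTRA_AUDIO_PLUG_STATE, 0);
        int[] encodings = intent.getIntArrayExtra(AudioManager.EXTRA_ENCODINGS);
        int maxChannels = intent.getIntExtra(AudioManager.EXTRA_MAX_CHANNEL_COUNT, 2);
        // A player could re-query AudioCapabilities here.
      }
    };
context.registerReceiver(receiver, filter);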

Then onward:

@SuppressLint("InlinedApi") /* package */ static AudioCapabilities getCapabilities(Context context, @Nullable Intent intent) { if (deviceMaySetExternalSurroundSoundGlobalSetting() && Global.getInt(context.getContentResolver(), EXTERNAL_SURROUND_SOUND_KEY, 0) == 1) { Logger.w("AudioCapabilities.getCapabilities 第一个return"); return EXTERNAL_SURROUND_SOUND_CAPABILITIES; } if (intent == null || intent.getIntExtra(AudioManager.EXTRA_AUDIO_PLUG_STATE, 0) == 0) { Logger.w("AudioCapabilities.getCapabilities 第二个return"); return DEFAULT_AUDIO_CAPABILITIES; } Logger.w("AudioCapabilities.getCapabilities 第三个return"); return new AudioCapabilities( intent.getIntArrayExtra(AudioManager.EXTRA_ENCODINGS), intent.getIntExtra( AudioManager.EXTRA_MAX_CHANNEL_COUNT, /* defaultValue= */ DEFAULT_MAX_CHANNEL_COUNT)); }

The first check:

/**
 * Returns the global settings {@link Uri} used by the device to specify external surround sound,
 * or null if the device does not support this functionality.
 */
@Nullable
/* package */ static Uri getExternalSurroundSoundGlobalSettingUri() {
  return deviceMaySetExternalSurroundSoundGlobalSetting()
      ? Global.getUriFor(EXTERNAL_SURROUND_SOUND_KEY)
      : null;
}

private static boolean deviceMaySetExternalSurroundSoundGlobalSetting() {
  return Util.SDK_INT >= 17
      && ("Amazon".equals(Util.MANUFACTURER) || "Xiaomi".equals(Util.MANUFACTURER));
}

/**
 * Like {@link Build#MANUFACTURER}, but in a place where it can be conveniently overridden for
 * local testing.
 */
public static final String MANUFACTURER = Build.MANUFACTURER;

So: SDK_INT >= 17 (Android 4.2) and a board made by Amazon or Xiaomi, in which case it calls:

/**
 * Construct the content URI for a particular name/value pair,
 * useful for monitoring changes with a ContentObserver.
 * @param name to look up in the table
 * @return the corresponding content URI, or null if not present
 */
public static Uri getUriFor(String name) {
    return getUriFor(CONTENT_URI, name);
}

/** Global settings key for devices that can specify external surround sound. */
private static final String EXTERNAL_SURROUND_SOUND_KEY = "external_surround_sound_enabled";

Otherwise it's null, meaning the device has no global setting for external surround sound.

Money really is the first productive force....

So getCapabilities() tracks the actual system configuration: the first return handles Amazon or Xiaomi devices with external surround sound configured; the second return handles the minimum configuration every device supports; anything left news up a fresh instance....

What the first two returns hand back:

private static final int DEFAULT_MAX_CHANNEL_COUNT = 8;

/** The minimum audio capabilities supported by all devices. */
public static final AudioCapabilities DEFAULT_AUDIO_CAPABILITIES =
    new AudioCapabilities(new int[] {AudioFormat.ENCODING_PCM_16BIT}, DEFAULT_MAX_CHANNEL_COUNT);

/** Audio capabilities when the device specifies external surround sound. */
private static final AudioCapabilities EXTERNAL_SURROUND_SOUND_CAPABILITIES =
    new AudioCapabilities(
        new int[] {
          AudioFormat.ENCODING_PCM_16BIT, AudioFormat.ENCODING_AC3, AudioFormat.ENCODING_E_AC3
        },
        DEFAULT_MAX_CHANNEL_COUNT);

Time to try some real devices.

A HiMedia Q5 Plus on default settings playing AC3 took the third return;

a MINIX S922X on default settings playing AC3 also took the third return;

and then the AudioCapabilities `new` begins:

/**
 * Constructs new audio capabilities based on a set of supported encodings and a maximum channel
 * count.
 *
 * <p>Applications should generally call {@link #getCapabilities(Context)} to obtain an instance
 * based on the capabilities advertised by the platform, rather than calling this constructor.
 *
 * @param supportedEncodings Supported audio encodings from {@link android.media.AudioFormat}'s
 *     {@code ENCODING_*} constants. Passing {@code null} indicates that no encodings are
 *     supported.
 * @param maxChannelCount The maximum number of audio channels that can be played simultaneously.
 */
public AudioCapabilities(@Nullable int[] supportedEncodings, int maxChannelCount) {
  Logger.w("AudioCapabilities", Arrays.toString(supportedEncodings), maxChannelCount); // author's added log
  if (supportedEncodings != null) {
    this.supportedEncodings = Arrays.copyOf(supportedEncodings, supportedEncodings.length);
    Arrays.sort(this.supportedEncodings);
  } else {
    this.supportedEncodings = new int[0];
  }
  this.maxChannelCount = maxChannelCount;
}

The log even prints multiple times.

Output on the MINIX S922X:

2021-05-17 16:59:28.999 20672-20672/com.google.android.exoplayer2.demo W/TEST:     0:AudioCapabilities
        1:[2]
        2:8
2021-05-17 16:59:28.999 20672-20672/com.google.android.exoplayer2.demo W/TEST:     0:AudioCapabilities
        1:[2, 5, 6]
        2:8
2021-05-17 16:59:29.000 20672-20672/com.google.android.exoplayer2.demo W/TEST: AudioCapabilities.getCapabilities third return
2021-05-17 16:59:29.000 20672-20672/com.google.android.exoplayer2.demo W/TEST:     0:AudioCapabilities
        1:[2, 5, 6, 7, 13]
        2:8

Output on the HiMedia Q5 Plus:

2021-05-17 17:04:21.392 7649-7649/com.google.android.exoplayer2.demo W/TEST:     0:AudioCapabilities
        1:[2]
        2:8
2021-05-17 17:04:21.392 7649-7649/com.google.android.exoplayer2.demo W/TEST:     0:AudioCapabilities
        1:[2, 5, 6]
        2:8
2021-05-17 17:04:21.393 7649-7649/com.google.android.exoplayer2.demo W/TEST: AudioCapabilities.getCapabilities third return
2021-05-17 17:04:21.393 7649-7649/com.google.android.exoplayer2.demo W/TEST:     0:AudioCapabilities
        1:[2, 5]
        2:6

Both boxes were hooked up to nothing but a monitor with no audio....

The values here come from the Android SDK's AudioFormat class:

/** Invalid audio data format */
public static final int ENCODING_INVALID = 0;
/** Default audio data format */
public static final int ENCODING_DEFAULT = 1;

// These values must be kept in sync with core/jni/android_media_AudioFormat.h
// Also sync av/services/audiopolicy/managerdefault/ConfigParsingUtils.h
/** Audio data format: PCM 16 bit per sample. Guaranteed to be supported by devices. */
public static final int ENCODING_PCM_16BIT = 2;
/** Audio data format: PCM 8 bit per sample. Not guaranteed to be supported by devices. */
public static final int ENCODING_PCM_8BIT = 3;
/** Audio data format: single-precision floating-point per sample */
public static final int ENCODING_PCM_FLOAT = 4;
/** Audio data format: AC-3 compressed, also known as Dolby Digital */
public static final int ENCODING_AC3 = 5;
/** Audio data format: E-AC-3 compressed, also known as Dolby Digital Plus or DD+ */
public static final int ENCODING_E_AC3 = 6;
/** Audio data format: DTS compressed */
public static final int ENCODING_DTS = 7;
/** Audio data format: DTS HD compressed */
public static final int ENCODING_DTS_HD = 8;
/** Audio data format: MP3 compressed */
public static final int ENCODING_MP3 = 9;
/** Audio data format: AAC LC compressed */
public static final int ENCODING_AAC_LC = 10;
/** Audio data format: AAC HE V1 compressed */
public static final int ENCODING_AAC_HE_V1 = 11;
/** Audio data format: AAC HE V2 compressed */
public static final int ENCODING_AAC_HE_V2 = 12;

/** Audio data format: compressed audio wrapped in PCM for HDMI
 * or S/PDIF passthrough.
 * IEC61937 uses a stereo stream of 16-bit samples as the wrapper.
 * So the channel mask for the track must be {@link #CHANNEL_OUT_STEREO}.
 * Data should be written to the stream in a short[] array.
 * If the data is written in a byte[] array then there may be endian problems
 * on some platforms when converting to short internally.
 */
public static final int ENCODING_IEC61937 = 13;
/** Audio data format: DOLBY TRUEHD compressed
 **/
public static final int ENCODING_DOLBY_TRUEHD = 14;
/** Audio data format: AAC ELD compressed */
public static final int ENCODING_AAC_ELD = 15;
/** Audio data format: AAC xHE compressed */
public static final int ENCODING_AAC_XHE = 16;
/** Audio data format: AC-4 sync frame transport format */
public static final int ENCODING_AC4 = 17;
/** Audio data format: E-AC-3-JOC compressed
 * E-AC-3-JOC streams can be decoded by downstream devices supporting {@link #ENCODING_E_AC3}.
 * Use {@link #ENCODING_E_AC3} as the AudioTrack encoding when the downstream device
 * supports {@link #ENCODING_E_AC3} but not {@link #ENCODING_E_AC3_JOC}.
 **/
public static final int ENCODING_E_AC3_JOC = 18;
/** Audio data format: Dolby MAT (Metadata-enhanced Audio Transmission)
 * Dolby MAT bitstreams are used to transmit Dolby TrueHD, channel-based PCM, or PCM with
 * metadata (object audio) over HDMI (e.g. Dolby Atmos content).
 **/
public static final int ENCODING_DOLBY_MAT = 19;
/** Audio data format: OPUS compressed. */
public static final int ENCODING_OPUS = 20;

So this class's job is to determine which audio formats the device can support....
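
For example (supportsEncoding and getMaxChannelCount are real AudioCapabilities methods; the encoding constants come from android.media.AudioFormat):

AudioCapabilities caps = AudioCapabilities.getCapabilities(context);
boolean ac3 = caps.supportsEncoding(AudioFormat.ENCODING_AC3);    // value 5
boolean eac3 = caps.supportsEncoding(AudioFormat.ENCODING_E_AC3); // value 6
int maxChannels = caps.getMaxChannelCount();                      // 8 on the S922X log above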


4.2.2) Creating the DefaultAudioProcessorChain

This one is refreshingly direct: it gets new-ed on the spot. But even a direct new can't go without an interface....

/**
 * The default audio processor chain, which applies a (possibly empty) chain of user-defined audio
 * processors followed by {@link SilenceSkippingAudioProcessor} and {@link SonicAudioProcessor}.
 */
public static class DefaultAudioProcessorChain implements AudioProcessorChain {}

The interface definition first:

/**
 * Provides a chain of audio processors, which are used for any user-defined processing and
 * applying playback parameters (if supported). Because applying playback parameters can skip and
 * stretch/compress audio, the sink will query the chain for information on how to transform its
 * output position to map it onto a media position, via {@link #getMediaDuration(long)} and {@link
 * #getSkippedOutputFrameCount()}.
 */
public interface AudioProcessorChain {

  /**
   * Returns the fixed chain of audio processors that will process audio. This method is called
   * once during initialization, but audio processors may change state to become active/inactive
   * during playback.
   */
  AudioProcessor[] getAudioProcessors();

  /**
   * Configures audio processors to apply the specified playback parameters immediately, returning
   * the new playback parameters, which may differ from those passed in. Only called when
   * processors have no input pending.
   *
   * @param playbackParameters The playback parameters to try to apply.
   * @return The playback parameters that were actually applied.
   */
  PlaybackParameters applyPlaybackParameters(PlaybackParameters playbackParameters);

  /**
   * Configures audio processors to apply whether to skip silences immediately, returning the new
   * value. Only called when processors have no input pending.
   *
   * @param skipSilenceEnabled Whether silences should be skipped in the audio stream.
   * @return The new value.
   */
  boolean applySkipSilenceEnabled(boolean skipSilenceEnabled);

  /**
   * Returns the media duration corresponding to the specified playout duration, taking speed
   * adjustment due to audio processing into account.
   *
   * <p>The scaling performed by this method will use the actual playback speed achieved by the
   * audio processor chain, on average, since it was last flushed. This may differ very slightly
   * from the target playback speed.
   *
   * @param playoutDuration The playout duration to scale.
   * @return The corresponding media duration, in the same units as {@code duration}.
   */
  long getMediaDuration(long playoutDuration);

  /**
   * Returns the number of output audio frames skipped since the audio processors were last
   * flushed.
   */
  long getSkippedOutputFrameCount();
}

So this is the chain used for audio processing, literally a chain of links.... also called audio processors....
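To make the contract concrete, here is a minimal sketch of my own (not from the ExoPlayer sources): a pure pass-through implementation that supplies only the caller's processors and supports neither speed changes nor silence skipping, so playout time maps 1:1 onto media time. It assumes the 2.13-era nested location DefaultAudioSink.AudioProcessorChain.

import com.google.android.exoplayer2.PlaybackParameters;
import com.google.android.exoplayer2.audio.AudioProcessor;
import com.google.android.exoplayer2.audio.DefaultAudioSink.AudioProcessorChain;

public final class PassThroughAudioProcessorChain implements AudioProcessorChain {

  private final AudioProcessor[] audioProcessors;

  public PassThroughAudioProcessorChain(AudioProcessor... audioProcessors) {
    this.audioProcessors = audioProcessors;
  }

  @Override
  public AudioProcessor[] getAudioProcessors() {
    return audioProcessors;
  }

  @Override
  public PlaybackParameters applyPlaybackParameters(PlaybackParameters playbackParameters) {
    // No speed/pitch support: report that only the default (1x) parameters were applied.
    return PlaybackParameters.DEFAULT;
  }

  @Override
  public boolean applySkipSilenceEnabled(boolean skipSilenceEnabled) {
    // Silence skipping is not supported by this chain.
    return false;
  }

  @Override
  public long getMediaDuration(long playoutDuration) {
    // No time scaling, so playout duration equals media duration.
    return playoutDuration;
  }

  @Override
  public long getSkippedOutputFrameCount() {
    return 0;
  }
}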

Now for the default new....

/**
 * Creates a new default chain of audio processors, with the user-defined {@code
 * audioProcessors} applied before silence skipping and speed adjustment processors.
 */
public DefaultAudioProcessorChain(AudioProcessor... audioProcessors) {
  this(audioProcessors, new SilenceSkippingAudioProcessor(), new SonicAudioProcessor());
}

/**
 * Creates a new default chain of audio processors, with the user-defined {@code
 * audioProcessors} applied before silence skipping and speed adjustment processors.
 */
public DefaultAudioProcessorChain(
    AudioProcessor[] audioProcessors, // empty when coming from the no-arg path above
    SilenceSkippingAudioProcessor silenceSkippingAudioProcessor,
    SonicAudioProcessor sonicAudioProcessor) {
  // The passed-in type may be more specialized than AudioProcessor[], so allocate a new array
  // rather than using Arrays.copyOf.
  this.audioProcessors = new AudioProcessor[audioProcessors.length + 2];
  System.arraycopy(
      /* src= */ audioProcessors,
      /* srcPos= */ 0,
      /* dest= */ this.audioProcessors,
      /* destPos= */ 0,
      /* length= */ audioProcessors.length);
  this.silenceSkippingAudioProcessor = silenceSkippingAudioProcessor;
  this.sonicAudioProcessor = sonicAudioProcessor;
  this.audioProcessors[audioProcessors.length] = silenceSkippingAudioProcessor;
  this.audioProcessors[audioProcessors.length + 1] = sonicAudioProcessor;
}

No need to even look: you just know those are three more interfaces/classes to chase down....

First up: AudioProcessor (an interface, so nothing is actually new-ed here).

/**
 * Interface for audio processors, which take audio data as input and transform it, potentially
 * modifying its channel count, encoding and/or sample rate.
 *
 * <p>In addition to being able to modify the format of audio, implementations may allow parameters
 * to be set that affect the output audio and whether the processor is active/inactive.
 */
public interface AudioProcessor {

  /** PCM audio format that may be handled by an audio processor. */
  final class AudioFormat {
    public static final AudioFormat NOT_SET =
        new AudioFormat(
            /* sampleRate= */ Format.NO_VALUE,
            /* channelCount= */ Format.NO_VALUE,
            /* encoding= */ Format.NO_VALUE);

    /** The sample rate in Hertz. */
    public final int sampleRate;
    /** The number of interleaved channels. */
    public final int channelCount;
    /** The type of linear PCM encoding. */
    @C.PcmEncoding public final int encoding;
    /** The number of bytes used to represent one audio frame. */
    public final int bytesPerFrame;

    ....
  }

  ....
}

That's number one, the audio processor.

Second, new SilenceSkippingAudioProcessor, which skips silence in the input stream:

/**
 * An {@link AudioProcessor} that skips silence in the input stream. Input and output are 16-bit
 * PCM.
 */
public final class SilenceSkippingAudioProcessor extends BaseAudioProcessor {}

/**
 * Base class for audio processors that keep an output buffer and an internal buffer that is reused
 * whenever input is queued. Subclasses should override {@link #onConfigure(AudioFormat)} to return
 * the output audio format for the processor if it's active.
 */
public abstract class BaseAudioProcessor implements AudioProcessor {}
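Incidentally, this base class is also the natural hook for a user-defined link in the chain. A hedged sketch of my own (HalfGainAudioProcessor is a hypothetical name; onConfigure, queueInput and replaceOutputBuffer are the real BaseAudioProcessor hooks):

import java.nio.ByteBuffer;
import com.google.android.exoplayer2.C;
import com.google.android.exoplayer2.audio.AudioProcessor.AudioFormat;
import com.google.android.exoplayer2.audio.AudioProcessor.UnhandledAudioFormatException;
import com.google.android.exoplayer2.audio.BaseAudioProcessor;

// Halves the volume of 16-bit PCM samples (attenuates by ~6 dB).
public final class HalfGainAudioProcessor extends BaseAudioProcessor {

  @Override
  protected AudioFormat onConfigure(AudioFormat inputAudioFormat)
      throws UnhandledAudioFormatException {
    // Only handle 16-bit PCM; report anything else as unhandled.
    if (inputAudioFormat.encoding != C.ENCODING_PCM_16BIT) {
      throw new UnhandledAudioFormatException(inputAudioFormat);
    }
    return inputAudioFormat; // The output format is unchanged.
  }

  @Override
  public void queueInput(ByteBuffer inputBuffer) {
    // Assumes native-order 16-bit samples, per the processor contract.
    int remaining = inputBuffer.remaining();
    ByteBuffer buffer = replaceOutputBuffer(remaining); // helper from BaseAudioProcessor
    while (inputBuffer.remaining() >= 2) {
      short sample = inputBuffer.getShort();
      buffer.putShort((short) (sample / 2));
    }
    buffer.flip();
  }
}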

The new call creates the default silence-skipping audio processor:

/** Creates a new silence skipping audio processor. */
public SilenceSkippingAudioProcessor() {
  this(
      DEFAULT_MINIMUM_SILENCE_DURATION_US, // 150,000 us = 150 ms
      DEFAULT_PADDING_SILENCE_US, // 20,000 us = 20 ms
      DEFAULT_SILENCE_THRESHOLD_LEVEL); // 1024
}

/**
 * Creates a new silence skipping audio processor.
 *
 * @param minimumSilenceDurationUs The minimum duration of audio that must be below {@code
 *     silenceThresholdLevel} to classify that part of audio as silent, in microseconds.
 * @param paddingSilenceUs The duration of silence by which to extend non-silent sections, in
 *     microseconds. The value must not exceed {@code minimumSilenceDurationUs}.
 * @param silenceThresholdLevel The absolute level below which an individual PCM sample is
 *     classified as silent.
 */
public SilenceSkippingAudioProcessor(
    long minimumSilenceDurationUs, long paddingSilenceUs, short silenceThresholdLevel) {
  Assertions.checkArgument(paddingSilenceUs <= minimumSilenceDurationUs);
  this.minimumSilenceDurationUs = minimumSilenceDurationUs;
  this.paddingSilenceUs = paddingSilenceUs;
  this.silenceThresholdLevel = silenceThresholdLevel;
  maybeSilenceBuffer = Util.EMPTY_BYTE_ARRAY;
  paddingBuffer = Util.EMPTY_BYTE_ARRAY;
}

Written as if I could actually understand it....
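If the defaults feel too eager or too timid, the three-arg constructor above is right there. A toy example with arbitrary values, e.g. requiring 300 ms below level 512 before a stretch counts as silence:

SilenceSkippingAudioProcessor silenceSkipper =
    new SilenceSkippingAudioProcessor(
        /* minimumSilenceDurationUs= */ 300_000,
        /* paddingSilenceUs= */ 20_000,
        /* silenceThresholdLevel= */ (short) 512);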

Third, new SonicAudioProcessor:

/**
 * An {@link AudioProcessor} that uses the Sonic library to modify audio speed/pitch/sample rate.
 */
public final class SonicAudioProcessor implements AudioProcessor {}

This one is nice and clear: it adjusts speed....

/** Creates a new Sonic audio processor. */
public SonicAudioProcessor() {
  speed = 1f;
  pitch = 1f;
  pendingInputAudioFormat = AudioFormat.NOT_SET;
  pendingOutputAudioFormat = AudioFormat.NOT_SET;
  inputAudioFormat = AudioFormat.NOT_SET;
  outputAudioFormat = AudioFormat.NOT_SET;
  buffer = EMPTY_BUFFER;
  shortBuffer = buffer.asShortBuffer();
  outputBuffer = EMPTY_BUFFER;
  pendingOutputSampleRate = SAMPLE_RATE_NO_CHANGE;
}
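A quick usage sketch of my own; in real playback you would normally call player.setPlaybackParameters(...) and let the chain's applyPlaybackParameters forward the change to Sonic, but the processor can also be driven directly:

SonicAudioProcessor sonic = new SonicAudioProcessor();
sonic.setSpeed(1.5f); // 1.5x playout speed
sonic.setPitch(1.0f); // leave the pitch untouched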

In other words, through the DefaultAudioProcessorChain, the audio gets the user-defined processing first, then silence skipping (with its 150 ms default minimum-silence threshold), then Sonic speed adjustment.
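Which also means the ordering is observable. A tiny sketch, reusing the hypothetical HalfGainAudioProcessor from above:

AudioProcessor[] chain =
    new DefaultAudioSink.DefaultAudioProcessorChain(new HalfGainAudioProcessor())
        .getAudioProcessors();
// chain.length == 3: chain[0] is the user processor,
// chain[1] the SilenceSkippingAudioProcessor, chain[2] the SonicAudioProcessor.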

Next up: producing the DefaultAudioSink....

4.2.3) Creating the DefaultAudioSink

Still not out of the audio rabbit hole....

As usual, it starts with the interface:

/**
 * Plays audio data. The implementation delegates to an {@link AudioTrack} and handles playback
 * position smoothing, non-blocking writes and reconfiguration.
 *
 * <p>If tunneling mode is enabled, care must be taken that audio processors do not output buffers
 * with a different duration than their input, and buffer processors must produce output
 * corresponding to their last input immediately after that input is queued. This means that, for
 * example, speed adjustment is not possible while using tunneling.
 */
public final class DefaultAudioSink implements AudioSink {}

Turns out AudioSink already came up at the very beginning of 4.2)....

So, on to new DefaultAudioSink:

/**
 * Creates a new default audio sink, optionally using float output for high resolution PCM and
 * with the specified {@code audioProcessorChain}.
 *
 * @param audioCapabilities The audio capabilities for playback on this device. May be null if the
 *     default capabilities (no encoded audio passthrough support) should be assumed.
 * @param audioProcessorChain An {@link AudioProcessorChain} which is used to apply playback
 *     parameters adjustments. The instance passed in must not be reused in other sinks.
 * @param enableFloatOutput Whether to enable 32-bit float output. Where possible, 32-bit float
 *     output will be used if the input is 32-bit float, and also if the input is high resolution
 *     (24-bit or 32-bit) integer PCM. Float output is supported from API level 21. Audio
 *     processing (for example, speed adjustment) will not be available when float output is in
 *     use.
 * @param enableAudioTrackPlaybackParams Whether to enable setting playback speed using {@link
 *     android.media.AudioTrack#setPlaybackParams(PlaybackParams)}, if supported.
 * @param enableOffload Whether to enable audio offload. If an audio format can be both played
 *     with offload and encoded audio passthrough, it will be played in offload. Audio offload is
 *     supported from API level 29. Most Android devices can only support one offload {@link
 *     android.media.AudioTrack} at a time and can invalidate it at any time. Thus an app can
 *     never be guaranteed that it will be able to play in offload. Audio processing (for example,
 *     speed adjustment) will not be available when offload is in use.
 */
public DefaultAudioSink(
    @Nullable AudioCapabilities audioCapabilities,
    AudioProcessorChain audioProcessorChain,
    boolean enableFloatOutput,
    boolean enableAudioTrackPlaybackParams,
    boolean enableOffload) {
  this.audioCapabilities = audioCapabilities;
  this.audioProcessorChain = Assertions.checkNotNull(audioProcessorChain);
  this.enableFloatOutput = Util.SDK_INT >= 21 && enableFloatOutput;
  this.enableAudioTrackPlaybackParams = Util.SDK_INT >= 23 && enableAudioTrackPlaybackParams;
  this.enableOffload = Util.SDK_INT >= 29 && enableOffload;
  releasingConditionVariable = new ConditionVariable(true);
  audioTrackPositionTracker = new AudioTrackPositionTracker(new PositionTrackerListener());
  channelMappingAudioProcessor = new ChannelMappingAudioProcessor();
  trimmingAudioProcessor = new TrimmingAudioProcessor();
  ArrayList<AudioProcessor> toIntPcmAudioProcessors = new ArrayList<>();
  Collections.addAll(
      toIntPcmAudioProcessors,
      new ResamplingAudioProcessor(),
      channelMappingAudioProcessor,
      trimmingAudioProcessor);
  Collections.addAll(toIntPcmAudioProcessors, audioProcessorChain.getAudioProcessors());
  toIntPcmAvailableAudioProcessors = toIntPcmAudioProcessors.toArray(new AudioProcessor[0]);
  toFloatPcmAvailableAudioProcessors =
      new AudioProcessor[] {new FloatResamplingAudioProcessor()};
  volume = 1f;
  audioAttributes = AudioAttributes.DEFAULT;
  audioSessionId = C.AUDIO_SESSION_ID_UNSET;
  auxEffectInfo = new AuxEffectInfo(AuxEffectInfo.NO_AUX_EFFECT_ID, 0f);
  mediaPositionParameters =
      new MediaPositionParameters(
          PlaybackParameters.DEFAULT,
          DEFAULT_SKIP_SILENCE,
          /* mediaTimeUs= */ 0,
          /* audioTrackPositionUs= */ 0);
  audioTrackPlaybackParameters = PlaybackParameters.DEFAULT;
  drainingAudioProcessorIndex = C.INDEX_UNSET;
  activeAudioProcessors = new AudioProcessor[0];
  outputBuffers = new ByteBuffer[0];
  mediaPositionParametersCheckpoints = new ArrayDeque<>();
  initializationExceptionPendingExceptionHolder =
      new PendingExceptionHolder<>(AUDIO_TRACK_RETRY_DURATION_MS);
  writeExceptionPendingExceptionHolder =
      new PendingExceptionHolder<>(AUDIO_TRACK_RETRY_DURATION_MS);
}

....what a wall of information. In short, it initializes a pile of state: the int-PCM pre-processors (resampling, channel mapping, trimming) followed by the user chain, the float path, position tracking, volume, audio attributes, session id, and the exception-retry holders....

And with that, the AudioSink has been new-ed successfully....

Which brings us back to where the audio sink gets built:

/**
 * Builds an {@link AudioSink} to which the audio renderers will output.
 *
 * @param context The {@link Context} associated with the player.
 * @param enableFloatOutput Whether to enable use of floating point audio output, if available.
 * @param enableAudioTrackPlaybackParams Whether to enable setting playback speed using {@link
 *     android.media.AudioTrack#setPlaybackParams(PlaybackParams)}, if supported.
 * @param enableOffload Whether to enable use of audio offload for supported formats, if
 *     available.
 * @return The {@link AudioSink} to which the audio renderers will output. May be {@code null} if
 *     no audio renderers are required. If {@code null} is returned then {@link
 *     #buildAudioRenderers} will not be called.
 */
@Nullable
protected AudioSink buildAudioSink(
    Context context,
    boolean enableFloatOutput,
    boolean enableAudioTrackPlaybackParams,
    boolean enableOffload) {
  return new DefaultAudioSink(
      AudioCapabilities.getCapabilities(context),
      new DefaultAudioProcessorChain(),
      enableFloatOutput,
      enableAudioTrackPlaybackParams,
      enableOffload);
}
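Since buildAudioSink is protected, one way to smuggle a custom processor chain in is to subclass DefaultRenderersFactory and override it. A sketch under that assumption (CustomRenderersFactory is my own name; HalfGainAudioProcessor is the hypothetical processor from 4.2.2):

import android.content.Context;
import androidx.annotation.Nullable;
import com.google.android.exoplayer2.DefaultRenderersFactory;
import com.google.android.exoplayer2.audio.AudioCapabilities;
import com.google.android.exoplayer2.audio.AudioSink;
import com.google.android.exoplayer2.audio.DefaultAudioSink;

public final class CustomRenderersFactory extends DefaultRenderersFactory {

  public CustomRenderersFactory(Context context) {
    super(context);
  }

  @Nullable
  @Override
  protected AudioSink buildAudioSink(
      Context context,
      boolean enableFloatOutput,
      boolean enableAudioTrackPlaybackParams,
      boolean enableOffload) {
    // Same shape as the default implementation, but with a user processor in the chain.
    return new DefaultAudioSink(
        AudioCapabilities.getCapabilities(context),
        new DefaultAudioSink.DefaultAudioProcessorChain(new HalfGainAudioProcessor()),
        enableFloatOutput,
        enableAudioTrackPlaybackParams,
        enableOffload);
  }
}

// Then hand it to the builder:
// new SimpleExoPlayer.Builder(context, new CustomRenderersFactory(context)).build();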


4.2.4) Creating the audio Renderers

With the AudioSink created, the Renderers can finally be built:

buildAudioRenderers(
    context,
    extensionRendererMode,
    mediaCodecSelector,
    enableDecoderFallback,
    audioSink,
    eventHandler,
    audioRendererEventListener,
    renderersList);

Same routine as video: build the default one first, then decide whether to use extensions....

/**
 * Builds audio renderers for use by the player.
 *
 * @param context The {@link Context} associated with the player.
 * @param extensionRendererMode The extension renderer mode.
 * @param mediaCodecSelector A decoder selector.
 * @param enableDecoderFallback Whether to enable fallback to lower-priority decoders if decoder
 *     initialization fails. This may result in using a decoder that is slower/less efficient than
 *     the primary decoder.
 * @param audioSink A sink to which the renderers will output.
 * @param eventHandler A handler to use when invoking event listeners and outputs.
 * @param eventListener An event listener.
 * @param out An array to which the built renderers should be appended.
 */
protected void buildAudioRenderers(
    Context context,
    @ExtensionRendererMode int extensionRendererMode,
    MediaCodecSelector mediaCodecSelector,
    boolean enableDecoderFallback,
    AudioSink audioSink,
    Handler eventHandler,
    AudioRendererEventListener eventListener,
    ArrayList<Renderer> out) {
  MediaCodecAudioRenderer audioRenderer =
      new MediaCodecAudioRenderer(
          context,
          mediaCodecSelector,
          enableDecoderFallback,
          eventHandler,
          eventListener,
          audioSink);
  audioRenderer.experimentalSetAsynchronousBufferQueueingEnabled(enableAsyncQueueing);
  audioRenderer.experimentalSetForceAsyncQueueingSynchronizationWorkaround(
      forceAsyncQueueingSynchronizationWorkaround);
  audioRenderer.experimentalSetSynchronizeCodecInteractionsWithQueueingEnabled(
      enableSynchronizeCodecInteractionsWithQueueing);
  out.add(audioRenderer);

  if (extensionRendererMode == EXTENSION_RENDERER_MODE_OFF) {
    return;
  }
  int extensionRendererIndex = out.size();
  if (extensionRendererMode == EXTENSION_RENDERER_MODE_PREFER) {
    extensionRendererIndex--;
  }

  try {
    // Full class names used for constructor args so the LINT rule triggers if any of them move.
    // LINT.IfChange
    Class<?> clazz = Class.forName("com.google.android.exoplayer2.ext.opus.LibopusAudioRenderer");
    Constructor<?> constructor =
        clazz.getConstructor(
            android.os.Handler.class,
            com.google.android.exoplayer2.audio.AudioRendererEventListener.class,
            com.google.android.exoplayer2.audio.AudioSink.class);
    // LINT.ThenChange(../../../../../../../proguard-rules.txt)
    Renderer renderer = (Renderer) constructor.newInstance(eventHandler, eventListener, audioSink);
    out.add(extensionRendererIndex++, renderer);
    Log.i(TAG, "Loaded LibopusAudioRenderer.");
  } catch (ClassNotFoundException e) {
    // Expected if the app was built without the extension.
  } catch (Exception e) {
    // The extension is present, but instantiation failed.
    throw new RuntimeException("Error instantiating Opus extension", e);
  }

  try {
    // Full class names used for constructor args so the LINT rule triggers if any of them move.
    // LINT.IfChange
    Class<?> clazz = Class.forName("com.google.android.exoplayer2.ext.flac.LibflacAudioRenderer");
    Constructor<?> constructor =
        clazz.getConstructor(
            android.os.Handler.class,
            com.google.android.exoplayer2.audio.AudioRendererEventListener.class,
            com.google.android.exoplayer2.audio.AudioSink.class);
    // LINT.ThenChange(../../../../../../../proguard-rules.txt)
    Renderer renderer = (Renderer) constructor.newInstance(eventHandler, eventListener, audioSink);
    out.add(extensionRendererIndex++, renderer);
    Log.i(TAG, "Loaded LibflacAudioRenderer.");
  } catch (ClassNotFoundException e) {
    // Expected if the app was built without the extension.
  } catch (Exception e) {
    // The extension is present, but instantiation failed.
    throw new RuntimeException("Error instantiating FLAC extension", e);
  }

  try {
    // Full class names used for constructor args so the LINT rule triggers if any of them move.
    // LINT.IfChange
    Class<?> clazz = Class.forName("com.google.android.exoplayer2.ext.ffmpeg.FfmpegAudioRenderer");
    Constructor<?> constructor =
        clazz.getConstructor(
            android.os.Handler.class,
            com.google.android.exoplayer2.audio.AudioRendererEventListener.class,
            com.google.android.exoplayer2.audio.AudioSink.class);
    // LINT.ThenChange(../../../../../../../proguard-rules.txt)
    Renderer renderer = (Renderer) constructor.newInstance(eventHandler, eventListener, audioSink);
    out.add(extensionRendererIndex++, renderer);
    Log.i(TAG, "Loaded FfmpegAudioRenderer.");
  } catch (ClassNotFoundException e) {
    // Expected if the app was built without the extension.
  } catch (Exception e) {
    // The extension is present, but instantiation failed.
    throw new RuntimeException("Error instantiating FFmpeg extension", e);
  }
}

I'll skip diving into the default decoder for now; it deserves its own post together with the default video decoder later, since this flow is already way too long....
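For completeness, the reflection above only fires when the mode isn't EXTENSION_RENDERER_MODE_OFF, which an app opts into like this (a sketch; the matching ext-opus/ext-flac/ext-ffmpeg modules must also be compiled in, otherwise Class.forName finds nothing):

DefaultRenderersFactory renderersFactory =
    new DefaultRenderersFactory(context)
        .setExtensionRendererMode(DefaultRenderersFactory.EXTENSION_RENDERER_MODE_PREFER);
SimpleExoPlayer player = new SimpleExoPlayer.Builder(context, renderersFactory).build();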

4.3) Creating the other Renderers

buildTextRenderers(
    context, textRendererOutput, eventHandler.getLooper(), extensionRendererMode, renderersList);
buildMetadataRenderers(
    context, metadataRendererOutput, eventHandler.getLooper(), extensionRendererMode, renderersList);
buildCameraMotionRenderers(context, extensionRendererMode, renderersList);
buildMiscellaneousRenderers(context, eventHandler, extensionRendererMode, renderersList);

These build, in turn, the text (subtitle) Renderers, the Metadata Renderers, the camera-motion Renderers, and the miscellaneous Renderers....

This codebase is a real character: one minute it's doSomeWork, the next it's miscellaneous....
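Incidentally, since RenderersFactory is a single-method interface, an app that only needs audio could hand the Builder a minimal factory of its own and skip all of the above (a sketch of mine, not the library's recommended path; the constructor variant used here takes a Context, a MediaCodecSelector, a Handler and an AudioRendererEventListener):

RenderersFactory audioOnlyFactory =
    (eventHandler, videoListener, audioListener, textOutput, metadataOutput) ->
        new Renderer[] {
          // One audio renderer, nothing else.
          new MediaCodecAudioRenderer(
              context, MediaCodecSelector.DEFAULT, eventHandler, audioListener)
        };
SimpleExoPlayer player = new SimpleExoPlayer.Builder(context, audioOnlyFactory).build();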

4.4) Subsequent use

Once created, the renderers are passed all the way down to ExoPlayerImplInternal, which then drives them through the Renderer interface methods.

private void startRenderers() throws ExoPlaybackException {
  isRebuffering = false;
  mediaClock.start();
  for (Renderer renderer : renderers) {
    // Debug log added in this series' instrumented copy, not in the upstream source.
    Logger.w(TAG, renderer, "enabled? " + isRendererEnabled(renderer));
    if (isRendererEnabled(renderer)) {
      renderer.start();
    }
  }
}

private void stopRenderers() throws ExoPlaybackException {
  mediaClock.stop();
  for (Renderer renderer : renderers) {
    if (isRendererEnabled(renderer)) {
      ensureStopped(renderer);
    }
  }
}

And with that, the renderers are up and running....
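For reference, my reading of the isRendererEnabled helper used above is that it simply checks the renderer's state; a paraphrased sketch, not a verified copy of the source:

private static boolean isRendererEnabled(Renderer renderer) {
  return renderer.getState() != Renderer.STATE_DISABLED;
}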

2021-05-17 18:36:25

--
senRsl
2021-05-17 11:59:50
