東川印記

One stroke of Dongchuan, smiling upon the wars of dragons and tigers; a drifter in the boundless, living out a muddled, sideways life.

Android APK: Packaging with the System Signature for Silent Install/Uninstall, Addendum

Thursday, December 30, 2021



1. Signing the package with signapk.jar and the system keys

The signing certificates live in:

[root@p-amlogic_20190720_aosp/build/target/product/security]#ll
total 50M
-rwxr-xr-x 1 ops  ops   656 2019-08-08 17:13 Android.mk
-rwxr-xr-x 1 ops  ops  1.2K 2019-08-08 17:13 media.pk8
-rwxr-xr-x 1 ops  ops  1.7K 2019-08-08 17:13 media.x509.pem
-rwxr-xr-x 1 ops  ops  1.2K 2019-08-08 17:13 platform.pk8
-rwxr-xr-x 1 ops  ops  1.7K 2019-08-08 17:13 platform.x509.pem
-rwxr-xr-x 1 ops  ops  3.1K 2019-08-08 17:13 README
-rwxr-xr-x 1 ops  ops  1.2K 2019-08-08 17:13 shared.pk8
-rwxr-xr-x 1 ops  ops  1.7K 2019-08-08 17:13 shared.x509.pem
-rwxr-xr-x 1 ops  ops  1.2K 2019-08-08 17:13 testkey.pk8
-rwxr-xr-x 1 ops  ops  1.7K 2019-08-08 17:13 testkey.x509.pem
-rwxr-xr-x 1 ops  ops   524 2019-08-08 17:13 verity_key
-rwxr-xr-x 1 ops  ops  1.2K 2019-08-08 17:13 verity.pk8
-rwxr-xr-x 1 ops  ops  1.5K 2019-08-08 17:13 verity.x509.pem
[root@p-amlogic_20190720_aosp/build/target/product/security]

The signing jar lives in:

[root@p-amlogic_20190720_aosp/prebuilts/sdk/tools/lib]#ll
total 24M
-rwxr-xr-x 1 ops ops  21M 2019-08-08 17:22 d8.jar
-rwxr-xr-x 1 ops ops 969K 2019-08-08 17:22 dx.jar
-rwxr-xr-x 1 ops ops  29K 2019-08-08 17:22 shrinkedAndroid.jar
-rwxr-xr-x 1 ops ops 2.2M 2019-08-08 17:22 signapk.jar
[root@p-amlogic_20190720_aosp/prebuilts/sdk/tools/lib]#

Following the approach from the original post:

This time only an unsigned package needs to be generated via gradlew:

SENRSL:a_displayer senrsl$ ../gradlew clean build assemble --info

Then package it with the system signature:

SENRSL:Downloads senrsl$ java -jar signapk.jar S905X3/platform.x509.pem S905X3/platform.pk8 salto/rainbow/a_displayer/build/outputs/apk/df/release/a_displayer-df-release-unsigned.apk sing_rainbow.apk
SENRSL:Downloads senrsl$

Note: the default signing config and naming rules in build.gradle need to be commented out first, otherwise the resulting package names are baffling....

Then it suddenly occurred to me: maybe the pk8 and x509.pem could be converted into the keystore format Gradle expects and configured into build.gradle, so no more manual signing. That would save a lot of effort.


2. Converting the system signature into an app keystore

Found a ready-made tool, a work from over a decade ago....

[root@]#wget https://raw.githubusercontent.com/getfatday/keytool-importkeypair/master/keytool-importkeypair

[root@]#./keytool-importkeypair -k rainbow -p <password> -pk8 p-amlogic_20190720_aosp/build/target/product/security/platform.pk8 -cert p-amlogic_20190720_aosp/build/target/product/security/platform.x509.pem -alias rainbow
Importing "rainbow" with SHA1 Fingerprint=27:19:6E:38:6B:87:5E:76:AD:F7:00:E7:EA:84:E4:C6:EE:E3:3D:FA
Importing keystore /tmp/keytool-importkeypair.lz67/p12 to rainbow...
Entry for alias rainbow successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
<rainbow> uses the MD5withRSA signature algorithm which is considered a security risk.
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore rainbow -destkeystore rainbow -deststoretype pkcs12".
[root@]

The fingerprints match: MD5 ends in F9, SHA1 in FA, SHA256 in B8.

But note the warning: the certificate uses the MD5withRSA signature algorithm, which is considered a security risk.
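In principle the converted keystore can then be wired straight into build.gradle, so Gradle signs with the platform key itself. A sketch only, not verified here; the rainbow file and alias come from the conversion above, and the password placeholder is whatever was passed to -p:

android {
    signingConfigs {
        platform {
            storeFile file("rainbow") // keystore produced by keytool-importkeypair above
            storePassword "<password>" // the -p value used above
            keyAlias "rainbow"
            keyPassword "<password>"
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.platform
        }
    }
}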


3. Inspecting an APK's signature

A third-party one, from the copy-and-paste company:

SENRSL:Downloads senrsl$ keytool -printcert -jarfile /Users/senrsl/Downloads/120_df7b093c4f2f363b7033a6997c7f0614.apk
Signer #1:

Signature:

Owner: CN=WilsonWu, OU=3G Department, O=Tencent, L=Guangzhou, ST=GD, C=CN
Issuer: CN=WilsonWu, OU=3G Department, O=Tencent, L=Guangzhou, ST=GD, C=CN
Serial number: 4c9215d2
Valid from: Thu Sep 16 21:04:18 CST 2010 until: Mon Feb 01 21:04:18 CST 2038
Certificate fingerprints:
     MD5:  01:1A:40:26:6C:8C:75:D1:81:DD:D8:E4:DD:C5:00:75
     SHA1: B2:E0:B6:4D:75:36:E4:AF:83:63:B4:02:2A:9F:74:72:D5:80:FA:0B
     SHA256: 66:89:6A:E0:AA:E4:8B:9E:96:3A:3E:03:4F:A2:CA:81:3C:A2:F6:61:F5:36:19:A9:22:63:A8:A5:C2:E3:F3:88
Signature algorithm name: SHA1withRSA
Subject Public Key Algorithm: 1024-bit RSA key
Version: 3

SENRSL:Downloads senrsl$

Generated by signapk.jar:

SENRSL:Downloads senrsl$ keytool -printcert -jarfile /Users/senrsl/Downloads/sing_rainbow.apk
Signer #1:

Signature:

Owner: EMAILADDRESS=android@android.com, CN=Android, OU=Android, O=Android, L=Mountain View, ST=California, C=US
Issuer: EMAILADDRESS=android@android.com, CN=Android, OU=Android, O=Android, L=Mountain View, ST=California, C=US
Serial number: b3998086d056cffa
Valid from: Wed Apr 16 06:40:50 CST 2008 until: Sun Sep 02 06:40:50 CST 2035
Certificate fingerprints:
     MD5:  8D:DB:34:2F:2D:A5:40:84:02:D7:56:8A:F2:1E:29:F9
     SHA1: 27:19:6E:38:6B:87:5E:76:AD:F7:00:E7:EA:84:E4:C6:EE:E3:3D:FA
     SHA256: C8:A2:E9:BC:CF:59:7C:2F:B6:DC:66:BE:E2:93:FC:13:F2:FC:47:EC:77:BC:6B:2B:0D:52:C1:1F:51:19:2A:B8
Signature algorithm name: MD5withRSA (weak)
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

扩展:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 4F E4 A0 B3 DD 9C BA 29   F7 1D 72 87 C4 E7 C3 8F  O......)..r.....
0010: 20 86 C2 99                                         ...
]
[EMAILADDRESS=android@android.com, CN=Android, OU=Android, O=Android, L=Mountain View, ST=California, C=US]
SerialNumber: [    b3998086 d056cffa]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 4F E4 A0 B3 DD 9C BA 29   F7 1D 72 87 C4 E7 C3 8F  O......)..r.....
0010: 20 86 C2 99                                         ...
]
]



Warning:
The certificate uses the MD5withRSA signature algorithm which is considered a security risk.
SENRSL:Downloads senrsl$

Built with the converted system keystore:

SENRSL:Downloads senrsl$ keytool -printcert -jarfile /Users/senrsl/android/Project/mtime/salto/rainbow/a_displayer/build/outputs/apk/df/release/rainbow_df_v1.0.0c1_release.apk
Signer #1:

Signature:

Owner: EMAILADDRESS=android@android.com, CN=Android, OU=Android, O=Android, L=Mountain View, ST=California, C=US
Issuer: EMAILADDRESS=android@android.com, CN=Android, OU=Android, O=Android, L=Mountain View, ST=California, C=US
Serial number: b3998086d056cffa
Valid from: Wed Apr 16 06:40:50 CST 2008 until: Sun Sep 02 06:40:50 CST 2035
Certificate fingerprints:
     MD5:  8D:DB:34:2F:2D:A5:40:84:02:D7:56:8A:F2:1E:29:F9
     SHA1: 27:19:6E:38:6B:87:5E:76:AD:F7:00:E7:EA:84:E4:C6:EE:E3:3D:FA
     SHA256: C8:A2:E9:BC:CF:59:7C:2F:B6:DC:66:BE:E2:93:FC:13:F2:FC:47:EC:77:BC:6B:2B:0D:52:C1:1F:51:19:2A:B8
Signature algorithm name: MD5withRSA (weak)
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

扩展:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 4F E4 A0 B3 DD 9C BA 29   F7 1D 72 87 C4 E7 C3 8F  O......)..r.....
0010: 20 86 C2 99                                         ...
]
[EMAILADDRESS=android@android.com, CN=Android, OU=Android, O=Android, L=Mountain View, ST=California, C=US]
SerialNumber: [    b3998086 d056cffa]
]

#2: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
  CA:true
  PathLen:2147483647
]

#3: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 4F E4 A0 B3 DD 9C BA 29   F7 1D 72 87 C4 E7 C3 8F  O......)..r.....
0010: 20 86 C2 99                                         ...
]
]



Warning:
The certificate uses the MD5withRSA signature algorithm which is considered a security risk.
SENRSL:Downloads senrsl$


4. System security

Read the README under p-amlogic_20190720_aosp/build/target/product/security; it points to https://source.android.com/devices/tech/ota/sign_builds.html

It explains how to generate your own key set....

You're supposed to generate your own signing keys.

Then I looked back at a few earlier factory builds: all of them use the default keys....
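The gist of that doc, roughly (a sketch following the linked page; the subject string is an example, and make_key lives under development/tools in the source tree):

subject='/C=CN/ST=GD/L=Guangzhou/O=Example/OU=Example/CN=Example/emailAddress=android@example.com'
for x in releasekey platform shared media; do \
    ./development/tools/make_key $x "$subject"; \
done

Each run should emit a $x.pk8 / $x.x509.pem pair to drop into build/target/product/security.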


5. Converting a keystore into pk8 / x509.pem


This jumped over to os_build.md.
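For the record, the conversion this section was meant to cover can be sketched with keytool plus openssl (file names here are placeholders):

keytool -importkeystore -srckeystore rainbow -destkeystore rainbow.p12 -deststoretype PKCS12 -srcalias rainbow
openssl pkcs12 -in rainbow.p12 -nodes -nocerts -out key.pem
openssl pkcs8 -topk8 -inform PEM -in key.pem -outform DER -out platform.pk8 -nocrypt
openssl pkcs12 -in rainbow.p12 -nokeys -clcerts -out platform.x509.pem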

2021-12-30 11:47:25



--
senRsl
2021-08-26 12:02:08

Learning Flutter: Copy, Paste, Run



Flutter's design reeks everywhere of a long-winded old lady....


1. Muting the juvenile lints

Really no comparison with a mature language....

The lint configuration file is analysis_options.yaml, in the project root.

The default is:

# This file configures the analyzer, which statically analyzes Dart code to
# check for errors, warnings, and lints.
include: package:flutter_lints/flutter.yaml

linter:
  
  rules:
    # avoid_print: false  # Uncomment to disable the `avoid_print` rule
    # prefer_single_quotes: true  # Uncomment to enable the `prefer_single_quotes` rule


1) Use key in widget constructors. (Documentation)

Creating a new class without the Key-taking constructor triggers this:

class Test10BannerDemo extends StatefulWidget {
  //const Test10BannerDemo({Key? key}) : super(key: key);
}

Add to rules:

use_key_in_widget_constructors: false

and the class stops warning. But call sites that previously did

const Test10BannerDemo()

now complain:

The constructor being called isn't a const constructor. (Documentation)  Try removing 'const' from the constructor invocation.

In other words, once the constructor loses its Key parameter, callers can no longer construct with const....

That'll do; the default lint nagging everywhere to add const is revolting....


2) Prefer const with constant constructors. (Documentation)

const demanded everywhere, because of this lint....

Add to rules:

prefer_const_constructors : false

and the const nagging stops. Much easier on the eyes....


3) Prefer const literals as parameters of constructors on @immutable classes. (Documentation)

Apparently defining a list without const triggers this one.

Add to rules:

prefer_const_literals_to_create_immutables: false

Save, done.

4) Summary

With these three in place, the code finally looks bearable....

include: package:flutter_lints/flutter.yaml

linter:
  
  rules:
    # avoid_print: false  # Uncomment to disable the `avoid_print` rule
    # prefer_single_quotes: true  # Uncomment to enable the `prefer_single_quotes` rule
    use_key_in_widget_constructors: false # stop asking for a Key-taking constructor
    prefer_const_constructors: false # stop nagging to add const
    prefer_const_literals_to_create_immutables: false # stop nagging to add const to literals



2. The argument type 'double?' can't be assigned to the parameter type 'double'. (Documentation)

The culprit:

selectedFontSize: textTheme.caption.fontSize,

caption may be null.

Change it to:

selectedFontSize: textTheme.caption?.fontSize ?? 10


3. Loading local images

1) Create the directories

SENRSL:hello senrsl$ tree assets/
assets/
├── images
│   └── bg2021.png
└── music

2 directories, 1 file
SENRSL:hello senrsl$

2) Edit pubspec.yaml

flutter:

  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true

  # To add assets to your application, add an assets section, like this:
  assets:
    #   - images/a_dot_burr.jpeg
    #   - images/a_dot_ham.jpeg
    - assets/images/  # load every image under images/ (subdirectories excluded)

Uncomment and add the directory....

assets is a list: without the leading '-' you get the error Expected "assets" to be a list, but got assets/images/ (String).

3) Use it

child: Image.asset('assets/images/bg2021.png'),

4) Multiple platforms

In theory the same image should come in very different resolutions on an Android phone, an iPhone, or a Mac.

But flutter only offers pixel-ratio options like 1.0x and 2.0x....

Following the official docs, I added 2.0x and 3.0x directories; 1.0x is the root directory, i.e. the baseline:

SENRSL:hello senrsl$ tree assets/
assets/
├── images
│   ├── 2.0x
│   │   └── bg2021.png
│   ├── 3.0x
│   │   └── bg2021.png
│   └── bg2021.png
└── music

4 directories, 3 files
SENRSL:hello senrsl$

A quick test: Chrome on a MacBook Pro uses the 2.0x image, while a Pixel 4 actually uses the 3.0x one....

Also, pubspec.yaml was never touched to add the 2.0x and 3.0x directories; they appear to be detected automatically....


4. Scaffold.of() called with a context that does not contain a Scaffold

The code calls Scaffold.of(context).showBottomSheet<void>(), but there is no Scaffold widget in the tree.

Wrap a Scaffold around it....

class Test10BottomSheetWidget extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Test10PersistentBottomSheetDemo(),
    );
  }
}


5. Error: The argument type 'Route<String> Function(BuildContext, Object)' can't be assigned to the parameter type 'Route<Object?> Function(BuildContext, Object?)' because 'Object?' is nullable and 'Object' isn't

After some digging: newer versions enable sound null safety by default, while the SDK and the old sample code don't. Most of it is fixable; a small part gets genuinely hard.

For the dart files that are hard to fix, pin them to an older language version:

// @dart=2.9
// must be the first line of the dart file; can be added to any dart file

// Copyright 2019 The Flutter team. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

import 'package:flutter/material.dart';

Once pinned, the newer syntax is unavailable: ?, !, required, late.

Now the compiler stops complaining, but it fails at runtime:

Error: A library can't opt out of null safety by default, when using sound null safety.

So the main-entry dart file also needs

// @dart=2.9

With that, the project becomes a mixed-version program, i.e. unsound null safety....


6. Flutter multi-platform support

A freshly created project supports Android, iOS, and Web.

How to make it also target Windows, Linux, and macOS?

1) Enable desktop support

SENRSL:hello senrsl$ flutter config --enable-windows-desktop
Setting "enable-windows-desktop" value to "true".

You may need to restart any open editors for them to read new settings.
SENRSL:hello senrsl$ flutter config --enable-macos-desktop
Setting "enable-macos-desktop" value to "true".

You may need to restart any open editors for them to read new settings.
SENRSL:hello senrsl$ flutter config --enable-linux-desktop
Setting "enable-linux-desktop" value to "true".

You may need to restart any open editors for them to read new settings.


2) Verify

SENRSL:hello senrsl$ flutter devices
3 connected devices:

Pixel 4 (mobile) • 99111FFAZ0042Q • android-arm64  • Android 12 (API 31)
macOS (desktop)  • macos          • darwin-x64     • macOS 12.0.1 21A559 darwin-x64
Chrome (web)     • chrome         • web-javascript • Google Chrome 96.0.4664.110
SENRSL:hello senrsl$

macos now shows up in the device list....

3) Running on a desktop device

Opening an old project now lists macos among the available devices, but running it fails with

Exception: No macOS desktop project configured.

According to the docs, a project created with flutter create from now on ships with windows/linux/macos runner directories.

4) Adding the desktop runner config to an old project

SENRSL:hello senrsl$ flutter create --platforms=windows,macos,linux .
Recreating project ....
  windows/runner/flutter_window.cpp (created)
  windows/runner/utils.h (created)
  windows/runner/utils.cpp (created)
  windows/runner/runner.exe.manifest (created)
  windows/runner/CMakeLists.txt (created)
  windows/runner/win32_window.h (created)
  windows/runner/Runner.rc (created)
  windows/runner/win32_window.cpp (created)
  windows/runner/resources/app_icon.ico (created)
  windows/runner/main.cpp (created)
  windows/runner/resource.h (created)
  windows/runner/flutter_window.h (created)
  windows/flutter/CMakeLists.txt (created)
  windows/.gitignore (created)
  windows/CMakeLists.txt (created)
  macos/Runner.xcworkspace/contents.xcworkspacedata (created)
  macos/Runner.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_16.png (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_1024.png (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_256.png (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_64.png (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_512.png (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_128.png (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json (created)
  macos/Runner/Assets.xcassets/AppIcon.appiconset/app_icon_32.png (created)
  macos/Runner/DebugProfile.entitlements (created)
  macos/Runner/Base.lproj/MainMenu.xib (created)
  macos/Runner/MainFlutterWindow.swift (created)
  macos/Runner/Configs/Debug.xcconfig (created)
  macos/Runner/Configs/Release.xcconfig (created)
  macos/Runner/Configs/Warnings.xcconfig (created)
  macos/Runner/Configs/AppInfo.xcconfig (created)
  macos/Runner/AppDelegate.swift (created)
  macos/Runner/Info.plist (created)
  macos/Runner/Release.entitlements (created)
  macos/Runner.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist (created)
  macos/Runner.xcodeproj/project.pbxproj (created)
  macos/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme (created)
  macos/Flutter/Flutter-Debug.xcconfig (created)
  macos/Flutter/Flutter-Release.xcconfig (created)
  macos/.gitignore (created)
  linux/main.cc (created)
  linux/my_application.h (created)
  linux/my_application.cc (created)
  linux/flutter/CMakeLists.txt (created)
  linux/.gitignore (created)
  linux/CMakeLists.txt (created)
Running "flutter pub get" in hello...                               8.8s
Wrote 47 files.

All done!
In order to run your application, type:

  $ cd .
  $ flutter run

Your application code is in ./lib/main.dart.

SENRSL:hello senrsl$

Now the project root has windows, linux, and macos directories, siblings of ios and android.

Reopen the IDE, run the macos target: success.....

(figure 1 goes here)



This multi-platform support is actually decent.....

7. Network requests

Looked around; there are just three options: Dart's built-in HttpClient, the official http package, and dio from the flutterchina club....

1) HttpClient

Start with the familiar one.

2) http lib


8. RenderBox was not laid out: RenderPointerListener

Before: Row[Text(), TextField(),]

The error says the TextField widget needs a width constraint....

So it has to be given a resolvable width, e.g. letting the TextField fill the remaining space.

After the change it becomes

Row[Text(), Expanded(TextField())]
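(A minimal sketch of that layout; the label text is made up:)

Row(
  children: [
    const Text("label"),
    Expanded(child: TextField()), // bounded: fills whatever width remains
  ],
)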


9. Avoid `print` calls in production code. (Documentation)

Replace print() with debugPrint();


10. Passing values between widgets

Say there are two sibling widgets, a RadioButton and a button. How does the button's tap handler read the RadioButton's value?

Searched a whole morning; the answer is, of all things, callbacks....

The usual callback types are VoidCallback, Function(x), ValueChanged<T>, ValueSetter<T>..... A minimal sketch follows.
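(Sketch with made-up names: the child takes a ValueChanged<int> from the parent and pushes its value up through it:)

class RadioGroup extends StatelessWidget {
  final int groupValue;
  final ValueChanged<int> onChanged; // the parent's callback
  const RadioGroup({Key? key, required this.groupValue, required this.onChanged})
      : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Radio<int>(
      value: 1,
      groupValue: groupValue,
      onChanged: (v) => onChanged(v ?? 0), // hand the value up to the parent
    );
  }
}

// The parent keeps the latest value and reads it when the button is tapped:
//   int current = 0;
//   RadioGroup(groupValue: current, onChanged: (v) => setState(() => current = v));
//   ElevatedButton(onPressed: () => debugPrint('$current'), child: const Text("OK"));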

11. Unsupported operation: Platform._version

The web target doesn't support the dart:io library github.com/flutter/flutter/issues/39998

mobile and desktop are fine....

12. Communicating with native code

(figure 2 goes here)



The three channel types:

  1. MethodChannel: two-way communication; Flutter and the native side can call each other, and calls return results. This is the most common mechanism; native-side invocations must run on the main thread. (A minimal sketch follows this list.)
  2. BasicMessageChannel: encodes and decodes messages with a specified codec; two-way, callable from either side.
  3. EventChannel: for event streams; the native side pushes data to Flutter, typically for state monitoring such as network changes or sensor data.
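(Flutter-side MethodChannel sketch; the channel and method names are made up and must match the native side:)

import 'package:flutter/services.dart';

const MethodChannel _channel = MethodChannel('com.example/demo');

// Flutter -> native; the native handler runs on the platform's main thread.
Future<String?> platformVersion() {
  return _channel.invokeMethod<String>('getPlatformVersion');
}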


Long-winded....

--
senRsl
2021-12-16 10:34:43

Flutter Internationalization

Wednesday, December 15, 2021



The Dart language is far too verbose: endless nesting and brackets....

1. Add the dependency configuration

pubspec.yaml

dependencies:
  flutter:
    sdk: flutter
  #i18n
  flutter_localizations:
    sdk: flutter

   ...
  

  # new dependencies from here down
  english_words: ^4.0.0 # English words library
  #  animations: ^2.0.0 # animations
  intl: ^0.17.0 # locale configuration / internationalization

...

flutter:
  ...

  # local internationalization
  generate: true

Note: this file is whitespace-sensitive, and auto-formatting can't fix it for you....


2. Create l10n.yaml in the project root

Next to pubspec.yaml:

arb-dir: lib/l10n
template-arb-file: intl_en.arb
output-localization-file: test_hello_localizations.dart
output-class: TestHelloLocalizations
preferred-supported-locales:
  - en
use-deferred-loading: true

It specifies the generated file name and class name.


3. Add the localization template

Add the English template lib/l10n/intl_en.arb, JSON format:

{
  "replyDraftsLabel": "Drafts",
  "@replyDraftsLabel": {
    "description": "Text label for Drafts destination."
  }
}


4. Add translations

For example, Chinese:

lib/l10n/intl_zh.arb

{
  "replyDraftsLabel": "草稿"
}


5. Generate

After a build, three files appear under .dart_tool/flutter_gen/gen_l10n/ in the project root:

SENRSL:hello senrsl$ ls .dart_tool/flutter_gen/gen_l10n/
test_hello_localizations.dart        test_hello_localizations_en.dart    test_hello_localizations_zh.dart
SENRSL:hello senrsl$

They correspond to:

abstract class TestHelloLocalizations {}
class TestHelloLocalizationsEn extends TestHelloLocalizations {}
class TestHelloLocalizationsZh extends TestHelloLocalizations {}

A dynamic-proxy-style pattern: it returns the requested content for the selected language.


6. Configure the app entry point to use localization

import 'package:flutter_gen/gen_l10n/test_hello_localizations.dart';

void main() {
  runApp(const MaterialApp(
    title: "顶级title",
    localizationsDelegates: [
      TestHelloLocalizations.delegate,
      GlobalMaterialLocalizations.delegate,
      GlobalWidgetsLocalizations.delegate,
      GlobalCupertinoLocalizations.delegate,
    ],
    // localizationsDelegates: TestHelloLocalizations.localizationsDelegates,
    supportedLocales: TestHelloLocalizations.supportedLocales,
    // supportedLocales: [
    //   Locale("en",""),
    //   Locale("cn",""),
    // ],
    home: TestRun10Widget(),
  ));
}

The Global* delegates probably aren't strictly needed; the configuration options are documented in detail inside TestHelloLocalizations....


7. Use it

import 'package:flutter_gen/gen_l10n/test_hello_localizations.dart';

label: TestHelloLocalizations.of(context)?.bottomNavigationCommentsTab,


8. Summary

flutter must have been designed by a pack of terminally bored old ladies; the logic is long-winded beyond belief.


--
senRsl
2021-12-15 17:43:54

Android APK: Packaging with the System Signature for Silent Install/Uninstall

Saturday, August 21, 2021



Needed: silently install, uninstall, and upgrade other apps....

1. Adjusting the controlling app

Add

android:sharedUserId="android.uid.system"

Find the signing files used by the system ROM:

platform.pk8

platform.x509.pem

Without the system signature, installation fails:

adb: failed to install signed.apk: Failure [INSTALL_FAILED_UPDATE_INCOMPATIBLE: Package xxxxapps.rainbow signatures do not match previously installed version; ignoring!]

With the system signature:

SENRSL:Downloads senrsl$ java -jar signapk.jar 9.0系统签名/platform.x509.pem 9.0系统签名/platform.pk8 a_displayer/build/outputs/apk/df/debug/a_displayer-df-debug.apk sing-rainbow.apk
SENRSL:Downloads senrsl$ adb install -t sing-rainbow.apk
Success
SENRSL:Downloads senrsl$

2. Installing third-party apps

avc:  denied  { read } for  scontext=u:r:system_server:s0 tcontext=u:object_r:sdcardfs:s0 tclass=file permissive=1

Since Android P, installs can't come from arbitrary paths anymore; only APKs under certain fixed directories install directly:

K10Pro:/ # pm install -r /data/local/tmp/test.apk                                                                                                     
Success
K10Pro:/ #

或者

cat $apkfile.apk | pm install -S $apkfile.length
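(Concretely, assuming the APK sits at /data/local/tmp/test.apk; toybox stat gives the byte count:)

cat /data/local/tmp/test.apk | pm install -S $(stat -c %s /data/local/tmp/test.apk)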

3. Android 9.0

Found that on Android 9.0 (P), pm can't be driven this way anymore....

An open-source system drifting toward closed....

The only workable approach found goes through reflection....

4. Silent install on Android 9

public static boolean install(Context context, String apkPath) {
    if (android.os.Build.VERSION.SDK_INT < android.os.Build.VERSION_CODES.LOLLIPOP) return false;
    PackageInstaller packageInstaller = context.getPackageManager().getPackageInstaller();
    PackageInstaller.SessionParams params =
            new PackageInstaller.SessionParams(PackageInstaller.SessionParams.MODE_FULL_INSTALL);
    String pkgName = getApkPackageName(context, apkPath);
    if (pkgName == null) {
        return false;
    }
    params.setAppPackageName(pkgName);
    try {
        // setAllowDowngrade is hidden API, hence the reflection
        Method allowDowngrade =
                PackageInstaller.SessionParams.class.getMethod("setAllowDowngrade", boolean.class);
        allowDowngrade.setAccessible(true);
        allowDowngrade.invoke(params, true);
    } catch (Exception e) {
        e.printStackTrace();
    }
    OutputStream os = null;
    InputStream is = null;
    try {
        int sessionId = packageInstaller.createSession(params);
        PackageInstaller.Session session = packageInstaller.openSession(sessionId);
        os = session.openWrite(pkgName, 0, -1);
        is = new FileInputStream(apkPath);
        byte[] buffer = new byte[1024];
        int len;
        while ((len = is.read(buffer)) != -1) {
            os.write(buffer, 0, len);
        }
        session.fsync(os);
        os.close();
        os = null;
        is.close();
        is = null;
        Intent intent = new Intent(getContext(), DistributeStatusReceiver.class); // new Intent(Intent.ACTION_MAIN)
        session.commit(PendingIntent.getBroadcast(context, sessionId, intent, 0).getIntentSender());
    } catch (Exception e) {
        Logger.w("" + e.getMessage());
        return false;
    } finally {
        if (os != null) {
            try { os.close(); } catch (IOException e) { e.printStackTrace(); }
        }
        if (is != null) {
            try { is.close(); } catch (IOException e) { e.printStackTrace(); }
        }
    }
    return true;
}

/** Get the APK's package name. */
public static String getApkPackageName(Context context, String apkPath) {
    PackageManager pm = context.getPackageManager();
    PackageInfo info = pm.getPackageArchiveInfo(apkPath, 0);
    if (info != null) {
        return info.packageName;
    } else {
        return null;
    }
}


5. Silent uninstall on Android 9

/** Uninstall an app by package name. */
public static void uninstall(String packageName) {
    if (android.os.Build.VERSION.SDK_INT < android.os.Build.VERSION_CODES.LOLLIPOP) return;
    try {
        Intent broadcastIntent = new Intent(getContext(), DistributeStatusReceiver.class);
        PendingIntent pendingIntent = PendingIntent.getBroadcast(
                getContext(), 1, broadcastIntent, PendingIntent.FLAG_UPDATE_CURRENT);
        PackageInstaller packageInstaller = getContext().getPackageManager().getPackageInstaller();
        packageInstaller.uninstall(packageName, pendingIntent.getIntentSender());
    } catch (Exception e) {
        e.printStackTrace();
    }
}


6. Launching a third-party app

public static void start(@NonNull String packageName, @NonNull String className)
        throws ActivityNotFoundException {
    Intent intent = new Intent();
    intent.setClassName(packageName, className);
    intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    getContext().startActivity(intent);
}

Feels like Google is about to run Android into the ground; true to form, after all....

--
senRsl
2021-08-21 15:13:36

From jcenter to mavenCentral

Thursday, August 12, 2021



The news arrived back in February; it took until August to decide to deal with it....

1. Repository permissions

Register an account at https://issues.sonatype.org/

Turns out I uploaded a library back in 2018, then shelved it for some reason....

Then go to https://issues.sonatype.org/projects and create an issue.

Community Support - Open Source Project Repository Hosting (OSSRH) looks like the right project:

This Jira project is for issues related to publishing to Maven Central. Selecting the correct Issue Type will help us process your ticket faster:

  • New Project: For registering a new Group ID on OSSRH for publishing to Central
  • Publishing Support: For general publishing support on existing Group IDs such as permission changes or publishing errors.

So: New Project it is.

(figure 1 goes here)


The groupId must use a domain you own; a subdomain works too central.sonatype.org/publish/requirements/coordinates

Project URL: just use the GitHub repo address.

scm url: the same address plus .git.

Everything else left default.

Create.


Sure enough, barely two minutes later it asked for domain verification:

Central OSSRH added a comment - 7 minutes ago
Do you own the domain dcjz.ml? If so, please verify ownership via one of the following methods:

Add a TXT record to your DNS referencing this JIRA ticket: OSSRH-72029 (Fastest) https://central.sonatype.org/faq/how-to-set-txt-record/
Setup a redirect to your https://github.com/senrsl page (if it does not already exist)

So, over in the domain's DNS settings, add a record:

OSSRH-72029   TXT  3600  https://issues.sonatype.org/browse/OSSRH-72029

Then reply on the issue: I own this domain dcjz.ml and have added TXT record to DNS resolution.

Then go query the TXT record:

SENRSL:Downloads senrsl$ nslookup dcjz.ml
Server:        8.8.8.8
Address:    8.8.8.8#53

Non-authoritative answer:
Name:    dcjz.ml
Address: 34.92.166.185

SENRSL:Downloads senrsl$ nslookup -q=txt dcjz.ml
;; Got SERVFAIL reply from 8.8.8.8, trying next server
Server:        114.114.114.114
Address:    114.114.114.114#53

** server can't find dcjz.ml: SERVFAIL

SENRSL:Downloads senrsl$

Had it simply not propagated yet?

Fiddled for ages and still couldn't query the TXT record, but back on the issue a refresh showed it had already passed....

ml.dcjz has been prepared, now user(s) dcjz can:
Publish snapshot and release artifacts to s01.oss.sonatype.org
Have a look at this section of our official guide for deployment instructions:
https://central.sonatype.org/publish/publish-guide/#deployment

Depending on your build configuration, your first component(s) might be released automatically after a successful deployment.
If that happens, you will see a comment on this ticket confirming that your artifact has synced to Maven Central.
If you do not see this comment within an hour or two, you can follow the steps in this section of our guide:
https://central.sonatype.org/publish/release/

######

As part of our efforts to improve the security and quality posture of the open source supply chain,
we plan to enable additional scanning of dependencies for security alerts soon. Since you're already
hosting your source code in Github, you can get these insights today by enabling Sonatype Lift.
Sonatype Lift is free forever on public repositories! Lift tells you about open source vulnerabilities
during code review, and goes beyond open source to scan your code for both code quality and security issues,
providing feedback right in your pull requests.
More information can be found at https://links.sonatype.com/products/lift/github-integration

######

Well....

Update: the record was right all along; the lookup is case-sensitive....

SENRSL:Downloads senrsl$ nslookup -q=txt ossrh-72029.dcjz.ml
Server:        8.8.8.8
Address:    8.8.8.8#53

Non-authoritative answer:
ossrh-72029.dcjz.ml    text = "https://issues.sonatype.org/browse/OSSRH-72029"

Authoritative answers can be found from:

SENRSL:Downloads senrsl$

A glance at the mavenCentral dashboard afterwards: since jcenter folded, mavenCentral's workload has visibly ballooned....

Artifacts get published to https://s01.oss.sonatype.org/ or https://oss.sonatype.org/


2. GPG encryption

The existing gpg key from the jcenter days still works.

SENRSL:Downloads senrsl$ gpg -k
/Users/senrsl/.gnupg/pubring.kbx
--------------------------------
pub   rsa2048 2018-09-12 [SC]
      37。。。。。。。。。。。。。FD
uid           [ultimate] senRsl DC (thrid4all) <d。。。。@yeah.net>
sub   rsa2048 2018-09-12 [E]

SENRSL:Downloads senrsl$

SENRSL:Downloads senrsl$ gpg --export-secret-keys  -o secring.gpg

生成了 secring.gpg文件,后面要用。

想了半天,终于想起了密码,isY218

然后把公钥发送到keyserver

SENRSL:Downloads senrsl$ gpg --keyserver hkp://keyserver.ubuntu.com:11371 --send-keys 37。。。。。FD
gpg: sending key 2AC2774CF08C95FD to hkp://keyserver.ubuntu.com:11371
SENRSL:Downloads senrsl$ gpg --keyserver keyserver.ubuntu.com --send-keys 37。。。。。FD
gpg: sending key 2AC2774CF08C95FD to hkp://keyserver.ubuntu.com
SENRSL:Downloads senrsl$

Either one will do.

But confirmation of success never came....

SENRSL:Downloads senrsl$ gpg --search-keys 37。。。。FD
gpg: error searching keyserver: No name
gpg: keyserver search failed: No name
SENRSL:Downloads senrsl$

This, though, seems to work:

SENRSL:Downloads senrsl$ gpg --keyserver hkp://keyserver.ubuntu.com:11371 --recv-keys 37。。。。。。FD
gpg: key 。。。。。FD: "senRsl DC (thrid4all) <d。。。。@yeah.net>" not changed
gpg: Total number processed: 1
gpg:               not changed: 1
SENRSL:Downloads senrsl$

3. Publishing

Following the old jcenter layout, things roughly split into a few directories; a good chance to consolidate.

Root project: core, bridge, shell

Root directories: bridge/adapter, common/net, libs/*, plugin/*, aider/*, plus the top-level common carried over from Java.

So: keep the root; merge common with net and java; keep libs and plugin.

The conventional way is to group by product or tech stack, which is clearer,

but I'll save myself the trouble....

Then I went and filed three tickets in a row, and got mercilessly shot down; apparently not needed?

Central OSSRH added a comment - 08/10/21 03:53 AM
Only one JIRA Issue per top-level groupId is necessary. You should already have all the necessary permissions to deploy any new artifacts under this groupId or any sub-groups, thanks to OSSRH-72029 .

But how is that sentence to be understood?

Damn. Dug around for ages, then came back to the sentence: any sub-groups. So that's it....

Then on to writing the publish script. Things to watch:

The signing info has to be written into gradle.properties, otherwise it won't be read.

Also, the version in repository kept resolving to unspecified, which pushed a snapshot build at the release channel and kept returning 400: Received status code 400 from server: Bad Request

To be continued tomorrow.

Only six interviews on today's schedule.....

2021-08-11 12:10:29

Yesterday's snapshot went up; where did it go?

Looked around: not in the staging repository, but it is there in the public staging area....

It's under snapshots too....

No close-and-validate needed? Validated directly?

Then dropped -SNAPSHOT from the version and uploaded a release: it showed up in the staging repository. Clicked close, waiting to publish the final version.

You can also browse s01.oss.sonatype.org/content/repositories/

After a while the status flipped to closed, i.e. validation passed; then click release for the final publish.

Once released, it vanishes from the staging repository. So staging here is like git's staging: a pre-publish holding area that empties itself after release.

No wonder snapshots never show up there.

And indeed both releases and snapshots now contain it....


4. Consuming it

At this point, pulling the uploaded package failed.

The three repositories' request URLs are plain to see:

    - https://dl.google.com/dl/android/maven2/ml/dcjz/core/1.0.10/core-1.0.10.pom
    - https://dl.google.com/dl/android/maven2/ml/dcjz/core/1.0.10/core-1.0.10.jar
    - https://jcenter.bintray.com/ml/dcjz/core/1.0.10/core-1.0.10.pom
    - https://jcenter.bintray.com/ml/dcjz/core/1.0.10/core-1.0.10.jar
    - https://repo.maven.apache.org/maven2/ml/dcjz/core/1.0.10/core-1.0.10.pom
    - https://repo.maven.apache.org/maven2/ml/dcjz/core/1.0.10/core-1.0.10.jar

Corresponding to:

        google()
        jcenter()
        mavenCentral()

About ten minutes later, it resolved!

5. Aftermath

Then I noticed the uploaded artifacts were incomplete, and when publishing the same version again, release was impossible....

Artifact updating: Repository ='releases:Releases' does not allow updating artifact='/ml/dcjz/core/1.0.10/core-1.0.10.pom'

Looks like it won't replace automatically?

And there's nowhere to delete a published artifact....

Looked it up: published artifacts can't be deleted, and can't be overwritten....

Before release, though, they can still be dropped....


6. Deleting GitHub commit history

The signing info has to live in gradle.properties to be readable, and that's a trap:

SENRSL:sample senrsl$ git log

SENRSL:sample senrsl$ git reset --hard 目标

SENRSL:sample senrsl$ git push origin HEAD --force

This resets both the local and the GitHub history to the earlier state; then copy the changes back in....

7. Signing configuration

By default it must sit in gradle.properties, or the signing plugin can't read it.

What actually happens, though, is that it's read from ext.

So it can just as well live in local.properties and be loaded into ext by hand:

ext["signing.keyId"] = properties.getProperty("signing.keyId")
ext["signing.password"] = properties.getProperty("signing.password")
ext["signing.secretKeyRingFile"] = properties.getProperty("signing.secretKeyRingFile")

And that works.

--
senRsl
2021-08-09 15:16:19

ExoPlayer Study 07: The AudioProcessor

Thursday, June 3, 2021



Inheritance hierarchy:

(AudioProcessor.png)

The interface definition:

/**
 * Interface for audio processors, which take audio data as input and transform it, potentially
 * modifying its channel count, encoding and/or sample rate.
 *
 * <p>In addition to being able to modify the format of audio, implementations may allow parameters
 * to be set that affect the output audio and whether the processor is active/inactive.
 */
public interface AudioProcessor {

  /** PCM audio format that may be handled by an audio processor. */
  final class AudioFormat {
    public static final AudioFormat NOT_SET = // not set: every field is -1
        new AudioFormat(
            /* sampleRate= */ Format.NO_VALUE,
            /* channelCount= */ Format.NO_VALUE,
            /* encoding= */ Format.NO_VALUE);

    /** The sample rate in Hertz. */
    public final int sampleRate;
    /** The number of interleaved channels. */
    public final int channelCount;
    /** The type of linear PCM encoding. */
    @C.PcmEncoding public final int encoding;
    /** The number of bytes used to represent one audio frame. */
    public final int bytesPerFrame;

    public AudioFormat(int sampleRate, int channelCount, @C.PcmEncoding int encoding) {
      this.sampleRate = sampleRate;
      this.channelCount = channelCount;
      this.encoding = encoding;
      bytesPerFrame =
          Util.isEncodingLinearPcm(encoding)
              ? Util.getPcmFrameSize(encoding, channelCount)
              : Format.NO_VALUE;
      Logger.w("AudioProcessor", sampleRate, channelCount, encoding, bytesPerFrame);
    }

    @Override
    public String toString() {
      return "AudioProcessor AudioFormat["
          + "sampleRate=" + sampleRate
          + ", channelCount=" + channelCount
          + ", encoding=" + encoding
          + ']';
    }
  }

  /** Exception thrown when a processor can't be configured for a given input audio format. */
  final class UnhandledAudioFormatException extends Exception {
    public UnhandledAudioFormatException(AudioFormat inputAudioFormat) {
      super("Unhandled format: " + inputAudioFormat);
    }
  }

  /** An empty, direct {@link ByteBuffer}. */
  ByteBuffer EMPTY_BUFFER = ByteBuffer.allocateDirect(0).order(ByteOrder.nativeOrder());

  /**
   * Configures the processor to process input audio with the specified format. After calling this
   * method, call {@link #isActive()} to determine whether the audio processor is active. Returns
   * the configured output audio format if this instance is active.
   *
   * <p>After calling this method, it is necessary to {@link #flush()} the processor to apply the
   * new configuration. Before applying the new configuration, it is safe to queue input and get
   * output in the old input/output formats. Call {@link #queueEndOfStream()} when no more input
   * will be supplied in the old input format.
   *
   * @param inputAudioFormat The format of audio that will be queued after the next call to {@link
   *     #flush()}.
   * @return The configured output audio format if this instance is {@link #isActive() active}.
   * @throws UnhandledAudioFormatException Thrown if the specified format can't be handled as input.
   */
  AudioFormat configure(AudioFormat inputAudioFormat) throws UnhandledAudioFormatException;

  /** Returns whether the processor is configured and will process input buffers. */
  boolean isActive();

  /**
   * Queues audio data between the position and limit of the input {@code buffer} for processing.
   * {@code buffer} must be a direct byte buffer with native byte order. Its contents are treated as
   * read-only. Its position will be advanced by the number of bytes consumed (which may be zero).
   * The caller retains ownership of the provided buffer. Calling this method invalidates any
   * previous buffer returned by {@link #getOutput()}.
   *
   * @param buffer The input buffer to process.
   */
  void queueInput(ByteBuffer buffer);

  /**
   * Queues an end of stream signal. After this method has been called,
   * {@link #queueInput(ByteBuffer)} may not be called until after the next call to
   * {@link #flush()}. Calling {@link #getOutput()} will return any remaining output data. Multiple
   * calls may be required to read all of the remaining output data. {@link #isEnded()} will return
   * {@code true} once all remaining output data has been read.
   */
  void queueEndOfStream();

  /**
   * Returns a buffer containing processed output data between its position and limit. The buffer
   * will always be a direct byte buffer with native byte order. Calling this method invalidates any
   * previously returned buffer. The buffer will be empty if no output is available.
   *
   * @return A buffer containing processed output data between its position and limit.
   */
  ByteBuffer getOutput();

  /**
   * Returns whether this processor will return no more output from {@link #getOutput()} until it
   * has been {@link #flush()}ed and more input has been queued.
   */
  boolean isEnded();

  /**
   * Clears any buffered data and pending output. If the audio processor is active, also prepares
   * the audio processor to receive a new stream of input in the last configured (pending) format.
   */
  void flush();

  /** Resets the processor to its unconfigured state, releasing any resources. */
  void reset();
}

In the demo, the actual usage is in DefaultAudioSink.

The entry point is DefaultRenderersFactory:

/**
 * Builds an {@link AudioSink} to which the audio renderers will output.
 *
 * @param context The {@link Context} associated with the player.
 * @param enableFloatOutput Whether to enable use of floating point audio output, if available.
 * @param enableAudioTrackPlaybackParams Whether to enable setting playback speed using {@link
 *     android.media.AudioTrack#setPlaybackParams(PlaybackParams)}, if supported.
 * @param enableOffload Whether to enable use of audio offload for supported formats, if
 *     available.
 * @return The {@link AudioSink} to which the audio renderers will output. May be {@code null} if
 *     no audio renderers are required. If {@code null} is returned then {@link
 *     #buildAudioRenderers} will not be called.
 */
@Nullable
protected AudioSink buildAudioSink(
    Context context,
    boolean enableFloatOutput,
    boolean enableAudioTrackPlaybackParams,
    boolean enableOffload) {
  return new DefaultAudioSink(
      AudioCapabilities.getCapabilities(context),
      new DefaultAudioProcessorChain(),
      enableFloatOutput,
      enableAudioTrackPlaybackParams,
      enableOffload);
}

Driving the AudioProcessors just means looping over each processor....

/**
 * Creates a new default audio sink, optionally using float output for high resolution PCM and
 * with the specified {@code audioProcessorChain}.
 *
 * @param audioCapabilities The audio capabilities for playback on this device. May be null if the
 *     default capabilities (no encoded audio passthrough support) should be assumed.
 * @param audioProcessorChain An {@link AudioProcessorChain} which is used to apply playback
 *     parameters adjustments. The instance passed in must not be reused in other sinks.
 * @param enableFloatOutput Whether to enable 32-bit float output. Where possible, 32-bit float
 *     output will be used if the input is 32-bit float, and also if the input is high resolution
 *     (24-bit or 32-bit) integer PCM. Float output is supported from API level 21. Audio
 *     processing (for example, speed adjustment) will not be available when float output is in
 *     use.
 * @param enableAudioTrackPlaybackParams Whether to enable setting playback speed using {@link
 *     android.media.AudioTrack#setPlaybackParams(PlaybackParams)}, if supported.
 * @param enableOffload Whether to enable audio offload. If an audio format can be both played
 *     with offload and encoded audio passthrough, it will be played in offload. Audio offload is
 *     supported from API level 29. Most Android devices can only support one offload {@link
 *     android.media.AudioTrack} at a time and can invalidate it at any time. Thus an app can
 *     never be guaranteed that it will be able to play in offload. Audio processing (for example,
 *     speed adjustment) will not be available when offload is in use.
 */
public DefaultAudioSink(
    @Nullable AudioCapabilities audioCapabilities,
    AudioProcessorChain audioProcessorChain,
    boolean enableFloatOutput,
    boolean enableAudioTrackPlaybackParams,
    boolean enableOffload) {
  this.audioCapabilities = audioCapabilities;
  this.audioProcessorChain = Assertions.checkNotNull(audioProcessorChain);
  this.enableFloatOutput = Util.SDK_INT >= 21 && enableFloatOutput;
  this.enableAudioTrackPlaybackParams = Util.SDK_INT >= 23 && enableAudioTrackPlaybackParams;
  this.enableOffload = Util.SDK_INT >= 29 && enableOffload;
  releasingConditionVariable = new ConditionVariable(true);
  audioTrackPositionTracker = new AudioTrackPositionTracker(new PositionTrackerListener());
  channelMappingAudioProcessor = new ChannelMappingAudioProcessor();
  trimmingAudioProcessor = new TrimmingAudioProcessor();
  ArrayList<AudioProcessor> toIntPcmAudioProcessors = new ArrayList<>();
  Collections.addAll(
      toIntPcmAudioProcessors,
      new ResamplingAudioProcessor(),
      channelMappingAudioProcessor,
      trimmingAudioProcessor);
  Collections.addAll(toIntPcmAudioProcessors, audioProcessorChain.getAudioProcessors());
  toIntPcmAvailableAudioProcessors = toIntPcmAudioProcessors.toArray(new AudioProcessor[0]);
  toFloatPcmAvailableAudioProcessors = new AudioProcessor[] {new FloatResamplingAudioProcessor()};
  volume = 1f;
  audioAttributes = AudioAttributes.DEFAULT;
  audioSessionId = C.AUDIO_SESSION_ID_UNSET;
  auxEffectInfo = new AuxEffectInfo(AuxEffectInfo.NO_AUX_EFFECT_ID, 0f);
  mediaPositionParameters =
      new MediaPositionParameters(
          PlaybackParameters.DEFAULT, DEFAULT_SKIP_SILENCE, /* mediaTimeUs= */ 0,
          /* audioTrackPositionUs= */ 0);
  audioTrackPlaybackParameters = PlaybackParameters.DEFAULT;
  drainingAudioProcessorIndex = C.INDEX_UNSET;
  activeAudioProcessors = new AudioProcessor[0];
  outputBuffers = new ByteBuffer[0];
  mediaPositionParametersCheckpoints = new ArrayDeque<>();
  initializationExceptionPendingExceptionHolder =
      new PendingExceptionHolder<>(AUDIO_TRACK_RETRY_DURATION_MS);
  writeExceptionPendingExceptionHolder =
      new PendingExceptionHolder<>(AUDIO_TRACK_RETRY_DURATION_MS);
}

It sets up two AudioProcessor arrays, toIntPcmAvailableAudioProcessors and toFloatPcmAvailableAudioProcessors,

matching the two decoded output types, since plenty of devices don't support float output.

Then on to configure():

@Override
public void configure(Format inputFormat, int specifiedBufferSize, @Nullable int[] outputChannels)
    throws ConfigurationException {
  int inputPcmFrameSize;
  @Nullable AudioProcessor[] availableAudioProcessors;
  // ...
  Logger.w(TAG, "configure()", inputFormat.toString(), specifiedBufferSize, outputChannels);
  // Format(2, null, null, audio/ac3, null, -1, en, [-1, -1, -1.0], [6, 48000]), 0, null
  if (MimeTypes.AUDIO_RAW.equals(inputFormat.sampleMimeType)) {
    Assertions.checkArgument(Util.isEncodingLinearPcm(inputFormat.pcmEncoding));
    inputPcmFrameSize = Util.getPcmFrameSize(inputFormat.pcmEncoding, inputFormat.channelCount);
    availableAudioProcessors =
        shouldUseFloatOutput(inputFormat.pcmEncoding)
            ? toFloatPcmAvailableAudioProcessors
            : toIntPcmAvailableAudioProcessors;
    // ...
    AudioProcessor.AudioFormat outputFormat =
        new AudioProcessor.AudioFormat(
            inputFormat.sampleRate, inputFormat.channelCount, inputFormat.pcmEncoding);
    for (AudioProcessor audioProcessor : availableAudioProcessors) {
      try {
        AudioProcessor.AudioFormat nextFormat = audioProcessor.configure(outputFormat);
        if (audioProcessor.isActive()) {
          outputFormat = nextFormat;
          Logger.w(TAG, "configure", outputFormat);
        }
      } catch (UnhandledAudioFormatException e) {
        throw new ConfigurationException(e, inputFormat);
      }
    }
    outputMode = OUTPUT_MODE_PCM;
    outputEncoding = outputFormat.encoding;
    outputSampleRate = outputFormat.sampleRate;
    outputChannelConfig = Util.getAudioTrackChannelConfig(outputFormat.channelCount);
    outputPcmFrameSize = Util.getPcmFrameSize(outputEncoding, outputFormat.channelCount);
  } else {
    inputPcmFrameSize = C.LENGTH_UNSET;
    availableAudioProcessors = new AudioProcessor[0];
    outputSampleRate = inputFormat.sampleRate;
    outputPcmFrameSize = C.LENGTH_UNSET;
    Logger.w(TAG, "configure() x2", enableOffload,
        isOffloadedPlaybackSupported(inputFormat, audioAttributes)); // false, false
    if (enableOffload && isOffloadedPlaybackSupported(inputFormat, audioAttributes)) {
      outputMode = OUTPUT_MODE_OFFLOAD;
      outputEncoding =
          MimeTypes.getEncoding(
              Assertions.checkNotNull(inputFormat.sampleMimeType), inputFormat.codecs);
      outputChannelConfig = Util.getAudioTrackChannelConfig(inputFormat.channelCount);
    } else {
      outputMode = OUTPUT_MODE_PASSTHROUGH; // passthrough output
      @Nullable Pair<Integer, Integer> encodingAndChannelConfig =
          getEncodingAndChannelConfigForPassthrough(inputFormat, audioCapabilities);
      Logger.w("passthrough x2", encodingAndChannelConfig); // Pair{5 252}
      if (encodingAndChannelConfig == null) {
        throw new ConfigurationException(
            "Unable to configure passthrough for: " + inputFormat, inputFormat);
      }
      outputEncoding = encodingAndChannelConfig.first; // 5
      outputChannelConfig = encodingAndChannelConfig.second; // 252
    }
  }
  // ...
}

In other words, the AudioProcessors only get configured when inputFormat.sampleMimeType is audio/raw,

i.e. on the decoded-output path.

Otherwise it automatically goes down audio offload or passthrough.

Then to the methods doing the actual work.

The AudioSink interface:

/**
 * Attempts to process data from a {@link ByteBuffer}, starting from its current position and
 * ending at its limit (exclusive). The position of the {@link ByteBuffer} is advanced by the
 * number of bytes that were handled. {@link Listener#onPositionDiscontinuity()} will be called if
 * {@code presentationTimeUs} is discontinuous with the last buffer handled since the last reset.
 *
 * <p>Returns whether the data was handled in full. If the data was not handled in full then the
 * same {@link ByteBuffer} must be provided to subsequent calls until it has been fully consumed,
 * except in the case of an intervening call to {@link #flush()} (or to {@link #configure(Format,
 * int, int[])} that causes the sink to be flushed).
 *
 * @param buffer The buffer containing audio data.
 * @param presentationTimeUs The presentation timestamp of the buffer in microseconds.
 * @param encodedAccessUnitCount The number of encoded access units in the buffer, or 1 if the
 *     buffer contains PCM audio. This allows batching multiple encoded access units in one
 *     buffer.
 * @return Whether the buffer was handled fully.
 * @throws InitializationException If an error occurs initializing the sink.
 * @throws WriteException If an error occurs writing the audio data.
 */
boolean handleBuffer(ByteBuffer buffer, long presentationTimeUs, int encodedAccessUnitCount)
    throws InitializationException, WriteException;

The default implementation:

@Override
@SuppressWarnings("ReferenceEquality")
public boolean handleBuffer(
    ByteBuffer buffer, long presentationTimeUs, int encodedAccessUnitCount)
    throws InitializationException, WriteException {
  Assertions.checkArgument(inputBuffer == null || buffer == inputBuffer);
  if (pendingConfiguration != null) {
    // ...
    // Re-apply playback parameters.
    applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
  }
  if (!isAudioTrackInitialized()) {
    try {
      initializeAudioTrack(); // initialize the AudioTrack
    } catch (InitializationException e) {
      // ...
      return false;
    }
  }
  initializationExceptionPendingExceptionHolder.clear();
  if (startMediaTimeUsNeedsInit) {
    // ...
    applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
    if (playing) {
      play(); // the AudioTrack starts playing
    }
  }
  if (!audioTrackPositionTracker.mayHandleBuffer(getWrittenFrames())) {
    return false;
  }
  if (inputBuffer == null) {
    // We are seeing this buffer for the first time.
    Assertions.checkArgument(buffer.order() == ByteOrder.LITTLE_ENDIAN);
    if (!buffer.hasRemaining()) {
      // The buffer is empty.
      return true;
    }
    // ...
    if (afterDrainParameters != null) {
      if (!drainToEndOfStream()) {
        // Don't process any more input until draining completes.
        return false;
      }
      applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
      afterDrainParameters = null;
    }
    // Check that presentationTimeUs is consistent with the expected value.
    long expectedPresentationTimeUs =
        startMediaTimeUs
            + configuration.inputFramesToDurationUs(
                getSubmittedFrames() - trimmingAudioProcessor.getTrimmedFrameCount());
    if (!startMediaTimeUsNeedsSync
        && Math.abs(expectedPresentationTimeUs - presentationTimeUs) > 200000) {
      Log.e(TAG, "Discontinuity detected [expected " + expectedPresentationTimeUs
          + ", got " + presentationTimeUs + "]");
      startMediaTimeUsNeedsSync = true;
    }
    if (startMediaTimeUsNeedsSync) {
      if (!drainToEndOfStream()) {
        // Don't update timing until pending AudioProcessor buffers are completely drained.
        return false;
      }
      // Adjust startMediaTimeUs to be consistent with the current buffer's start time and the
      // number of bytes submitted.
      long adjustmentUs = presentationTimeUs - expectedPresentationTimeUs;
      startMediaTimeUs += adjustmentUs;
      startMediaTimeUsNeedsSync = false;
      // Re-apply playback parameters because the startMediaTimeUs changed.
      applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
      if (listener != null && adjustmentUs != 0) {
        listener.onPositionDiscontinuity();
      }
    }
    if (configuration.outputMode == OUTPUT_MODE_PCM) {
      submittedPcmBytes += buffer.remaining();
    } else {
      submittedEncodedFrames += framesPerEncodedSample * encodedAccessUnitCount;
    }
    inputBuffer = buffer;
    inputBufferAccessUnitCount = encodedAccessUnitCount;
  }
  processBuffers(presentationTimeUs); // run the audio processors created at build time (see part 02)
  if (!inputBuffer.hasRemaining()) {
    inputBuffer = null;
    inputBufferAccessUnitCount = 0;
    return true;
  }
  if (audioTrackPositionTracker.isStalled(getWrittenFrames())) {
    Log.w(TAG, "Resetting stalled audio track");
    flush();
    return true;
  }
  return false;
}

The audio processors are touched at two call sites: one sets them up, and once set up they do the processing.

Setup:

private void setupAudioProcessors() {
  AudioProcessor[] audioProcessors = configuration.availableAudioProcessors;
  ArrayList<AudioProcessor> newAudioProcessors = new ArrayList<>();
  for (AudioProcessor audioProcessor : audioProcessors) {
    if (audioProcessor.isActive()) {
      newAudioProcessors.add(audioProcessor);
    } else {
      audioProcessor.flush();
    }
  }
  int count = newAudioProcessors.size();
  activeAudioProcessors = newAudioProcessors.toArray(new AudioProcessor[count]);
  outputBuffers = new ByteBuffer[count];
  flushAudioProcessors();
}

private void flushAudioProcessors() {
  for (int i = 0; i < activeAudioProcessors.length; i++) {
    AudioProcessor audioProcessor = activeAudioProcessors[i];
    audioProcessor.flush();
    outputBuffers[i] = audioProcessor.getOutput();
  }
}

Processing:

private void processBuffers(long avSyncPresentationTimeUs) throws WriteException {
  int count = activeAudioProcessors.length;
  int index = count;
  Logger.w(TAG, "processBuffers", avSyncPresentationTimeUs, count); // 0,0 | 32000,0 | 64000,0 | 96000,0
  while (index >= 0) {
    ByteBuffer input =
        index > 0
            ? outputBuffers[index - 1]
            : (inputBuffer != null ? inputBuffer : AudioProcessor.EMPTY_BUFFER);
    if (index == count) {
      writeBuffer(input, avSyncPresentationTimeUs);
    } else {
      AudioProcessor audioProcessor = activeAudioProcessors[index];
      if (index > drainingAudioProcessorIndex) {
        audioProcessor.queueInput(input); // input
      }
      ByteBuffer output = audioProcessor.getOutput(); // output
      outputBuffers[index] = output;
      if (output.hasRemaining()) {
        // Handle the output as input to the next audio processor or the AudioTrack.
        index++;
        continue;
      }
    }
    if (input.hasRemaining()) {
      // The input wasn't consumed and no output was produced, so give up for now.
      return;
    }
    // Get more input from upstream.
    index--;
  }
}

This logic, though....

When index == count, the audio is written out to the AudioTrack, then index--;

once a processor has produced output, index++.


So DefaultAudioSink drives the AudioProcessor interface like this.

Enter Di Renjie.

Di Renjie: I believe the truth of the matter is as follows.jpg

When new DefaultAudioSink() runs, several AudioProcessor implementations are added, i.e. several different audio-processor implementations go into the toIntPcmAudioProcessors array;

then in DefaultAudioSink.configure(), if this is decoded output, it first checks for float output: if it's toIntPcm rather than toFloatPcm, availableAudioProcessors is pointed at the toIntPcm array. That array is looped, calling each implementation's configure(), which returns an AudioProcessor.AudioFormat; if AudioProcessor.isActive(), the returned AudioFormat is used for new Configuration(), the big configuration that everything later hangs off.

Next, when the data stream is handled, setupAudioProcessors() loops over the implementations again, this time configuration.availableAudioProcessors, checks isActive() on each, collects the enabled ones into the activeAudioProcessors array, then loops calling activeAudioProcessors[i].flush() and keeps outputBuffers[i] = activeAudioProcessors[i].getOutput(), holding on to each processor's output channel.

Then, at last, the stream is actually processed: again looping over the sub-processors, fetching each one's previous output via outputBuffers[index], calling audioProcessor.queueInput() and audioProcessor.getOutput(), and writing the result to the AudioTrack....

The AudioProcessor call order, simply put: configure() -> isActive() -> flush() -> queueInput() -> getOutput().

Back to the configure() method:

Configuration pendingConfiguration =
    new Configuration(
        inputFormat,
        inputPcmFrameSize,
        outputMode,
        outputPcmFrameSize,
        outputSampleRate,
        outputChannelConfig,
        outputEncoding,
        specifiedBufferSize,
        enableAudioTrackPlaybackParams,
        availableAudioProcessors);

For decoded output, availableAudioProcessors is toIntPcmAvailableAudioProcessors or its toFloatPcm counterpart;

for non-decoded output, it's empty.

Only with availableAudioProcessors populated does the later audio-processor work happen.

So for passthrough output, this would be the place to hook in an audio processor....
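(To illustrate the hook points, a do-nothing processor skeleton on top of BaseAudioProcessor; this is a sketch, not code from the demo:)

/** A pass-through processor: accepts 16-bit PCM and copies input to output unchanged. */
final class NoOpAudioProcessor extends BaseAudioProcessor {
  @Override
  public AudioFormat onConfigure(AudioFormat inputAudioFormat)
      throws UnhandledAudioFormatException {
    if (inputAudioFormat.encoding != C.ENCODING_PCM_16BIT) {
      throw new UnhandledAudioFormatException(inputAudioFormat);
    }
    return inputAudioFormat; // same format out: the processor reports itself active
  }

  @Override
  public void queueInput(ByteBuffer inputBuffer) {
    int remaining = inputBuffer.remaining();
    if (remaining == 0) {
      return;
    }
    ByteBuffer buffer = replaceOutputBuffer(remaining); // BaseAudioProcessor's output buffer
    buffer.put(inputBuffer); // copy straight through
    buffer.flip();
  }
}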

For decoded output, the constructor already wires up the default processors at initialization:

channelMappingAudioProcessor = new ChannelMappingAudioProcessor();
trimmingAudioProcessor = new TrimmingAudioProcessor();
ArrayList<AudioProcessor> toIntPcmAudioProcessors = new ArrayList<>();
Collections.addAll(
    toIntPcmAudioProcessors,
    new ResamplingAudioProcessor(),
    channelMappingAudioProcessor,
    trimmingAudioProcessor);
Collections.addAll(toIntPcmAudioProcessors, audioProcessorChain.getAudioProcessors());
toIntPcmAvailableAudioProcessors = toIntPcmAudioProcessors.toArray(new AudioProcessor[0]);
toFloatPcmAvailableAudioProcessors = new AudioProcessor[] {new FloatResamplingAudioProcessor()};

They are, respectively:

ResamplingAudioProcessor: resampling

ChannelMappingAudioProcessor: channel mapping

TrimmingAudioProcessor: trimming?

plus whatever comes in via audioProcessorChain.getAudioProcessors():

/**
 * Creates a new default chain of audio processors, with the user-defined {@code
 * audioProcessors} applied before silence skipping and speed adjustment processors.
 */
public DefaultAudioProcessorChain(AudioProcessor... audioProcessors) {
  this(audioProcessors, new SilenceSkippingAudioProcessor(), new SonicAudioProcessor());
}

In order of addition:

SilenceSkippingAudioProcessor: silence skipping

SonicAudioProcessor: uses the Sonic library to modify audio speed/pitch/sample rate

So the demo applies these five audio processors in total to decoded output.

1) ResamplingAudioProcessor

/**
 * An {@link AudioProcessor} that converts different PCM audio encodings to 16-bit integer PCM. The
 * following encodings are supported as input:
 *
 * <ul>
 *   <li>{@link C#ENCODING_PCM_8BIT}
 *   <li>{@link C#ENCODING_PCM_16BIT} ({@link #isActive()} will return {@code false})
 *   <li>{@link C#ENCODING_PCM_16BIT_BIG_ENDIAN}
 *   <li>{@link C#ENCODING_PCM_24BIT}
 *   <li>{@link C#ENCODING_PCM_32BIT}
 *   <li>{@link C#ENCODING_PCM_FLOAT}
 * </ul>
 */
/* package */ final class ResamplingAudioProcessor extends BaseAudioProcessor {}

The implementation is blunt: it only overrides onConfigure() and queueInput().

@Override
public AudioFormat onConfigure(AudioFormat inputAudioFormat) throws UnhandledAudioFormatException {
  @C.PcmEncoding int encoding = inputAudioFormat.encoding;
  if (encoding != C.ENCODING_PCM_8BIT
      && encoding != C.ENCODING_PCM_16BIT
      && encoding != C.ENCODING_PCM_16BIT_BIG_ENDIAN
      && encoding != C.ENCODING_PCM_24BIT
      && encoding != C.ENCODING_PCM_32BIT
      && encoding != C.ENCODING_PCM_FLOAT) {
    throw new UnhandledAudioFormatException(inputAudioFormat);
  }
  return encoding != C.ENCODING_PCM_16BIT
      ? new AudioFormat(
          inputAudioFormat.sampleRate, inputAudioFormat.channelCount, C.ENCODING_PCM_16BIT)
      : AudioFormat.NOT_SET;
}

In onConfigure(), unsupported formats throw immediately; anything that isn't 16-bit comes back as 16-bit....

onConfigure() is defined on the BaseAudioProcessor abstract class, as the extension point behind the AudioProcessor interface's configure().

The BaseAudioProcessor abstract class:

@Override
public final AudioFormat configure(AudioFormat inputAudioFormat)
    throws UnhandledAudioFormatException {
  pendingInputAudioFormat = inputAudioFormat;
  pendingOutputAudioFormat = onConfigure(inputAudioFormat);
  return isActive() ? pendingOutputAudioFormat : AudioFormat.NOT_SET;
}

/** Called when the processor is configured for a new input format. */
protected AudioFormat onConfigure(AudioFormat inputAudioFormat)
    throws UnhandledAudioFormatException {
  return AudioFormat.NOT_SET;
}

Resampling happens as data is queued:

@Override
public void queueInput(ByteBuffer inputBuffer) {
  // Prepare the output buffer.
  int position = inputBuffer.position();
  int limit = inputBuffer.limit();
  int size = limit - position;
  int resampledSize;
  switch (inputAudioFormat.encoding) {
    case C.ENCODING_PCM_8BIT:
      resampledSize = size * 2;
      break;
    case C.ENCODING_PCM_16BIT_BIG_ENDIAN:
      resampledSize = size;
      break;
    // ...
  }
  // Resample the little endian input and update the input/output buffers.
  ByteBuffer buffer = replaceOutputBuffer(resampledSize);
  switch (inputAudioFormat.encoding) {
    case C.ENCODING_PCM_8BIT:
      // 8 -> 16 bit resampling. Shift each byte from [0, 256) to [-128, 128) and scale up.
      for (int i = position; i < limit; i++) {
        buffer.put((byte) 0);
        buffer.put((byte) ((inputBuffer.get(i) & 0xFF) - 128));
      }
      break;
    case C.ENCODING_PCM_16BIT_BIG_ENDIAN:
      // Big endian to little endian resampling. Swap the byte order.
      for (int i = position; i < limit; i += 2) {
        buffer.put(inputBuffer.get(i + 1));
        buffer.put(inputBuffer.get(i));
      }
      break;
    // ...
  }
  inputBuffer.position(inputBuffer.limit());
  buffer.flip();
}

A blunt, direct conversion.


2) ChannelMappingAudioProcessor

/**
 * An {@link AudioProcessor} that applies a mapping from input channels onto specified output
 * channels. This can be used to reorder, duplicate or discard channels.
 */
/* package */ final class ChannelMappingAudioProcessor extends BaseAudioProcessor {}

This one remaps input channels onto specified output channels: reorder, duplicate, or drop. In the demo it turned 8 channels into 6, and stereo into 6....

At configure() time, the output channels of the incoming inputAudioFormat are switched over to the configured pendingOutputChannels.

/**
 * Resets the channel mapping. After calling this method, call {@link #configure(AudioFormat)} to
 * start using the new channel map.
 *
 * @param outputChannels The mapping from input to output channel indices, or {@code null} to
 *     leave the input unchanged.
 * @see AudioSink#configure(com.google.android.exoplayer2.Format, int, int[])
 */
public void setChannelMap(@Nullable int[] outputChannels) {
  pendingOutputChannels = outputChannels;
}

@Override
public AudioFormat onConfigure(AudioFormat inputAudioFormat) throws UnhandledAudioFormatException {
  @Nullable int[] outputChannels = pendingOutputChannels;
  // ...
  return active
      ? new AudioFormat(inputAudioFormat.sampleRate, outputChannels.length, C.ENCODING_PCM_16BIT)
      : AudioFormat.NOT_SET;
}

When data flows in, each frame is rewritten by looping over the new target channels:

@Override
public void queueInput(ByteBuffer inputBuffer) {
  int[] outputChannels = Assertions.checkNotNull(this.outputChannels);
  int position = inputBuffer.position();
  int limit = inputBuffer.limit();
  int frameCount = (limit - position) / inputAudioFormat.bytesPerFrame;
  int outputSize = frameCount * outputAudioFormat.bytesPerFrame;
  ByteBuffer buffer = replaceOutputBuffer(outputSize);
  while (position < limit) {
    for (int channelIndex : outputChannels) {
      buffer.putShort(inputBuffer.getShort(position + 2 * channelIndex));
    }
    position += inputAudioFormat.bytesPerFrame;
  }
  inputBuffer.position(limit);
  buffer.flip();
}
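Each output frame is assembled by reading one 16-bit sample per mapped input channel, which is why reordering and duplication come for free. A plain-Java sketch of the same per-frame copy (ChannelRemap is a hypothetical helper, no ExoPlayer types):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class ChannelRemap {
  /**
   * Remaps 16-bit PCM frames: output channel i is taken from input channel map[i].
   * map = {1, 0} swaps stereo L/R; map = {0, 0} duplicates the left channel.
   */
  static ByteBuffer remap(ByteBuffer in, int inChannels, int[] map) {
    int bytesPerInFrame = inChannels * 2;
    int frames = in.remaining() / bytesPerInFrame;
    ByteBuffer out = ByteBuffer.allocate(frames * map.length * 2).order(in.order());
    int position = in.position();
    for (int f = 0; f < frames; f++, position += bytesPerInFrame) {
      for (int channelIndex : map) {
        out.putShort(in.getShort(position + 2 * channelIndex)); // same indexing as above
      }
    }
    out.flip();
    return out;
  }

  public static void main(String[] args) {
    ByteBuffer stereo = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
    stereo.putShort((short) 100).putShort((short) 200); // frame 0: L=100, R=200
    stereo.putShort((short) 300).putShort((short) 400); // frame 1: L=300, R=400
    stereo.flip();
    ByteBuffer swapped = remap(stereo, 2, new int[] {1, 0});
    while (swapped.hasRemaining()) {
      System.out.print(swapped.getShort() + " "); // prints: 200 100 400 300
    }
  }
}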


3) TrimmingAudioProcessor

/** Audio processor for trimming samples from the start/end of data. */
/* package */ final class TrimmingAudioProcessor extends BaseAudioProcessor {}

Setting the start/end trim frame counts:

/**
 * Sets the number of audio frames to trim from the start and end of audio passed to this
 * processor. After calling this method, call {@link #configure(AudioFormat)} to apply the new
 * trimming frame counts.
 *
 * @param trimStartFrames The number of audio frames to trim from the start of audio.
 * @param trimEndFrames The number of audio frames to trim from the end of audio.
 * @see AudioSink#configure(com.google.android.exoplayer2.Format, int, int[])
 */
public void setTrimFrameCount(int trimStartFrames, int trimEndFrames) {
  this.trimStartFrames = trimStartFrames;
  this.trimEndFrames = trimEndFrames;
}

It is wired up in the DefaultAudioSink.configure method:

trimmingAudioProcessor.setTrimFrameCount(
    inputFormat.encoderDelay, inputFormat.encoderPadding);

The definitions of these two fields:

/**
 * The number of frames to trim from the start of the decoded audio stream, or 0 if not
 * applicable.
 */
public final int encoderDelay;

/**
 * The number of frames to trim from the end of the decoded audio stream, or 0 if not applicable.
 */
public final int encoderPadding;


Configuration:

@Override
public AudioFormat onConfigure(AudioFormat inputAudioFormat)
    throws UnhandledAudioFormatException {
  if (inputAudioFormat.encoding != OUTPUT_ENCODING) {
    throw new UnhandledAudioFormatException(inputAudioFormat);
  }
  reconfigurationPending = true;
  return trimStartFrames != 0 || trimEndFrames != 0 ? inputAudioFormat : AudioFormat.NOT_SET;
}

Before DefaultAudioSink.handleBuffer processes the stream, the duration of the trimmed frames is subtracted:

// Check that presentationTimeUs is consistent with the expected value.
long expectedPresentationTimeUs =
    startMediaTimeUs
        + configuration.inputFramesToDurationUs(
            getSubmittedFrames() - trimmingAudioProcessor.getTrimmedFrameCount());
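inputFramesToDurationUs is plain sample-rate arithmetic: durationUs = frames * 1,000,000 / sampleRate. A minimal sketch with assumed numbers (FrameMath is a hypothetical helper, not the actual DefaultAudioSink code; 2112 frames is a typical AAC encoder delay):

public final class FrameMath {
  /** Microseconds of audio represented by frameCount frames at sampleRate Hz. */
  static long framesToDurationUs(long frameCount, int sampleRate) {
    return frameCount * 1_000_000L / sampleRate;
  }

  public static void main(String[] args) {
    // Trimming 2112 frames at 44100 Hz shifts the expected presentation time by ~48 ms.
    System.out.println(framesToDurationUs(2112, 44100)); // prints 47891
  }
}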

The trimming itself:

@Override
public void queueInput(ByteBuffer inputBuffer) {
  int position = inputBuffer.position();
  int limit = inputBuffer.limit();
  int remaining = limit - position;
  if (remaining == 0) {
    return;
  }
  // Trim any pending start bytes from the input buffer.
  int trimBytes = min(remaining, pendingTrimStartBytes); // take the smaller of the two
  trimmedFrameCount += trimBytes / inputAudioFormat.bytesPerFrame;
  pendingTrimStartBytes -= trimBytes;
  inputBuffer.position(position + trimBytes);
  if (pendingTrimStartBytes > 0) {
    // Nothing to output yet.
    return;
  }
  remaining -= trimBytes;
  // endBuffer must be kept as full as possible, so that we trim the right amount of media if we
  // don't receive any more input. After taking into account the number of bytes needed to keep
  // endBuffer as full as possible, the output should be any surplus bytes currently in endBuffer
  // followed by any surplus bytes in the new inputBuffer.
  int remainingBytesToOutput = endBufferSize + remaining - endBuffer.length;
  ByteBuffer buffer = replaceOutputBuffer(remainingBytesToOutput);
  // Output from endBuffer.
  int endBufferBytesToOutput = Util.constrainValue(remainingBytesToOutput, 0, endBufferSize);
  buffer.put(endBuffer, 0, endBufferBytesToOutput);
  remainingBytesToOutput -= endBufferBytesToOutput;
  // Output from inputBuffer, restoring its limit afterwards.
  int inputBufferBytesToOutput = Util.constrainValue(remainingBytesToOutput, 0, remaining);
  inputBuffer.limit(inputBuffer.position() + inputBufferBytesToOutput);
  buffer.put(inputBuffer);
  inputBuffer.limit(limit);
  remaining -= inputBufferBytesToOutput;
  // Compact endBuffer, then repopulate it using the new input.
  endBufferSize -= endBufferBytesToOutput;
  System.arraycopy(endBuffer, endBufferBytesToOutput, endBuffer, 0, endBufferSize);
  inputBuffer.get(endBuffer, endBufferSize, remaining);
  endBufferSize += remaining;
  buffer.flip();
}


4) SilenceSkippingAudioProcessor

/**
 * An {@link AudioProcessor} that skips silence in the input stream. Input and output are 16-bit
 * PCM.
 */
public final class SilenceSkippingAudioProcessor extends BaseAudioProcessor {}

Three states are defined:

private @interface State {}

/** State when the input is not silent. */
private static final int STATE_NOISY = 0;
/** State when the input may be silent but we haven't read enough yet to know. */
private static final int STATE_MAYBE_SILENT = 1;
/** State when the input is silent. */
private static final int STATE_SILENT = 2;

The default configuration:

/** Creates a new silence skipping audio processor. */
public SilenceSkippingAudioProcessor() {
  this(
      // 150 ms: the minimum duration audio must stay below silenceThresholdLevel for that
      // section to be classified as silent, in microseconds.
      DEFAULT_MINIMUM_SILENCE_DURATION_US,
      // 20 ms: silence kept as padding around non-silent sections, in microseconds; must not
      // exceed minimumSilenceDurationUs.
      DEFAULT_PADDING_SILENCE_US,
      // 1024: the absolute level below which an individual PCM sample is classified as silent.
      DEFAULT_SILENCE_THRESHOLD_LEVEL);
}
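The threshold is compared against the absolute value of each 16-bit sample. A standalone sketch of that classification (SilenceCheck is a hypothetical helper; the real scanning happens in findNoiseLimit, shown below):

public final class SilenceCheck {
  static final short SILENCE_THRESHOLD_LEVEL = 1024; // same default as above

  /** Returns true if every 16-bit sample in the block stays below the silence threshold. */
  static boolean isSilent(short[] samples) {
    for (short sample : samples) {
      if (Math.abs(sample) > SILENCE_THRESHOLD_LEVEL) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isSilent(new short[] {12, -40, 900}));  // true: all under 1024
    System.out.println(isSilent(new short[] {12, -40, 5000})); // false: 5000 counts as noise
  }
}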

onFlush resets to the noisy state by default; the state then changes as data is actually processed:

@Override
protected void onFlush() {
  if (enabled) {
    bytesPerFrame = inputAudioFormat.bytesPerFrame;
    int maybeSilenceBufferSize = durationUsToFrames(minimumSilenceDurationUs) * bytesPerFrame;
    if (maybeSilenceBuffer.length != maybeSilenceBufferSize) {
      maybeSilenceBuffer = new byte[maybeSilenceBufferSize];
    }
    paddingSize = durationUsToFrames(paddingSilenceUs) * bytesPerFrame;
    if (paddingBuffer.length != paddingSize) {
      paddingBuffer = new byte[paddingSize];
    }
  }
  state = STATE_NOISY;
  skippedFrames = 0;
  maybeSilenceBufferSize = 0;
  hasOutputNoise = false;
}
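durationUsToFrames is the inverse of the frames-to-duration arithmetic above, and it determines how big maybeSilenceBuffer has to be. A quick check of the sizing with assumed values (hypothetical helper; 44100 Hz, 16-bit stereo):

public final class SilenceBufferMath {
  /** Number of frames covering durationUs at sampleRate Hz, as in durationUsToFrames. */
  static int durationUsToFrames(long durationUs, int sampleRate) {
    return (int) (durationUs * sampleRate / 1_000_000L);
  }

  public static void main(String[] args) {
    int frames = durationUsToFrames(150_000, 44100); // the 150 ms default minimum silence
    System.out.println(frames);     // 6615 frames
    System.out.println(frames * 4); // 26460 bytes for 16-bit stereo (bytesPerFrame = 4)
  }
}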

Input handling:

@Override
public void queueInput(ByteBuffer inputBuffer) {
  while (inputBuffer.hasRemaining() && !hasPendingOutput()) {
    switch (state) {
      case STATE_NOISY:
        processNoisy(inputBuffer);
        break;
      case STATE_MAYBE_SILENT:
        processMaybeSilence(inputBuffer);
        break;
      case STATE_SILENT:
        processSilence(inputBuffer);
        break;
      default:
        throw new IllegalStateException();
    }
  }
}


Handling the noisy state:

// Internal methods.

/**
 * Incrementally processes new input from {@code inputBuffer} while in {@link #STATE_NOISY},
 * updating the state if needed.
 */
private void processNoisy(ByteBuffer inputBuffer) {
  int limit = inputBuffer.limit();
  // Check if there's any noise within the maybe silence buffer duration.
  inputBuffer.limit(min(limit, inputBuffer.position() + maybeSilenceBuffer.length));
  int noiseLimit = findNoiseLimit(inputBuffer);
  if (noiseLimit == inputBuffer.position()) {
    // The buffer contains the start of possible silence.
    state = STATE_MAYBE_SILENT;
  } else {
    inputBuffer.limit(noiseLimit);
    output(inputBuffer);
  }
  // Restore the limit.
  inputBuffer.limit(limit);
}


5) SonicAudioProcessor

/**
 * An {@link AudioProcessor} that uses the Sonic library to modify audio speed/pitch/sample rate.
 */
public final class SonicAudioProcessor implements AudioProcessor {}

The calling pattern is the same as the others; the difference is that the work is delegated to the Sonic library.

Input:

@Override
public void queueInput(ByteBuffer inputBuffer) {
  if (!inputBuffer.hasRemaining()) {
    return;
  }
  Sonic sonic = checkNotNull(this.sonic);
  ShortBuffer shortBuffer = inputBuffer.asShortBuffer();
  int inputSize = inputBuffer.remaining();
  inputBytes += inputSize;
  sonic.queueInput(shortBuffer);
  inputBuffer.position(inputBuffer.position() + inputSize);
}

Output:

@Override
public ByteBuffer getOutput() {
  @Nullable Sonic sonic = this.sonic;
  if (sonic != null) {
    int outputSize = sonic.getOutputSize();
    if (outputSize > 0) {
      if (buffer.capacity() < outputSize) {
        buffer = ByteBuffer.allocateDirect(outputSize).order(ByteOrder.nativeOrder());
        shortBuffer = buffer.asShortBuffer();
      } else {
        buffer.clear();
        shortBuffer.clear();
      }
      sonic.getOutput(shortBuffer);
      outputBytes += outputSize;
      buffer.limit(outputSize);
      outputBuffer = buffer;
    }
  }
  ByteBuffer outputBuffer = this.outputBuffer;
  this.outputBuffer = EMPTY_BUFFER;
  return outputBuffer;
}
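SonicAudioProcessor also exposes public setters (setSpeed, setPitch, setOutputSampleRateHz) for driving Sonic. A minimal usage sketch, assuming the ExoPlayer 2 artifact is on the classpath; the input format and values here are made up:

import com.google.android.exoplayer2.C;
import com.google.android.exoplayer2.audio.AudioProcessor;
import com.google.android.exoplayer2.audio.AudioProcessor.AudioFormat;
import com.google.android.exoplayer2.audio.SonicAudioProcessor;

public final class SonicDemo {
  public static void main(String[] args) throws AudioProcessor.UnhandledAudioFormatException {
    SonicAudioProcessor sonic = new SonicAudioProcessor();
    sonic.setSpeed(1.5f); // play 1.5x faster
    sonic.setPitch(1.0f); // keep the original pitch
    // configure() and flush() must run before queueing input, same as the other processors.
    AudioFormat out =
        sonic.configure(
            new AudioFormat(/* sampleRate= */ 44100, /* channelCount= */ 2, C.ENCODING_PCM_16BIT));
    sonic.flush();
    System.out.println("output sample rate: " + out.sampleRate);
  }
}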


6) TeeAudioProcessor, for debugging

This one isn't called in the demo; it's used in the tests....

/**
 * Audio processor that outputs its input unmodified and also outputs its input to a given sink.
 * This is intended to be used for diagnostics and debugging.
 *
 * <p>This audio processor can be inserted into the audio processor chain to access audio data
 * before/after particular processing steps have been applied. For example, to get audio output
 * after playback speed adjustment and silence skipping have been applied it is necessary to pass a
 * custom {@link com.google.android.exoplayer2.audio.DefaultAudioSink.AudioProcessorChain} when
 * creating the audio sink, and include this audio processor after all other audio processors.
 */
public final class TeeAudioProcessor extends BaseAudioProcessor {

  /** A sink for audio buffers handled by the audio processor. */
  public interface AudioBufferSink {}

  /**
   * A sink for audio buffers that writes output audio as .wav files with a given path prefix. When
   * new audio data is handled after flushing the audio processor, a counter is incremented and its
   * value is appended to the output file name.
   *
   * <p>Note: if writing to external storage it's necessary to grant the {@code
   * WRITE_EXTERNAL_STORAGE} permission.
   */
  public static final class WavFileAudioBufferSink implements AudioBufferSink {}
}

Just add it to the chain and it gets called:

/**
 * Builds an {@link AudioSink} to which the audio renderers will output.
 *
 * @param context The {@link Context} associated with the player.
 * @param enableFloatOutput Whether to enable use of floating point audio output, if available.
 * @param enableAudioTrackPlaybackParams Whether to enable setting playback speed using {@link
 *     android.media.AudioTrack#setPlaybackParams(PlaybackParams)}, if supported.
 * @param enableOffload Whether to enable use of audio offload for supported formats, if
 *     available.
 * @return The {@link AudioSink} to which the audio renderers will output. May be {@code null} if
 *     no audio renderers are required. If {@code null} is returned then {@link
 *     #buildAudioRenderers} will not be called.
 */
@Nullable
protected AudioSink buildAudioSink(
    Context context,
    boolean enableFloatOutput,
    boolean enableAudioTrackPlaybackParams,
    boolean enableOffload) {
  DefaultAudioSink.AudioProcessorChain chain =
      new DefaultAudioProcessorChain(
          new TeeAudioProcessor(
              new TeeAudioProcessor.WavFileAudioBufferSink("/mnt/sdcard/SENRSL/dc")));
  AudioSink audioSink =
      new DefaultAudioSink(
          AudioCapabilities.getCapabilities(context),
          chain,
          enableFloatOutput,
          enableAudioTrackPlaybackParams,
          enableOffload);
  return audioSink;
}
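The sink doesn't have to write .wav files; AudioBufferSink is small enough to stub out yourself. A sketch of a logging sink, assuming the two callbacks of ExoPlayer 2's interface (flush and handleBuffer); LoggingAudioBufferSink is a hypothetical name:

import java.nio.ByteBuffer;

import com.google.android.exoplayer2.audio.TeeAudioProcessor;

/** Hypothetical sink that just logs what flows through the chain instead of writing files. */
final class LoggingAudioBufferSink implements TeeAudioProcessor.AudioBufferSink {

  @Override
  public void flush(int sampleRateHz, int channelCount, int encoding) {
    System.out.println("flush: " + sampleRateHz + " Hz, " + channelCount + " ch, enc " + encoding);
  }

  @Override
  public void handleBuffer(ByteBuffer buffer) {
    System.out.println("buffer: " + buffer.remaining() + " bytes");
  }
}

Usage would be new TeeAudioProcessor(new LoggingAudioBufferSink()) in the chain shown above.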

The captured file picks up the multichannel stream too:

General
Complete name                    : /Users/senrsl/Downloads/dc-06021912-0000.wav
Format                           : Wave
File size                        : 25.4 MiB
Duration                         : 46 s 272 ms
Overall bit rate mode            : Constant (CBR)
Overall bit rate                 : 4 608 kb/s

Audio
Format                           : PCM
Format settings                  : Little / Signed
Codec ID                         : 1
Duration                         : 46 s 272 ms
Bit rate mode                    : Constant (CBR)
Bit rate                         : 4 608 kb/s
Channel(s)                       : 6 channels
Sampling rate                    : 48.0 kHz
Bit depth                        : 16 bits
Stream size                      : 25.4 MiB (100%)

With a bit of tuning, this can capture the audio output, standing in for hidden APIs like AudioPort....

Heading home to cook dinner; I'll publish this tomorrow....

2021年06月02日19:22:35

--
senRsl
2021年05月26日14:08:17