After a week spent translating the documentation, I realized I am still a beginner: I only roughly understand the Audio Unit concepts, and I am still weak at actually using them. This article is my attempt to string the scattered pieces of knowledge together.
A complete audio setup consists of:
- the AUGraph
- audio units
- the connections between audio units
AUGraph
An AUGraph can be understood as an audio management context: it adds and removes audio units and connects and disconnects them, acting as the manager of the audio units. If we compare this to a car factory, the AUGraph is the assembly line.
Audio Units
An audio unit is a single component in a complete audio session, and a session needs at least one of them. In the car factory analogy, audio units are the car parts, for example the hood or the engine.
Connections Between Audio Units
Why single out the connections between audio units? Continuing the car factory analogy: having the assembly line and the parts is still not enough to produce a car; the parts have to be assembled. Connecting audio units is exactly that assembly step on the line.
AUGraph
We said above that an AUGraph is mainly used to manage audio units. So how is an AUGraph created?
- (void)createGrahpAndChooseNodeBlock:(void (^)(AUGraph grahp))chooseNodeBlock nodeOperationBlock:(void (^)(AUGraph grahp))nodeOperationBlock {
    OSStatus result = noErr;
    result = NewAUGraph(&processingGraph);
    if (noErr != result) { [self printErrorMessage:@"NewAUGraph" withStatus:result]; return; }
    chooseNodeBlock(processingGraph);
    result = AUGraphOpen(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphOpen" withStatus:result]; return; }
    nodeOperationBlock(processingGraph);
    result = AUGraphInitialize(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphInitialize" withStatus:result]; }
}
AUGraph is fairly simple to use:
- Call NewAUGraph to obtain an AUGraph reference (get the car assembly line).
- Before calling AUGraphOpen, add the audio unit nodes we need to the AUGraph and keep their references, here ioNode (think of it as picking out the parts the line will need).
- Call AUGraphOpen to open the audio unit components (think of it as opening the warehouse where the parts are stored).
- Before calling AUGraphInitialize, fetch the audio unit instances and configure them (think of it as taking the parts we need out of the warehouse).
- Call AUGraphInitialize to produce the finished AUGraph (the parts are now on the line and everything is ready to start).
(Figure: the NewAUGraph -> AUGraphOpen -> AUGraphInitialize flow.)
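Putting the helper above to work looks like the snippet below; this is the same pattern the demo's init method uses later in this post.

[self createGrahpAndChooseNodeBlock:^(AUGraph grahp) {
    // step 2: add the nodes we need before AUGraphOpen
    [self setAudioUnit:grahp];
} nodeOperationBlock:^(AUGraph grahp) {
    // step 4: fetch and configure the units before AUGraphInitialize
    [self getAudioUnitInstance];
    [self setAudioUnitProperty];
}];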
Audio Units
An audio unit is organized into scopes and elements:
- a scope (input, output, or global) is made up of elements,
- and the same element (also called a bus) can appear in more than one scope; for example, element 1 of a RemoteIO unit exists in both the input scope and the output scope.
Apple's official diagram makes this clear. (Figure: Apple's official scope/element diagram.)
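Concretely, every property call addresses a (scope, element) pair. A minimal sketch, assuming a RemoteIO unit named ioUnit has already been created:

// Enable capture on the input scope of element 1 (the microphone bus).
UInt32 enable = 1;
AudioUnitSetProperty(ioUnit,
                     kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input,
                     1,                    // element (bus) 1
                     &enable,
                     sizeof(enable));

// Read back the stream format on the output scope of the same element.
AudioStreamBasicDescription asbd = {0};
UInt32 size = sizeof(asbd);
AudioUnitGetProperty(ioUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output,
                     1,                    // element (bus) 1
                     &asbd,
                     &size);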
The relationship between elements, scopes, and render callbacks
We know that the input element takes data in and the output element sends data out, and that a render callback can be attached to either the input element or the output element.
This is the part that remains confusing after reading the official documentation: how these pieces relate to each other. Let's walk through it in detail.
The input element, its scopes, and render callbacks
(Figure: the input element sitting between the input scope and the output scope, with a render callback on each side, labeled 1 and 2.)
From the diagram we can see that the input element sits between the two scopes, with a render callback on each side, labeled 1 and 2.
Callbacks 1 and 2 behave differently.
Render callback 1 sits where the input scope meets the input element. The input element receives data on its input scope, so this callback has to supply data: if we configure a render callback at this point, either we or the hardware must feed it data.
Likewise, render callback 2 hands data to the outside world; this is where we can pick up the data the audio unit produces.
Here is the code that sets callback 1; the callback must be fed data by the hardware or by the app:
// inputRenderCallback, soundStructArray and mixerNode are defined elsewhere in the host project.
OSStatus result = noErr;
AURenderCallbackStruct inputCallbackStruct;
inputCallbackStruct.inputProc = &inputRenderCallback;
inputCallbackStruct.inputProcRefCon = soundStructArray;
result = AUGraphSetNodeInputCallback(processingGraph,
                                     mixerNode,
                                     1,                    // destination input bus
                                     &inputCallbackStruct);
And here is the code for callback 2; this one delivers data to us:
AURenderCallbackStruct cb;
cb.inputProcRefCon = (__bridge void *)(self);
cb.inputProc = handleInputBuffer1;
// receive the captured input data
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 1, &cb, sizeof(cb));
kAudioOutputUnitProperty_SetInputCallback configures the output-scope side of the input element: the callback fires when the unit has captured data ready for us.
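Note that callback 2 does not receive the samples directly; inside it you pull them with AudioUnitRender, as the demo's handleInputBuffer1 does below. A trimmed sketch (micCallback is a made-up name, and it assumes the AudioUnit itself was passed as the refCon):

static OSStatus micCallback(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData) {
    AudioUnit unit = (AudioUnit)inRefCon; // assumption: the unit was passed as refCon
    AudioBufferList list = {0};
    list.mNumberBuffers = 1;
    list.mBuffers[0].mNumberChannels = 1;
    // Leaving mData NULL and mDataByteSize 0 lets the unit supply its own buffer.
    return AudioUnitRender(unit, ioActionFlags, inTimeStamp,
                           inBusNumber, inNumberFrames, &list);
}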
The output element, its scopes, and render callbacks
(Figure: the output element sitting between the input scope and the output scope, again with render callbacks 1 and 2.)
The structure is the same as for the input element. Let's see how to configure render callback 1 and render callback 2.
Render callback 1:
OSStatus status;
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback1;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(ioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Input,
                              0,
                              &callbackStruct,
                              sizeof(callbackStruct));
Configuring render callback 2:
AURenderCallbackStruct cb;
cb.inputProcRefCon = (__bridge void *)(self);
cb.inputProc = handleInputBuffer1;
// receive the captured input data
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &cb, sizeof(cb));
If we configure a render callback on each element like this, then connecting the two audio units is up to us.
Connections Between Audio Units
Connections between audio units come in two flavors: without render callbacks and with them.
Without render callbacks
For this kind of connection we call the following function:
extern OSStatus AUGraphConnectNodeInput(AUGraph inGraph,
                                        AUNode  inSourceNode,
                                        UInt32  inSourceOutputNumber,
                                        AUNode  inDestNode,
                                        UInt32  inDestInputNumber);
It connects element 0 or element 1 on the source node's output scope to element 0 or element 1 on the destination node's input scope.
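For example, the loopback wiring the demo below uses, assuming processingGraph and ioNode are already set up: it routes the output of element 1 (the mic bus) into the input of element 0 (the speaker bus) on the same RemoteIO node.

OSStatus result = AUGraphConnectNodeInput(processingGraph,
                                          ioNode, 1,   // source: output of element 1
                                          ioNode, 0);  // destination: input of element 0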
With render callbacks
To connect two audio units through render callbacks, both units must have a render callback configured, and the two callbacks pass data to each other through a buffer that we manage ourselves.
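A minimal sketch of that idea, with the capture side filling a shared buffer and the playback side draining it. This ignores threading, assumes both units share the same stream format and frame count, and assumes sharedList.mBuffers[0].mData was preallocated large enough for a render cycle; a real implementation would use a ring buffer.

static AudioBufferList sharedList; // set up elsewhere: 1 buffer, preallocated mData

static OSStatus captureSide(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData) {
    AudioUnit captureUnit = (AudioUnit)inRefCon; // assumption: the producing unit as refCon
    // Pull the captured samples into the shared buffer.
    return AudioUnitRender(captureUnit, ioActionFlags, inTimeStamp,
                           inBusNumber, inNumberFrames, &sharedList);
}

static OSStatus playbackSide(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData) {
    // Copy the most recently captured samples into the consumer's buffer.
    memcpy(ioData->mBuffers[0].mData,
           sharedList.mBuffers[0].mData,
           sharedList.mBuffers[0].mDataByteSize);
    ioData->mBuffers[0].mDataByteSize = sharedList.mBuffers[0].mDataByteSize;
    return noErr;
}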
Let's put it all together in a demo.
//
// IOGrahpUnit.m
// PlayAndRecordWithUnit
#import "IOGrahpUnit.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
const uint32_t CONST_BUFFER_SIZE1 = 0x10000;
@interface IOGrahpUnit ()
{
    AUGraph processingGraph;
    AUNode ioNode;
    AudioUnit ioUnit;
    AudioBufferList *buffList;
}
@property (nonatomic, strong) NSOutputStream *stream;
@property (nonatomic, strong) NSInputStream *inputSteam;
@property (nonatomic, strong) NSString *path;
@end
@implementation IOGrahpUnit
- (void)readPcm {
    // Open the recorded PCM file for playback.
    // NSURL *url = [[NSBundle mainBundle] URLForResource:@"abc" withExtension:@"pcm"];
    NSURL *url = [NSURL fileURLWithPath:self.path];
    self.inputSteam = [NSInputStream inputStreamWithURL:url];
    if (!self.inputSteam) {
        NSLog(@"failed to open file %@", url);
    } else {
        [self.inputSteam open];
    }
}
- (void)createFile {
    // Create a fresh .pcm file in Caches to hold the recording.
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
    NSString *path = [[[paths objectAtIndex:0] stringByAppendingPathComponent:[NSUUID UUID].UUIDString] stringByAppendingString:@".pcm"];
    NSLog(@"pcm path %@", path);
    self.path = path;
    self.stream = [[NSOutputStream alloc] initToFileAtPath:path append:YES];
}
- (instancetype)init
{
    self = [super init];
    if (self) {
        [self createFile];
        [self createGrahpAndChooseNodeBlock:^(AUGraph grahp) {
            [self setAudioUnit:grahp];
        } nodeOperationBlock:^(AUGraph grahp) {
            [self getAudioUnitInstance];
            [self setAudioUnitProperty];
        }];
        [self setAudioSession];
    }
    return self;
}
/// Configure the audio session.
- (void)setAudioSession {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setPreferredSampleRate:44100 error:nil];
    /// We ask for both playback and recording.
    [session setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error:nil];
    [session setActive:YES error:nil];
}
- (void)createGrahpAndChooseNodeBlock:(void (^)(AUGraph grahp))chooseNodeBlock nodeOperationBlock:(void (^)(AUGraph grahp))nodeOperationBlock {
    OSStatus result = noErr;
    result = NewAUGraph(&processingGraph);
    if (noErr != result) { [self printErrorMessage:@"NewAUGraph" withStatus:result]; return; }
    chooseNodeBlock(processingGraph);
    result = AUGraphOpen(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphOpen" withStatus:result]; return; }
    nodeOperationBlock(processingGraph);
    result = AUGraphInitialize(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphInitialize" withStatus:result]; }
}
- (void)setAudioUnit:(AUGraph)graph {
    OSStatus result = noErr;
    // Describe the RemoteIO unit and add it to the graph as a node.
    AudioComponentDescription ioUnitDescription;
    ioUnitDescription.componentType = kAudioUnitType_Output;
    ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
    ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    ioUnitDescription.componentFlags = 0;
    ioUnitDescription.componentFlagsMask = 0;
    result = AUGraphAddNode(graph, &ioUnitDescription, &ioNode);
    if (noErr != result) { [self printErrorMessage:@"AUGraphAddNode" withStatus:result]; return; }
}
- (void)getAudioUnitInstance {
    OSStatus result = noErr;
    // Fetch the AudioUnit instance that backs the node.
    result = AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);
    if (noErr != result) { [self printErrorMessage:@"AUGraphNodeInfo" withStatus:result]; return; }
}
- (void)setAudioUnitProperty {
    // Input is disabled on RemoteIO by default; enable it on element 1 (the mic bus).
    UInt32 flagOne = 1;
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flagOne, sizeof(flagOne));
    [self setInputElement];
    [self setOutputElement];
}
- (void)setInputElement {
    /// Set the stream format the input element produces on its output scope.
    AudioStreamBasicDescription desc = {0};
    desc.mSampleRate = 44100;
    desc.mFormatID = kAudioFormatLinearPCM;
    desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
    desc.mChannelsPerFrame = 1;
    desc.mFramesPerPacket = 1;
    desc.mBitsPerChannel = 16;
    desc.mBytesPerFrame = desc.mBitsPerChannel / 8 * desc.mChannelsPerFrame;
    desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &desc, sizeof(desc));
}
- (void)setOutputElement {
    /// Set the stream format the output element consumes on its input scope.
    AudioStreamBasicDescription desc = {0};
    desc.mSampleRate = 44100;
    desc.mFormatID = kAudioFormatLinearPCM;
    desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
    desc.mChannelsPerFrame = 1;
    desc.mFramesPerPacket = 1;
    desc.mBitsPerChannel = 16;
    desc.mBytesPerFrame = desc.mBitsPerChannel / 8 * desc.mChannelsPerFrame;
    desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &desc, sizeof(desc));
}
static OSStatus playbackCallback1(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    NSLog(@"play inBusNumber %u inNumberFrames %u", (unsigned int)inBusNumber, (unsigned int)inNumberFrames);
    NSLog(@"%u", CONST_BUFFER_SIZE1);
    IOGrahpUnit *player = (__bridge IOGrahpUnit *)inRefCon;
    NSLog(@"out size: %u", (unsigned int)ioData->mBuffers[0].mDataByteSize);
    // ioData->mBuffers[0].mDataByteSize = player->buffList->mBuffers[0].mDataByteSize;
    // memcpy(ioData->mBuffers[0].mData, player->buffList->mBuffers[0].mData, player->buffList->mBuffers[0].mDataByteSize);
    // Fill the output buffer from the PCM file; output silence at end of file.
    NSInteger bytesRead = [player.inputSteam read:ioData->mBuffers[0].mData
                                        maxLength:ioData->mBuffers[0].mDataByteSize];
    if (bytesRead <= 0) {
        bytesRead = 0;
        dispatch_async(dispatch_get_main_queue(), ^{
            // Playback reached the end of the file; the graph could be stopped here.
        });
    }
    ioData->mBuffers[0].mDataByteSize = (UInt32)bytesRead;
    NSLog(@"out size: %u", (unsigned int)ioData->mBuffers[0].mDataByteSize);
    return noErr;
}
static OSStatus handleInputBuffer1(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
    @autoreleasepool {
        NSLog(@"inBusNumber %u inNumberFrames %u", (unsigned int)inBusNumber, (unsigned int)inNumberFrames);
        OSStatus status;
        IOGrahpUnit *source = (__bridge IOGrahpUnit *)inRefCon;
        if (!source) return -1;
        // One mono buffer; mData stays NULL so the unit supplies its own storage.
        AudioBuffer buffer;
        buffer.mData = NULL;
        buffer.mDataByteSize = 0;
        buffer.mNumberChannels = 1;
        AudioBufferList buffers;
        buffers.mNumberBuffers = 1;
        buffers.mBuffers[0] = buffer;
        /// Pull the captured mono samples from the unit.
        status = AudioUnitRender(source->ioUnit,
                                 ioActionFlags,
                                 inTimeStamp,
                                 inBusNumber,
                                 inNumberFrames,
                                 &buffers);
        if (!status) {
            NSLog(@"input %u", (unsigned int)buffers.mBuffers[0].mDataByteSize);
            // memcpy(source->buffList->mBuffers[0].mData, buffers.mBuffers[0].mData, buffers.mBuffers[0].mDataByteSize);
            // source->buffList->mBuffers[0].mDataByteSize = buffers.mBuffers[0].mDataByteSize;
            [source.stream write:buffers.mBuffers[0].mData maxLength:buffers.mBuffers[0].mDataByteSize];
        }
        return status;
    }
}
- (void)addRecordCallBack {
    AURenderCallbackStruct cb;
    cb.inputProcRefCon = (__bridge void *)(self);
    cb.inputProc = handleInputBuffer1;
    /// Receive the captured input data.
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 1, &cb, sizeof(cb));
}
- (void)removeRecordCallBack {
    AURenderCallbackStruct cb;
    cb.inputProcRefCon = NULL;
    cb.inputProc = NULL;
    /// Clear the input callback.
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 1, &cb, sizeof(cb));
}
- (void)addPlayCallBack {
    OSStatus status;
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback1;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(ioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
}
- (void)removePlayCallBack {
    OSStatus status;
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = NULL;
    callbackStruct.inputProcRefCon = NULL;
    // Clear the render callback on the same scope/element it was set on.
    status = AudioUnitSetProperty(ioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
}
#pragma mark - event
- (void)startPlay {
    [self addPlayCallBack];
    [self removeRecordCallBack];
    [self readPcm];
    OSStatus result = AUGraphStart(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphStart" withStatus:result]; return; }
}
- (void)stopPlay {
    OSStatus result = AUGraphStop(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphStop" withStatus:result]; return; }
}
- (void)startRecord {
    OSStatus result;
    // [self addRecordCallBack];
    // [self removePlayCallBack];
    // [self.stream open];
    Boolean isRunning = false;
    result = AUGraphIsRunning(processingGraph, &isRunning);
    if (noErr != result) { [self printErrorMessage:@"AUGraphIsRunning" withStatus:result]; return; }
    if (isRunning) {
        NSLog(@"record");
    }
    // [self addPlayCallBack];
    // Wire the mic bus straight to the speaker bus on the same RemoteIO node.
    result = AUGraphConnectNodeInput(processingGraph,
                                     ioNode, // source node
                                     1,      // source node output bus number
                                     ioNode, // destination node
                                     0);     // destination node input bus number
    if (noErr != result) { [self printErrorMessage:@"AUGraphConnectNodeInput" withStatus:result]; return; }
    UInt32 outNumConnections;
    result = AUGraphCountNodeInteractions(processingGraph, ioNode, &outNumConnections);
    if (noErr != result) { [self printErrorMessage:@"AUGraphCountNodeInteractions" withStatus:result]; return; }
    NSLog(@"%u", (unsigned int)outNumConnections);
    result = AUGraphStart(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphStart" withStatus:result]; return; }
    // AudioOutputUnitStart(self.componetInstance);
}
- (void)stopRecord {
    OSStatus result = AUGraphStop(processingGraph);
    if (noErr != result) { [self printErrorMessage:@"AUGraphStop" withStatus:result]; return; }
    [self.stream close];
}
#pragma mark - print error
- (void)printErrorMessage:(NSString *)errorString withStatus:(OSStatus)result {
    // Render the OSStatus both numerically and as a four-character code.
    char resultString[5];
    UInt32 swappedResult = CFSwapInt32HostToBig(result);
    bcopy(&swappedResult, resultString, 4);
    resultString[4] = '\0';
    NSLog(@"*** %@ error: %d %08X '%4.4s'",
          errorString,
          (int)result,
          (unsigned int)result,
          resultString);
}
@end
The code above plays the audio captured by the microphone out through the loudspeaker: in startRecord this is done with a direct node connection, while the render callbacks support recording to and playing back from a PCM file.
Some leftover code remains, commented out, from when recording and playback were handled separately.
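For completeness, a hypothetical driver, assuming the event methods are exposed in IOGrahpUnit.h and microphone permission has been granted:

IOGrahpUnit *unit = [[IOGrahpUnit alloc] init];
[unit startRecord];   // mic is routed live to the speaker
// ... later ...
[unit stopRecord];
[unit startPlay];     // plays back the PCM file at unit's path

Note that startPlay only has something to play if the commented-out record-callback path (addRecordCallBack and the stream open call) is re-enabled so the file actually gets written.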