iOS Audio Units: when is it necessary to use an AUGraph?

Problem Description

I'm totally new to iOS programming (I'm more of an Android guy..) and have to build an application dealing with audio DSP. (I know it's not the easiest way to approach iOS dev ;) )

The app needs to be able to accept input from both:

1- the built-in microphone
2- the iPod library

Filters may then be applied to the input sound, and the result is to be output to:

1- the speaker
2- a recording in a file

My question is the following: is an AUGraph necessary in order, for example, to apply multiple filters to the input, or can these different effects be applied by processing the samples with different render callbacks?

If I go with an AUGraph, do I need one Audio Unit for each input, one Audio Unit for the output, and one Audio Unit for each effect/filter?

And finally, if I don't, can I have just one Audio Unit and reconfigure it in order to select the source/destination?

Many thanks for your answers! I'm getting lost with this stuff...

Recommended Answer

You may indeed use render callbacks if you wish, but the built-in Audio Units are great (and there are things coming that I can't talk about here yet, NDA etc., I've said too much; if you have access to the iOS 5 SDK I recommend you have a look).
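For reference, this is roughly what the render-callback route looks like. The sketch below is not from the original answer: MyRenderCallback, AttachRenderCallback, and outputUnit are illustrative names, the callback only writes silence where your DSP would go, and it assumes outputUnit is an Audio Unit that has already been created and configured.

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical render callback: a real one would pull samples from the
// mic / iPod source, run the DSP, and write the result into ioData.
static OSStatus MyRenderCallback (void                       *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp       *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList            *ioData)
{
    // Placeholder: output silence by zeroing every buffer.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        memset (ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    return noErr;
}

// Attach the callback to input bus 0 of an already-created output unit.
static void AttachRenderCallback (AudioUnit outputUnit)
{
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc       = MyRenderCallback;
    callbackStruct.inputProcRefCon = NULL;

    AudioUnitSetProperty (outputUnit,
                          kAudioUnitProperty_SetRenderCallback,
                          kAudioUnitScope_Input,
                          0,                       // bus 0
                          &callbackStruct,
                          sizeof (callbackStruct));
}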

You can implement the behavior you want without using an AUGraph, but it is recommended that you do use one, as it takes care of a lot of things under the hood and saves you time and effort.

Choosing a Design Pattern (iOS Developer Library) goes into some detail on how to choose and implement your Audio Unit environment: from setting up the audio session and the graph to configuring/adding units and writing callbacks.
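For example, the audio-session part of that setup could look roughly like the following. This is a sketch using the C AudioSession API of that era, not code from the answer; ConfigureAudioSession is an illustrative name, the interruption listener is omitted, and the PlayAndRecord category is an assumption based on the app needing both the mic and the speaker.

#include <AudioToolbox/AudioToolbox.h>

// Minimal sketch: configure the audio session before building the graph.
static void ConfigureAudioSession (void)
{
    // NULL run loop / listener: use the main run loop, no interruption handling.
    AudioSessionInitialize (NULL, NULL, NULL, NULL);

    // PlayAndRecord allows simultaneous input (mic) and output (speaker).
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
                             sizeof (category), &category);

    AudioSessionSetActive (true);
}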

As for which Audio Units you will want in the graph, in addition to what you already stated, you will want a Multichannel Mixer unit (see Using Specific Audio Units (iOS Developer Library)) to mix your two audio inputs, and then hook the mixer up to the Output unit.
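As a rough illustration of that layout (again, not code from the original answer; BuildGraph, mixerNode, and ioNode are placeholder names, and error checking is omitted), a graph with a Multichannel Mixer feeding the Remote I/O unit can be wired up like this:

#include <AudioToolbox/AudioToolbox.h>

// Sketch: an AUGraph containing a Multichannel Mixer whose output
// feeds the Remote I/O (speaker) unit.
static AUGraph BuildGraph (void)
{
    AUGraph processingGraph;
    NewAUGraph (&processingGraph);

    // Describe the mixer and the remote I/O units.
    AudioComponentDescription mixerDesc = {
        .componentType         = kAudioUnitType_Mixer,
        .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription ioDesc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    // Add the nodes and open the graph (this instantiates the units).
    AUNode mixerNode, ioNode;
    AUGraphAddNode (processingGraph, &mixerDesc, &mixerNode);
    AUGraphAddNode (processingGraph, &ioDesc, &ioNode);
    AUGraphOpen (processingGraph);

    // Connect mixer output bus 0 to input element 0 of the I/O unit.
    AUGraphConnectNodeInput (processingGraph, mixerNode, 0, ioNode, 0);

    // Initialize now; call AUGraphStart (processingGraph) when ready to render.
    AUGraphInitialize (processingGraph);
    return processingGraph;
}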

Alternatively, if you were to do it directly without using an AUGraph, the following code shows how to hook Audio Units up to each other yourself. (From Constructing Audio Unit Apps (iOS Developer Library))

/* Listing 2-6: connect the mixer's output directly to the I/O unit's input.
   mixerUnitInstance and ioUnitInstance are assumed to be AudioUnit
   instances that have already been created elsewhere. */
AudioUnitElement mixerUnitOutputBus  = 0;
AudioUnitElement ioUnitOutputElement = 0;

AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit    = mixerUnitInstance;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber    = ioUnitOutputElement;

OSStatus result = AudioUnitSetProperty (
    ioUnitInstance,                     // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    ioUnitOutputElement,                // destination element
    &mixerOutToIoUnitIn,                // connection definition
    sizeof (mixerOutToIoUnitIn)
);
// Check 'result' against noErr before proceeding.
