This article walks through streaming audio from an iPhone microphone to an HTTP server; it should be a useful reference if you are solving a similar problem.

Problem description

I need to stream audio from the mic to an HTTP server.
These recording settings are exactly what I need:

NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                             [NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,
                                             [NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
                                             [NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
                                             [NSNumber numberWithInt:1],AVNumberOfChannelsKey,
                                             [NSNumber numberWithInt:64000],AVEncoderBitRateKey,
                                             nil];

The API I'm coding to states:

Sends a continuous stream of audio to the currently viewed camera. The audio needs to be encoded as G711 mu-law at 64 kbit/s for transmission to the Axis camera at the bedside. Send (this should be a POST URL over SSL to the connected server): POST /transmitaudio?id= Content-Type: audio/basic Content-Length: 99999 (length ignored)
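
For reference, the request head that spec describes could be assembled like so. This is a sketch only: cameraId is a hypothetical placeholder, and the header names and values come straight from the spec above.

// Sketch: the request head the spec above describes, as an Obj-C string.
// "cameraId" is a hypothetical placeholder for the id the server expects.
NSString *cameraId = @"example-id";
NSString *head = [NSString stringWithFormat:
                  @"POST /transmitaudio?id=%@ HTTP/1.0\r\n"
                  @"Content-Type: audio/basic\r\n"
                  @"Content-Length: 99999\r\n"  // ignored by the server per the spec
                  @"\r\n", cameraId];
NSData *headData = [head dataUsingEncoding:NSUTF8StringEncoding];
// raw G.711 mu-law bytes are then written on the same SSL connection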

Below is a list of links I have tried to work with.

LINK - (SO) basic explanation that only Audio Units and Audio Queues will allow for NSData as output when recording via the mic | not an example, but a good definition of what's needed (Audio Queues, or Audio Units)

LINK - (SO) audio callback example | only includes the callback

LINK - (SO) Remote IO example | doesn't have start/stop, and is for saving to a file

LINK - (SO) Remote IO example | unanswered, not working

LINK - (SO) basic audio recording example | good example, but records to a file

LINK - (SO) question that guided me to the InMemoryAudioFile class (couldn't get it working) | followed links to InMemoryAudioFile (or something like that) but couldn't get it working.

LINK - (SO) more Audio Unit and Remote IO examples/problems | got this one working, but once again there isn't a stop function, and even when I tried to figure out what the call is and made it stop, it still didn't seem to transmit the audio to the server.

LINK - decent Remote IO and Audio Queue example | another good example and almost got it working, but had some problems with the code (compiler thinking it's not obj-c++), and once again didn't know how to get audio "data" from it instead of a file.

LINK - Apple docs for Audio Queues | had problems with frameworks. Worked through it (see question below) but in the end couldn't get it working; however, I probably didn't give this one as much time as the others, and maybe I should have.

LINK - (SO) problems I have had when trying to implement Audio Queues/Units | not an example

LINK - (SO) another Remote IO example | another good example, but I can't figure out how to get it to data instead of a file.

LINK - also looks interesting, circular buffers | couldn't figure out how to incorporate this with the audio callback

Here is my current class attempting to stream. This seems to work, although there is static coming out of the speakers at the receiver's end (connected to the server), which seems to indicate a problem with the audio data format.

iOS VERSION (minus delegate methods for the GCD socket):

@implementation MicCommunicator {
AVAssetWriter * assetWriter;
AVAssetWriterInput * assetWriterInput;
}

@synthesize captureSession = _captureSession;
@synthesize output = _output;
@synthesize restClient = _restClient;
@synthesize uploadAudio = _uploadAudio;
@synthesize outputPath = _outputPath;
@synthesize sendStream = _sendStream;
@synthesize receiveStream = _receiveStream;

@synthesize socket = _socket;
@synthesize isSocketConnected = _isSocketConnected;

-(id)init {
    if ((self = [super init])) {

        _receiveStream = [[NSStream alloc]init];
        _sendStream = [[NSStream alloc]init];
        _socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
        _isSocketConnected = FALSE;

        _restClient = [RestClient sharedManager];
        _uploadAudio = false;

        NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        _outputPath = [NSURL fileURLWithPath:[[searchPaths objectAtIndex:0] stringByAppendingPathComponent:@"micOutput.output"]];

        NSError * assetError;

        AudioChannelLayout acl;
        bzero(&acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono; //kAudioChannelLayoutTag_Stereo;
        NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                             [NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,
                                             [NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
                                             [NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
                                             [NSNumber numberWithInt:1],AVNumberOfChannelsKey,
                                             [NSNumber numberWithInt:64000],AVEncoderBitRateKey,
                                             nil];

        assetWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings]retain];
        [assetWriterInput setExpectsMediaDataInRealTime:YES];

        assetWriter = [[AVAssetWriter assetWriterWithURL:_outputPath fileType:AVFileTypeWAVE error:&assetError]retain]; //AVFileTypeAppleM4A

        if (assetError) {
            NSLog (@"error initing mic: %@", assetError);
            return nil;
        }
        if ([assetWriter canAddInput:assetWriterInput]) {
            [assetWriter addInput:assetWriterInput];
        } else {
            NSLog (@"can't add asset writer input...!");
            return nil;
        }

    }
    return self;
}

-(void)dealloc {
    [_output release];
    [_captureSession release];
    [assetWriter release];
    [assetWriterInput release];
    [super dealloc];
}


-(void)beginStreaming {

    NSLog(@"avassetwrter class is %@",NSStringFromClass([assetWriter class]));

    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
    if (audioInput)
        [self.captureSession addInput:audioInput];
    else {
        NSLog(@"No audio input found.");
        return;
    }

    self.output = [[AVCaptureAudioDataOutput alloc] init];

    dispatch_queue_t outputQueue = dispatch_queue_create("micOutputDispatchQueue", NULL);
    [self.output setSampleBufferDelegate:self queue:outputQueue];
    dispatch_release(outputQueue);

    self.uploadAudio = FALSE;

    [self.captureSession addOutput:self.output];
    [assetWriter startWriting];
    [self.captureSession startRunning];
}

-(void)pauseStreaming
{
    self.uploadAudio = FALSE;
}

-(void)resumeStreaming
{
    self.uploadAudio = TRUE;
}

-(void)finishAudioWork
{
    [self dealloc];
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {


    AudioBufferList audioBufferList;
    NSMutableData *data= [[NSMutableData alloc] init];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32*)audioBuffer.mData;

        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }

    // append [data bytes] to your NSOutputStream

    // These two lines write to disk, you may not need this, just providing an example
    [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    [assetWriterInput appendSampleBuffer:sampleBuffer];

    //start upload audio data
    if (self.uploadAudio) {

        if (!self.isSocketConnected) {
            [self connect];
        }
        NSString *requestStr = [NSString stringWithFormat:@"POST /transmitaudio?id=%@ HTTP/1.0\r\n\r\n", self.restClient.sessionId];
        NSData *requestData = [requestStr dataUsingEncoding:NSUTF8StringEncoding];
        [self.socket writeData:requestData withTimeout:5 tag:0];
        [self.socket writeData:data withTimeout:5 tag:0];
    }
    //stop upload audio data

    CFRelease(blockBuffer);
    blockBuffer=NULL;
    [data release];
}
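
A note on the static: AVCaptureAudioDataOutput hands this callback linear PCM, so appending the raw buffer bytes sends unencoded PCM to a server expecting G.711 mu-law. For reference, the classic per-sample mu-law encoder looks like the sketch below (assuming signed 16-bit PCM input; it is the same companding scheme the Java encode() method further down implements with a table):

// Classic G.711 mu-law encoder (sketch): compresses one signed 16-bit linear
// PCM sample to one mu-law byte.
static uint8_t LinearToULaw(int16_t pcm)
{
    const int kBias = 0x84;    // standard G.711 bias (132)
    const int kClip = 32635;   // clamp so adding the bias cannot overflow
    int sign = (pcm >> 8) & 0x80;          // keep the sign, work on magnitude
    int sample = sign ? -pcm : pcm;
    if (sample > kClip) sample = kClip;
    sample += kBias;

    int exponent = 7;                      // segment = position of top set bit
    for (int mask = 0x4000; (sample & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;

    int mantissa = (sample >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);  // mu-law bytes are inverted
}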

And the Java version:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;

public class AudioWorker extends Thread
{
    private boolean stopped = false;

    private String host;
    private int port;
    private long id=0;
    boolean run=true;
    AudioRecord recorder;

    //ulaw encoder stuff
    private final static String TAG = "UlawEncoderInputStream";

    private final static int MAX_ULAW = 8192;
    private final static int SCALE_BITS = 16;

    private InputStream mIn;

    private int mMax = 0;

    private final byte[] mBuf = new byte[1024];
    private int mBufCount = 0; // should be 0 or 1

    private final byte[] mOneByte = new byte[1];
    ////
    /**
     * Give the thread high priority so that it's not canceled unexpectedly, and start it
     */
    public AudioWorker(String host, int port, long id)
    {
        this.host = host;
        this.port = port;
        this.id = id;
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
//        start();
    }

    @Override
    public void run()
    {
        Log.i("AudioWorker", "Running AudioWorker Thread");
        recorder = null;
        AudioTrack track = null;
        short[][]   buffers  = new short[256][160];
        int ix = 0;

        /*
         * Initialize buffer to hold continuously recorded AudioWorker data, start recording, and start
         * playback.
         */
        try
        {
            int N = AudioRecord.getMinBufferSize(8000,AudioFormat.CHANNEL_IN_MONO,AudioFormat.ENCODING_PCM_16BIT);
            recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10);
            track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,   AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10, AudioTrack.MODE_STREAM);
            recorder.startRecording();
//            track.play();
            /*
             * Loops until something outside of this thread stops it.
             * Reads the data from the recorder and writes it to the AudioWorker track for playback.
             */


            SSLContext sc = SSLContext.getInstance("SSL");
            sc.init(null, trustAllCerts, new java.security.SecureRandom());
            SSLSocketFactory sslFact = sc.getSocketFactory();
            SSLSocket socket = (SSLSocket)sslFact.createSocket(host, port);

            socket.setSoTimeout(10000);
            InputStream inputStream = socket.getInputStream();
            DataInputStream in = new DataInputStream(new BufferedInputStream(inputStream));
            OutputStream outputStream = socket.getOutputStream();
            DataOutputStream os = new DataOutputStream(new BufferedOutputStream(outputStream));
            PrintWriter socketPrinter = new PrintWriter(os);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));

//          socketPrinter.println("POST /transmitaudio?patient=1333369798370 HTTP/1.0");
            socketPrinter.println("POST /transmitaudio?id="+id+" HTTP/1.0");
            socketPrinter.println("Content-Type: audio/basic");
            socketPrinter.println("Content-Length: 99999");
            socketPrinter.println("Connection: Keep-Alive");
            socketPrinter.println("Cache-Control: no-cache");
            socketPrinter.println();
            socketPrinter.flush();


            while(!stopped)
            {
                Log.i("Map", "Writing new data to buffer");
                short[] buffer = buffers[ix++ % buffers.length];

                N = recorder.read(buffer,0,buffer.length);
                track.write(buffer, 0, buffer.length);

                byte[] bytes2 = new byte[buffer.length * 2];
                ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);

                read(bytes2, 0, bytes2.length);
                os.write(bytes2,0,bytes2.length);

//
//                ByteBuffer byteBuf = ByteBuffer.allocate(2*N);
//              System.out.println("byteBuf length "+2*N);
//                int i = 0;
//                while (buffer.length > i) {
//                    byteBuf.putShort(buffer[i]);
//                    i++;
//                }
//                byte[] b = new byte[byteBuf.remaining()];
            }
            os.close();
        }
        catch(Throwable x)
        {
            Log.w("AudioWorker", "Error reading voice AudioWorker", x);
        }
        /*
         * Frees the thread's resources after the loop completes so that it can be run again
         */
        finally
        {
            recorder.stop();
            recorder.release();
            track.stop();
            track.release();
        }
    }

    /**
     * Called from outside of the thread in order to stop the recording/playback loop
     */
    public void close()
    {
         stopped = true;
    }
    public void resumeThread()
    {
         stopped = false;
         run();
    }

    TrustManager[] trustAllCerts = new TrustManager[]{
            new X509TrustManager() {
                public java.security.cert.X509Certificate[] getAcceptedIssuers() {
                    return null;
                }
                public void checkClientTrusted(
                        java.security.cert.X509Certificate[] certs, String authType) {
                }
                public void checkServerTrusted(
                        java.security.cert.X509Certificate[] chain, String authType) {
                    for (int j=0; j<chain.length; j++)
                    {
                        System.out.println("Client certificate information:");
                        System.out.println("  Subject DN: " + chain[j].getSubjectDN());
                        System.out.println("  Issuer DN: " + chain[j].getIssuerDN());
                        System.out.println("  Serial number: " + chain[j].getSerialNumber());
                        System.out.println("");
                    }
                }
            }
    };


    public static void encode(byte[] pcmBuf, int pcmOffset,
            byte[] ulawBuf, int ulawOffset, int length, int max) {

        // from  'ulaw' in wikipedia
        // +8191 to +8159                          0x80
        // +8158 to +4063 in 16 intervals of 256   0x80 + interval number
        // +4062 to +2015 in 16 intervals of 128   0x90 + interval number
        // +2014 to  +991 in 16 intervals of  64   0xA0 + interval number
        //  +990 to  +479 in 16 intervals of  32   0xB0 + interval number
        //  +478 to  +223 in 16 intervals of  16   0xC0 + interval number
        //  +222 to   +95 in 16 intervals of   8   0xD0 + interval number
        //   +94 to   +31 in 16 intervals of   4   0xE0 + interval number
        //   +30 to    +1 in 15 intervals of   2   0xF0 + interval number
        //     0                                   0xFF

        //    -1                                   0x7F
        //   -31 to    -2 in 15 intervals of   2   0x70 + interval number
        //   -95 to   -32 in 16 intervals of   4   0x60 + interval number
        //  -223 to   -96 in 16 intervals of   8   0x50 + interval number
        //  -479 to  -224 in 16 intervals of  16   0x40 + interval number
        //  -991 to  -480 in 16 intervals of  32   0x30 + interval number
        // -2015 to  -992 in 16 intervals of  64   0x20 + interval number
        // -4063 to -2016 in 16 intervals of 128   0x10 + interval number
        // -8159 to -4064 in 16 intervals of 256   0x00 + interval number
        // -8192 to -8160                          0x00

        // set scale factors
        if (max <= 0) max = MAX_ULAW;

        int coef = MAX_ULAW * (1 << SCALE_BITS) / max;

        for (int i = 0; i < length; i++) {
            int pcm = (0xff & pcmBuf[pcmOffset++]) + (pcmBuf[pcmOffset++] << 8);
            pcm = (pcm * coef) >> SCALE_BITS;

            int ulaw;
            if (pcm >= 0) {
                ulaw = pcm <= 0 ? 0xff :
                        pcm <=   30 ? 0xf0 + ((  30 - pcm) >> 1) :
                        pcm <=   94 ? 0xe0 + ((  94 - pcm) >> 2) :
                        pcm <=  222 ? 0xd0 + (( 222 - pcm) >> 3) :
                        pcm <=  478 ? 0xc0 + (( 478 - pcm) >> 4) :
                        pcm <=  990 ? 0xb0 + (( 990 - pcm) >> 5) :
                        pcm <= 2014 ? 0xa0 + ((2014 - pcm) >> 6) :
                        pcm <= 4062 ? 0x90 + ((4062 - pcm) >> 7) :
                        pcm <= 8158 ? 0x80 + ((8158 - pcm) >> 8) :
                        0x80;
            } else {
                ulaw = -1 <= pcm ? 0x7f :
                          -31 <= pcm ? 0x70 + ((pcm -   -31) >> 1) :
                          -95 <= pcm ? 0x60 + ((pcm -   -95) >> 2) :
                         -223 <= pcm ? 0x50 + ((pcm -  -223) >> 3) :
                         -479 <= pcm ? 0x40 + ((pcm -  -479) >> 4) :
                         -991 <= pcm ? 0x30 + ((pcm -  -991) >> 5) :
                        -2015 <= pcm ? 0x20 + ((pcm - -2015) >> 6) :
                        -4063 <= pcm ? 0x10 + ((pcm - -4063) >> 7) :
                        -8159 <= pcm ? 0x00 + ((pcm - -8159) >> 8) :
                        0x00;
            }
            ulawBuf[ulawOffset++] = (byte)ulaw;
        }
    }
    public static int maxAbsPcm(byte[] pcmBuf, int offset, int length) {
        int max = 0;
        for (int i = 0; i < length; i++) {
            int pcm = (0xff & pcmBuf[offset++]) + (pcmBuf[offset++] << 8);
            if (pcm < 0) pcm = -pcm;
            if (pcm > max) max = pcm;
        }
        return max;
    }

    public int read(byte[] buf, int offset, int length) throws IOException {
        if (recorder == null) throw new IllegalStateException("not open");

        // return at least one byte, but try to fill 'length'
        while (mBufCount < 2) {
            int n = recorder.read(mBuf, mBufCount, Math.min(length * 2, mBuf.length - mBufCount));
            if (n == -1) return -1;
            mBufCount += n;
        }

        // compand data
        int n = Math.min(mBufCount / 2, length);
        encode(mBuf, 0, buf, offset, n, mMax);

        // move data to bottom of mBuf
        mBufCount -= n * 2;
        for (int i = 0; i < mBufCount; i++) mBuf[i] = mBuf[i + n * 2];

        return n;
    }

}

Recommended answer

My work on this topic has been staggering and long. I have finally gotten this to work, however hacked it may be. Because of that, I will list some warnings prior to posting the answer:

  1. There is still a clicking noise between buffers.

  2. I get warnings due to the way I use my obj-c classes in the obj-c++ class, so something is wrong there (however, from my research, using a pool does the same as release, so I don't believe this matters too much):

    Object 0x13cd20 of class __NSCFString autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug
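
For what it's worth, that message means Obj-C objects are being autoreleased on a thread that has no autorelease pool (here, the AudioQueue callback thread). A minimal sketch of one way to quiet it, assuming the manual retain/release style used throughout this code, is to bracket the callback body in its own pool:

    // Sketch: inside MyInputBufferHandler (see AQRecorder.mm below), give the
    // AudioQueue callback thread its own pool for any autoreleased objects.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... build the NSData and call [restClient uploadAudioData:data url:nil] ...
    [pool drain];  // drains whatever was autoreleased on this callback thread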

  3. In order to get this working I had to comment out all AQPlayer references from SpeakHereController (see below) due to errors I couldn't fix any other way. It didn't matter for me, however, since I am only recording.

    So the main answer to the above is that there is a bug in AVAssetWriter that stopped it from appending the bytes and writing the audio data. I finally found this out after contacting Apple support and having them notify me about it. As far as I know, the bug is specific to ulaw and AVAssetWriter, though I haven't tried many other formats to verify.
    In response to this, the only other option is/was to use Audio Queues. This is something I had tried before, but it had brought a bunch of problems, the biggest being my lack of knowledge of obj-c++. The class below that got things working is from the SpeakHere example, with slight changes so that the audio is ulaw formatted. The other problems came about when trying to get all the files to play nicely together. However, this was easily remedied by changing all filenames in the chain to .mm. The next problem was trying to use the classes in harmony. This is still a WIP, and ties into warning number 2. But my basic solution to this was to use the SpeakHereController (also included in the SpeakHere example) instead of directly accessing AQRecorder.

    Anyway, here is the code:

    Using the SpeakHereController from an obj-c class

    .h

    @property(nonatomic,strong) SpeakHereController * recorder;
    

    .mm

    [init method]
            //AQRecorder wrapper (SpeakHereController) allocation
            _recorder = [[SpeakHereController alloc]init];
            //AQRecorder wrapper (SpeakHereController) initialization
            //technically this class is a controller and that's why its init method is awakeFromNib
            [_recorder awakeFromNib];
    
    [recording]
        bool buttonState = self.audioRecord.isSelected;
        [self.audioRecord setSelected:!buttonState];

        if ([self.audioRecord isSelected]) {
            [self.recorder startRecord];
        } else {
            [self.recorder stopRecord];
        }
    

    SpeakHereController

    #import "SpeakHereController.h"
    
    @implementation SpeakHereController
    
    @synthesize player;
    @synthesize recorder;
    
    @synthesize btn_record;
    @synthesize btn_play;
    @synthesize fileDescription;
    @synthesize lvlMeter_in;
    @synthesize playbackWasInterrupted;
    
    char *OSTypeToStr(char *buf, OSType t)
    {
        char *p = buf;
        char str[4], *q = str;
        *(UInt32 *)str = CFSwapInt32(t);
        for (int i = 0; i < 4; ++i) {
            if (isprint(*q) && *q != '\\')
                *p++ = *q++;
            else {
                sprintf(p, "\\x%02x", *q++);
                p += 4;
            }
        }
        *p = '\0';
        return buf;
    }
    
    -(void)setFileDescriptionForFormat: (CAStreamBasicDescription)format withName:(NSString*)name
    {
        char buf[5];
        const char *dataFormat = OSTypeToStr(buf, format.mFormatID);
        NSString* description = [[NSString alloc] initWithFormat:@"(%d ch. %s @ %g Hz)", format.NumberChannels(), dataFormat, format.mSampleRate, nil];
        fileDescription.text = description;
        [description release];
    }
    
    #pragma mark Playback routines
    
    -(void)stopPlayQueue
    {
    //  player->StopQueue();
        [lvlMeter_in setAq: nil];
        btn_record.enabled = YES;
    }
    
    -(void)pausePlayQueue
    {
    //  player->PauseQueue();
        playbackWasPaused = YES;
    }
    
    
    -(void)startRecord
    {
        //    recorder = new AQRecorder();
    
        if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
        {
            [self stopRecord];
        }
        else // If we're not recording, start.
        {
            //      btn_play.enabled = NO;
    
            // Set the button's state to "stop"
            //      btn_record.title = @"Stop";
    
            // Start the recorder
            recorder->StartRecord(CFSTR("recordedFile.caf"));
    
            [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];
    
            // Hook the level meter up to the Audio Queue for the recorder
            //      [lvlMeter_in setAq: recorder->Queue()];
        }
    }
    
    - (void)stopRecord
    {
        // Disconnect our level meter from the audio queue
    //  [lvlMeter_in setAq: nil];
    
        recorder->StopRecord();
    
        // dispose the previous playback queue
    //  player->DisposeQueue(true);
    
        // now create a new queue for the recorded file
        recordFilePath = (CFStringRef)[NSTemporaryDirectory() stringByAppendingPathComponent: @"recordedFile.caf"];
    //  player->CreateQueueForFile(recordFilePath);
    
        // Set the button's state back to "record"
    //  btn_record.title = @"Record";
    //  btn_play.enabled = YES;
    }
    
    - (IBAction)play:(id)sender
    {
        if (player->IsRunning())
        {
            if (playbackWasPaused) {
    //          OSStatus result = player->StartQueue(true);
    //          if (result == noErr)
    //              [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
            }
            else
    //          [self stopPlayQueue];
                nil;
        }
        else
        {
    //      OSStatus result = player->StartQueue(false);
    //      if (result == noErr)
    //          [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
        }
    }
    
    - (IBAction)record:(id)sender
    {
        if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
        {
            [self stopRecord];
        }
        else // If we're not recording, start.
        {
    //      btn_play.enabled = NO;
    //
    //      // Set the button's state to "stop"
    //      btn_record.title = @"Stop";
    
            // Start the recorder
            recorder->StartRecord(CFSTR("recordedFile.caf"));
    
            [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];
    
            // Hook the level meter up to the Audio Queue for the recorder
            [lvlMeter_in setAq: recorder->Queue()];
        }
    }
    #pragma mark AudioSession listeners
    void interruptionListener(  void *  inClientData,
                                UInt32  inInterruptionState)
    {
        SpeakHereController *THIS = (SpeakHereController*)inClientData;
        if (inInterruptionState == kAudioSessionBeginInterruption)
        {
            if (THIS->recorder->IsRunning()) {
                [THIS stopRecord];
            }
            else if (THIS->player->IsRunning()) {
                //the queue will stop itself on an interruption, we just need to update the UI
                [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
                THIS->playbackWasInterrupted = YES;
            }
        }
        else if ((inInterruptionState == kAudioSessionEndInterruption) && THIS->playbackWasInterrupted)
        {
            // we were playing back when we were interrupted, so reset and resume now
    //      THIS->player->StartQueue(true);
            [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:THIS];
            THIS->playbackWasInterrupted = NO;
        }
    }
    
    void propListener(  void *                  inClientData,
                        AudioSessionPropertyID  inID,
                        UInt32                  inDataSize,
                        const void *            inData)
    {
        SpeakHereController *THIS = (SpeakHereController*)inClientData;
        if (inID == kAudioSessionProperty_AudioRouteChange)
        {
            CFDictionaryRef routeDictionary = (CFDictionaryRef)inData;
            //CFShow(routeDictionary);
            CFNumberRef reason = (CFNumberRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_Reason));
            SInt32 reasonVal;
            CFNumberGetValue(reason, kCFNumberSInt32Type, &reasonVal);
            if (reasonVal != kAudioSessionRouteChangeReason_CategoryChange)
            {
                /*CFStringRef oldRoute = (CFStringRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_OldRoute));
                if (oldRoute)
                {
                    printf("old route:\n");
                    CFShow(oldRoute);
                }
                else
                    printf("ERROR GETTING OLD AUDIO ROUTE!\n");

                CFStringRef newRoute;
                UInt32 size; size = sizeof(CFStringRef);
                OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);
                if (error) printf("ERROR GETTING NEW AUDIO ROUTE! %d\n", error);
                else
                {
                    printf("new route:\n");
                    CFShow(newRoute);
                }*/
    
                if (reasonVal == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
                {
                    if (THIS->player->IsRunning()) {
                        [THIS pausePlayQueue];
                        [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
                    }
                }
    
                // stop the queue if we had a non-policy route change
                if (THIS->recorder->IsRunning()) {
                    [THIS stopRecord];
                }
            }
        }
        else if (inID == kAudioSessionProperty_AudioInputAvailable)
        {
            if (inDataSize == sizeof(UInt32)) {
                UInt32 isAvailable = *(UInt32*)inData;
                // disable recording if input is not available
                THIS->btn_record.enabled = (isAvailable > 0) ? YES : NO;
            }
        }
    }
    
    #pragma mark Initialization routines
    - (void)awakeFromNib
    {
        // Allocate our singleton instance for the recorder & player object
        recorder = new AQRecorder();
        player = nil;//new AQPlayer();
    
        OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self);
        if (error) printf("ERROR INITIALIZING AUDIO SESSION! %d\n", error);
        else
        {
            UInt32 category = kAudioSessionCategory_PlayAndRecord;
            error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
            if (error) printf("couldn't set audio category!");

            error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self);
            if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);
            UInt32 inputAvailable = 0;
            UInt32 size = sizeof(inputAvailable);

            // we do not want to allow recording if input is not available
            error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
            if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", error);
    //      btn_record.enabled = (inputAvailable) ? YES : NO;

            // we also need to listen to see if input availability changes
            error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
            if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);

            error = AudioSessionSetActive(true);
            if (error) printf("AudioSessionSetActive (true) failed");
        }
        }
    
    //  [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueStopped:) name:@"playbackQueueStopped" object:nil];
    //  [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueResumed:) name:@"playbackQueueResumed" object:nil];
    
    //  UIColor *bgColor = [[UIColor alloc] initWithRed:.39 green:.44 blue:.57 alpha:.5];
    //  [lvlMeter_in setBackgroundColor:bgColor];
    //  [lvlMeter_in setBorderColor:bgColor];
    //  [bgColor release];
    
        // disable the play button since we have no recording to play yet
    //  btn_play.enabled = NO;
    //  playbackWasInterrupted = NO;
    //  playbackWasPaused = NO;
    }
    
    #pragma mark Notification routines
    - (void)playbackQueueStopped:(NSNotification *)note
    {
        btn_play.title = @"Play";
        [lvlMeter_in setAq: nil];
        btn_record.enabled = YES;
    }
    
    - (void)playbackQueueResumed:(NSNotification *)note
    {
        btn_play.title = @"Stop";
        btn_record.enabled = NO;
        [lvlMeter_in setAq: player->Queue()];
    }
    
    #pragma mark Cleanup
    - (void)dealloc
    {
        [btn_record release];
        [btn_play release];
        [fileDescription release];
        [lvlMeter_in release];
    
    //  delete player;
        delete recorder;
    
        [super dealloc];
    }
    
    @end
    

    AQRecorder (the .h has 2 lines of importance

    #define kNumberRecordBuffers    3
    #define kBufferDurationSeconds 5.0
    

    )
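
    Since SetupAudioFormat below configures ulaw as 8 kHz mono with mBytesPerFrame == 1, those two constants translate directly into buffer sizes. A quick worked check of what ComputeRecordBufferSize returns in that case (a sketch; the numbers just restate the code below):

    // For 8 kHz mono mu-law, mBytesPerFrame == 1, so ComputeRecordBufferSize reduces to:
    int frames = (int)ceil(kBufferDurationSeconds * 8000.0); // ceil(5.0 * 8000.0) = 40000
    int bytes  = frames * 1 /* mBytesPerFrame */;            // 40000 bytes per buffer
    // Each of the kNumberRecordBuffers (3) buffers therefore holds 5 s of audio, and
    // MyInputBufferHandler (which does the upload) fires only once per 5 s; shrinking
    // kBufferDurationSeconds is the obvious knob for lower-latency uploads.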

    #include "AQRecorder.h"
    //#include "UploadAudioWrapperInterface.h"
    //#include "RestClient.h"
    
    RestClient * restClient;
    NSData* data;
    
    // ____________________________________________________________________________________
    // Determine the size, in bytes, of a buffer necessary to represent the supplied number
    // of seconds of audio data.
    int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float seconds)
    {
        int packets, frames, bytes = 0;
        try {
            frames = (int)ceil(seconds * format->mSampleRate);
    
            if (format->mBytesPerFrame > 0)
                bytes = frames * format->mBytesPerFrame;
            else {
                UInt32 maxPacketSize;
                if (format->mBytesPerPacket > 0)
                    maxPacketSize = format->mBytesPerPacket;    // constant packet size
                else {
                    UInt32 propertySize = sizeof(maxPacketSize);
                    XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
                                                     &propertySize), "couldn't get queue's maximum output packet size");
                }
                if (format->mFramesPerPacket > 0)
                    packets = frames / format->mFramesPerPacket;
                else
                    packets = frames;   // worst-case scenario: 1 frame in a packet
                if (packets == 0)       // sanity check
                    packets = 1;
                bytes = packets * maxPacketSize;
            }
        } catch (CAXException e) {
            char buf[256];
            fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            return 0;
        }
        return bytes;
    }
    
    // ____________________________________________________________________________________
    // AudioQueue callback function, called when an input buffers has been filled.
    void AQRecorder::MyInputBufferHandler(  void *                              inUserData,
                                            AudioQueueRef                       inAQ,
                                            AudioQueueBufferRef                 inBuffer,
                                            const AudioTimeStamp *              inStartTime,
                                            UInt32                              inNumPackets,
                                            const AudioStreamPacketDescription* inPacketDesc)
    {
        AQRecorder *aqr = (AQRecorder *)inUserData;
    
    
        try {
            if (inNumPackets > 0) {
                // write packets to file
    //          XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
    //                                           inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
    //                     "AudioFileWritePackets failed");
                aqr->mRecordPacket += inNumPackets;
    
    
    
    //            int numBytes = inBuffer->mAudioDataByteSize;
    //            SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData;
    //
    //            for (int i=0; i < numBytes; i++)
    //            {
    //                SInt8 currentData = testBuffer[i];
    //                printf("Current data in testbuffer is %d", currentData);
    //
    //                NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)];
    //            }
    
    
                data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];
    
                [restClient uploadAudioData:data url:nil];
    
            }
    
    
            // if we're not stopping, re-enqueue the buffer so that it gets filled again
            if (aqr->IsRunning())
                XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
        } catch (CAXException e) {
            char buf[256];
            fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
        }
    
    }
    
    AQRecorder::AQRecorder()
    {
        mIsRunning = false;
        mRecordPacket = 0;
    
        data = [[NSData alloc]init];
        restClient = [[RestClient sharedManager]retain];
    }
    
    AQRecorder::~AQRecorder()
    {
        AudioQueueDispose(mQueue, TRUE);
        AudioFileClose(mRecordFile);
    
        if (mFileName){
         CFRelease(mFileName);
        }
    
        [restClient release];
        [data release];
    }
    
    // ____________________________________________________________________________________
    // Copy a queue's encoder's magic cookie to an audio file.
    void AQRecorder::CopyEncoderCookieToFile()
    {
        UInt32 propertySize;
        // get the magic cookie, if any, from the converter
        OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize);
    
        // we can get a noErr result and also a propertySize == 0
        // -- if the file format does support magic cookies, but this file doesn't have one.
        if (err == noErr && propertySize > 0) {
            Byte *magicCookie = new Byte[propertySize];
            UInt32 magicCookieSize;
            XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie");
            magicCookieSize = propertySize; // the converter lies and tells us the wrong size
    
            // now set the magic cookie on the output file
            UInt32 willEatTheCookie = false;
            // the converter wants to give us one; will the file take it?
            err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie);
            if (err == noErr && willEatTheCookie) {
                err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie);
                XThrowIfError(err, "set audio file's magic cookie");
            }
            delete[] magicCookie;
        }
    }
    
    void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
    {
        memset(&mRecordFormat, 0, sizeof(mRecordFormat));
    
        UInt32 size = sizeof(mRecordFormat.mSampleRate);
        XThrowIfError(AudioSessionGetProperty(  kAudioSessionProperty_CurrentHardwareSampleRate,
                                            &size,
                                            &mRecordFormat.mSampleRate), "couldn't get hardware sample rate");
    
        //override samplearate to 8k from device sample rate
    
        mRecordFormat.mSampleRate = 8000.0;
    
        size = sizeof(mRecordFormat.mChannelsPerFrame);
        XThrowIfError(AudioSessionGetProperty(  kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                                            &size,
                                            &mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");
    
    
    //    mRecordFormat.mChannelsPerFrame = 1;
    
        mRecordFormat.mFormatID = inFormatID;
        if (inFormatID == kAudioFormatLinearPCM)
        {
            // if we want pcm, default to signed 16-bit little-endian
            mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
            mRecordFormat.mBitsPerChannel = 16;
            mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
            mRecordFormat.mFramesPerPacket = 1;
        }
    
        if (inFormatID == kAudioFormatULaw) {
    //        NSLog(@"is ulaw");
            mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger;
            mRecordFormat.mSampleRate = 8000.0;
    //        mRecordFormat.mFormatFlags = 0;
            mRecordFormat.mFramesPerPacket = 1;
            mRecordFormat.mChannelsPerFrame = 1;
            mRecordFormat.mBitsPerChannel = 16;//was 8
            mRecordFormat.mBytesPerPacket = 1;
            mRecordFormat.mBytesPerFrame = 1;
        }
    }
    
    NSString * GetDocumentDirectory(void)
    {
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
        return basePath;
    }
    
    
    void AQRecorder::StartRecord(CFStringRef inRecordFile)
    {
        int i, bufferByteSize;
        UInt32 size;
        CFURLRef url;
    
        try {
            mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);
    
            // specify the recording format
            SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/);
    
            // create the queue
            XThrowIfError(AudioQueueNewInput(
                                          &mRecordFormat,
                                          MyInputBufferHandler,
                                          this /* userData */,
                                          NULL /* run loop */, NULL /* run loop mode */,
                                          0 /* flags */, &mQueue), "AudioQueueNewInput failed");
    
            // get the record format back from the queue's audio converter --
            // the file may require a more specific stream description than was necessary to create the encoder.
            mRecordPacket = 0;
    
            size = sizeof(mRecordFormat);
            XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
                                             &mRecordFormat, &size), "couldn't get queue's format");
    
            NSString *basePath = GetDocumentDirectory();
            NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile];
    
            url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
    
            // create the audio file
            XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile,
                                              &mRecordFile), "AudioFileCreateWithURL failed");
            CFRelease(url);
    
            // copy the cookie first to give the file object as much info as we can about the data going in
            // not necessary for pcm, but required for some compressed audio
            CopyEncoderCookieToFile();
    
    
            // allocate and enqueue buffers
            bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);   // enough bytes for kBufferDurationSeconds of audio
            for (i = 0; i < kNumberRecordBuffers; ++i) {
                XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                           "AudioQueueAllocateBuffer failed");
                XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                           "AudioQueueEnqueueBuffer failed");
            }
            // start the queue
            mIsRunning = true;
            XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
        }
        catch (CAXException &e) {
            char buf[256];
            fprintf(stderr, "Error: %s (%s)
    ", e.mOperation, e.FormatError(buf));
        }
        catch (...) {
            fprintf(stderr, "An unknown error occurred
    ");
        }
    
    }
    
    void AQRecorder::StopRecord()
    {
        // end recording
        mIsRunning = false;
    //    XThrowIfError(AudioQueueReset(mQueue), "AudioQueueStop failed");
        XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
        // a codec may update its cookie at the end of an encoding session, so reapply it to the file now
        CopyEncoderCookieToFile();
        if (mFileName)
        {
            CFRelease(mFileName);
            mFileName = NULL;
        }
        AudioQueueDispose(mQueue, true);
        AudioFileClose(mRecordFile);
    }
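
    One small leak worth flagging in MyInputBufferHandler above: each callback retains a new NSData into the file-scope data pointer without releasing the previous one. A minimal fix in the same manual retain/release style would be (a sketch, untested here):

    // Sketch: release the previous callback's NSData before replacing it.
    [data release];
    data = [[NSData alloc] initWithBytes:inBuffer->mAudioData
                                  length:inBuffer->mAudioDataByteSize];
    [restClient uploadAudioData:data url:nil];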
    

    Please feel free to comment on or refine my answer; I will accept it as the answer if it's a better solution. Please note this was my first attempt, and I'm sure it is not the most elegant or proper solution.

This concludes this article on streaming audio from an iPhone. Hopefully the answer above is helpful!
