Question
I'm planning to make a universal application that analyses audio samples. By 'universal' I mean that any technology (JavaScript, C, Java, etc.) can use it. Basically, I made an application on iOS, using Apple's AVFoundation, that receives the microphone samples in real time with a buffer length of 512 (bufferSize = 512). In Python I did the same thing using PyAudio, but unfortunately I received very different values...
See the samples:
Samples of bufferSize = 512 on iOS:
[0.0166742969, 0.0181432627, 0.0184620395, 0.0182254426, 0.0181945376, 0.0185530782, 0.0192517322, 0.0199078992, 0.0204724055, 0.0212812237, 0.022370765, 0.0230008475, 0.0225516111, 0.0213304944, 0.0200473778, 0.019841563, 0.0206818394, 0.0211550407, 0.0207783803, 0.020227218 ....
Samples of bufferSize = 512 on Python:
[ -52. -32. -11. 10. 24. 31. 37. 38. 33. 25. 10. -4.
-18. -26. -29. -39. ....
More:
Python code:
https://gist.github.com/denisb411/7c6f601175e8bb9f735d8aa43a0db340
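For context, a minimal PyAudio capture loop along these lines would produce 16-bit integer values like the ones above. This is only a sketch: the sample rate, channel count, and blocking-read style are assumptions, not necessarily what the linked gist uses.

import numpy as np
import pyaudio

BUFFER_SIZE = 512            # same buffer length as the iOS app
RATE = 44100                 # assumed sample rate

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,    # 16-bit signed integer samples
                channels=1,
                rate=RATE,
                input=True,
                frames_per_buffer=BUFFER_SIZE)

data = stream.read(BUFFER_SIZE)                  # raw bytes for one buffer
samples = np.frombuffer(data, dtype=np.int16)    # values in [-32768, 32767]
print(samples[:16])

stream.stop_stream()
stream.close()
p.terminate()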
In both cases I used the same computer.
How do I find a way to 'convert' them (not sure if that's the proper word) to the same scale?
If my question isn't clear, please let me know.
Answer
Audio samples are typically quantized on 16 or 24 bits, but there are different conventions for the range of values these samples can take:
- if you quantize on 8 bits, samples will typically be stored as unsigned bytes ranging from 0 to 255
- if you quantize on 16 bits, samples will typically be stored as two's complement signed integers ranging from -32768 to 32767
- if you quantize on 24 bits, samples will typically be stored as signed integers as well
- etc.
Basically, when you decide how to store samples, you have two parameters:
- signed or unsigned
- int or float
Each has its advantages and drawbacks. For instance, storing floats in the range [-1, 1] has the advantage that multiplying two samples always stays in the range [-1, 1]…
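As a concrete illustration of the scale difference (a sketch added here, not part of the original answer): a 16-bit signed sample can be mapped onto the [-1, 1] float scale used by the iOS buffers simply by dividing by 32768.

import numpy as np

# 16-bit signed samples, as delivered by format=pyaudio.paInt16
int16_samples = np.array([-52, -32, -11, 10, 24, 31, 37, 38], dtype=np.int16)

# Map onto the [-1, 1] float range that AVFoundation's float buffers use
float_samples = int16_samples.astype(np.float32) / 32768.0
print(float_samples)    # roughly [-0.0016 -0.0010 -0.0003 0.0003 ...]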
So, to answer your question, you just need to change the format with which you open your PyAudio stream. Currently you use format=pyaudio.paInt16. Try changing it to pyaudio.paFloat32 and you should get the same data as with your iOS implementation.
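A minimal sketch of that change (assuming a blocking-read setup; the actual code may use a callback and different stream parameters):

import numpy as np
import pyaudio

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32,   # was pyaudio.paInt16
                channels=1,
                rate=44100,                 # assumed sample rate
                input=True,
                frames_per_buffer=512)

data = stream.read(512)
samples = np.frombuffer(data, dtype=np.float32)   # now on the same [-1, 1] scale as iOS
print(samples[:16])

stream.stop_stream()
stream.close()
p.terminate()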