Web Audio API: proper way to play data chunks from a Node.js server via socket

Question

I'm using the following code to decode audio chunks coming from a Node.js socket:

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function(stream, meta){
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if ((init!=0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function(err) {
            console.log("err(decodeAudioData): "+err);
        });
    });
});

function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // make the next buffer wait the length of the last buffer before being played
    }
}

But there are gaps/glitches between audio chunks that I'm unable to figure out.

I've also read that with MediaSource it's possible to do the same, with the timing handled by the player instead of manually. Can someone provide an example of handling mp3 data?

Moreover, what is the proper way to handle live streaming with the Web Audio API? I've already read almost all the questions on SO about this subject and none of them seem to work without glitches. Any ideas?

Answer

You can take this code as an example: https://github.com/kmoskwiak/node-tcp-streaming-server

It basically uses Media Source Extensions. All you need to do is change from video to audio:

buffer = mediaSource.addSourceBuffer('audio/mpeg');
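The key detail with MSE is that `SourceBuffer.appendBuffer()` must not be called while a previous append is still in progress, so incoming socket chunks need a small queue drained on the `updateend` event. Below is a minimal sketch of that pattern; `createChunkQueue` is a hypothetical helper name, and the commented-out wiring assumes the same `client` socket and MP3 frames as in the question:

```javascript
// Sketch of the MSE buffering pattern. createChunkQueue is an
// illustrative name; the SourceBuffer calls (updating, appendBuffer,
// 'updateend') are the standard MSE API.
function createChunkQueue(sourceBuffer) {
    var queue = [];
    function flush() {
        // appendBuffer throws if called while a previous append is
        // still in flight, so only append when the buffer is idle
        if (!sourceBuffer.updating && queue.length) {
            sourceBuffer.appendBuffer(queue.shift());
        }
    }
    // drain the queue each time an append finishes
    sourceBuffer.addEventListener('updateend', flush);
    return { push: function (chunk) { queue.push(chunk); flush(); } };
}

// Browser wiring (sketch, not runnable outside a browser):
// var audio = new Audio();
// var mediaSource = new MediaSource();
// audio.src = URL.createObjectURL(mediaSource);
// mediaSource.addEventListener('sourceopen', function () {
//     var chunks = createChunkQueue(mediaSource.addSourceBuffer('audio/mpeg'));
//     client.on('stream', function (stream, meta) {
//         stream.on('data', function (data) { chunks.push(data); });
//     });
//     audio.play();
// });
```

Because the browser's media pipeline decodes and schedules the appended MP3 data itself, you no longer have to compute `nextTime` by hand as in the `decodeAudioData` approach above.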
