Problem Description
I'm trying to send audio, obtained by getUserMedia() and altered with the Web Audio API, over a WebRTC PeerConnection. The Web Audio API and WebRTC seem capable of this, but I'm having trouble understanding how it can be done. Within the Web Audio API, the AudioContext object contains a method createMediaStreamSource(), which provides a way to connect the MediaStream obtained by getUserMedia(). There is also a createMediaStreamDestination() method, which seems to return an object with a stream attribute.
I'm getting both audio and video from the getUserMedia() method. What I'm having trouble with is how I would pass this stream object (with both audio and video) into those methods (e.g. createMediaStreamSource()). Do I first need to somehow extract the audio from the stream (getAudioTracks) and find a way to combine it back with the video? Or do I pass it as is, leaving the video unaffected? Can the audio only be altered once (before being added to the PeerConnection)?
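For reference, a minimal sketch of the setup in question (variable names are illustrative; the promise-based navigator.mediaDevices.getUserMedia API is assumed here, while the original question likely used the older prefixed callback form):

var context = new AudioContext();
var localStream;

// Capture a stream containing both an audio and a video track.
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (stream) {
    localStream = stream; // both audio and video tracks live here
  })
  .catch(function (err) {
    console.error("getUserMedia failed:", err);
  });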
Recommended Answer
The createMediaStreamSource() method takes a MediaStream object as its parameter and uses the first audio MediaStreamTrack from that object as the audio source. This works with the MediaStream object received from the getUserMedia() method even if that object contains both audio and video. For instance:
var source = context.createMediaStreamSource(localStream);
Where "context", in the above code, is an AudioContext
object and "localStream" is a MediaStream object obtained from getUserMedia(). The createMediaStreamDestination()
method creates a destination node object which has a MediaStream object within its "stream" attribute. This MediaStream object only contains one AudioMediaStreamTrack (even if the input stream to the source contained both audio and video or numerous audio tracks): the altered version of the track obtained from the stream within the source. For instance:
var destination = context.createMediaStreamDestination();
Now, before you can access the stream attribute of the newly created destination variable, you must build the audio graph by connecting all the nodes together. For this example, let's create a BiquadFilter node named filter and wire it between the source and the destination:
var filter = context.createBiquadFilter(); // the intermediate effect node
source.connect(filter);
filter.connect(destination);
Then, we can obtain the stream attribute from the destination variable, which can be added to the PeerConnection object to send to a remote peer:
peerConnection.addStream(destination.stream);
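For completeness, a hypothetical sketch of the peerConnection object assumed above (the ICE server configuration is purely illustrative):

// Hypothetical peer connection setup; the STUN server URL is illustrative.
var peerConnection = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.org" }]
});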
Note: the stream attribute contains a MediaStream object with only the altered audio MediaStreamTrack; therefore, no video. If you want video to be sent as well, you'll have to add this track to a stream object that already contains a video track:
var audioTracks = destination.stream.getAudioTracks();
var track = audioTracks[0]; //stream only contains one audio track
localStream.addTrack(track);
peerConnection.addStream(localStream);
Keep in mind that the addTrack method will not add the track if the MediaStream object already contains a track with the same id. Therefore, you may have to first remove the original track that the source node obtained.
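A sketch of what that removal might look like, assuming the original microphone track is the first audio track on localStream:

// Remove the original (unprocessed) audio track before adding
// the altered one from destination.stream.
var originalTrack = localStream.getAudioTracks()[0];
localStream.removeTrack(originalTrack);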
The sound should be able to be altered at any time by adjusting the values within the intermediate nodes (between the source and destination), because the stream passes through those nodes before being sent to the other peer. Check out this example on dynamically changing the effect on a recorded sound (it should work the same for a stream). Note: I have not tested this code yet. Though it works in theory, there may be some cross-browser issues, since both the Web Audio API and WebRTC are in working-draft stages and not yet standardized. I expect it to work in Mozilla Firefox and Google Chrome.
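For instance, a minimal sketch of adjusting the filter mid-stream (the type and frequency values here are illustrative):

// Tweak the BiquadFilter while streaming; the remote peer hears the
// change because the audio keeps flowing through the graph.
filter.type = "lowpass"; // illustrative filter type
filter.frequency.value = 1000; // cutoff frequency in Hz (illustrative)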
References
- Media Capture and Streams
- Web Audio API