Problem description
I'm having trouble with NetStream in AS3. The project I am working on allows users to browse a video (locally) and play it back. The issue I am having is that netStream.seek(0) doesn't appear to do anything as far as I can tell, although my NetStatusEvent handler is called and NetStream.Seek.Notify is triggered. I'm using NativeProcess, and here is the function involved, in case this makes any difference.
public function ProgressEventOutputHandler(e:ProgressEvent):void {
    // Called on every progress event from the native process:
    // a fresh ByteArray is created, filled with whatever bytes are
    // currently available, and appended straight to the NetStream.
    videoByteArray = new ByteArray();
    nativeProcess.standardOutput.readBytes(videoByteArray, 0, nativeProcess.standardOutput.bytesAvailable);
    netStream.appendBytes(videoByteArray);
}
Am I missing something? My NetStream is paused before I call netStream.seek(0);.
Edit:
In an attempt to fix this issue I followed the instructions from VC.One and did the following:
- Moved videoByteArray = new ByteArray(); to my init function and also created tempVideoByteArray = new ByteArray(); in this function.
- Updated my ProgressEventOutputHandler function so that it no longer creates a new ByteArray for videoByteArray, and changed this line: nativeProcess.standardOutput.readBytes(videoByteArray, videoByteArray.length, nativeProcess.standardOutput.bytesAvailable); (a sketch of the resulting handler is below).
I have changed nothing else and now the video will not load. If I allow a new ByteArray to be created inside the ProgressEventOutputHandler function the video does load again.
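For reference, this is roughly what the updated handler looks like after those changes (the way tempVideoByteArray is used here is only a sketch of the intent, not my exact code):

public function ProgressEventOutputHandler(e:ProgressEvent):void {
    var newBytes:uint = nativeProcess.standardOutput.bytesAvailable;
    var writePos:uint = videoByteArray.length;
    // Append the new chunk onto the end of the full buffer (videoByteArray is created once in init).
    nativeProcess.standardOutput.readBytes(videoByteArray, writePos, newBytes);
    // Copy only the newly arrived chunk into the temporary buffer and feed it to the NetStream.
    tempVideoByteArray.clear();
    tempVideoByteArray.writeBytes(videoByteArray, writePos, newBytes);
    netStream.appendBytes(tempVideoByteArray);
}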
Answer
Short version:
Try the code I pasted here: Github Snippet link
Long version:
This one is kind of long, but hopefully it helps once and for all... Don't worry about the brick-wall thing, walls are meant to be smashed. To give you some inspiration, check out some in-house demos from VC that use appendBytes for experiments:
- MP4 Seeking Experiment : research into appendBytes frame-data access and time/seek handling. Real-time conversion of frame bytes from MP4 to FLV format using only AS3 code.
- Speed Adjust of Audio & Video : real-time MP3 audio-in-video separation & effects experiment. Requires an MP4/FLV file with MP3 data in the audio track.
- Synchronised Video Frames : multiple videos displaying by the same frame number.
PS: I'll be using the URLStream method since that's a more useful answer for those loading local or online files. You can change from the urlstream.progressEvent to your usual nativeProcess.progressEvent.
I know FFMPEG but have only used AIR for making Android apps, so for this AIR/FFMPEG connection you know more than me.
Also this answer assumes you're using FLV with MPEG H.264 video & MP3 or AAC audio.
ffmpeg -i input.mp4 -c:v copy -c:a mp3 -b:a 128k -ac 2 -ar 44100 FLV_with_MP3.flv
This assumption matters because it affects what kind of bytes we look for. In the case of the above FLV with H.264 video and AAC or MP3 audio, we can expect the following (when seeking):
- Since this is MPEG, the first video tag will hold the AVC Decoder Config bytes and the first audio tag holds the Audio Specific Config bytes. This data is not actual media frames but simply packaged like an audio/video tag. These are needed for MPEG playback. The same bytes can be found in the STSD metadata entry (MOOV atom) inside an MP4 container. Now the next found video tag will (or should) be the video's actual first frame.
- Video keyframe : begins 0x09 and the next 11th byte is 0x17 & 12th byte is 0x01 (see the byte-check sketch after this list)
- Audio TAG AAC : begins 0x08 and the next 11th byte is 0xAF & 12th byte is 0x01
- Audio TAG MP3 : begins 0x08 and the next 11th byte is 0x2F & 12th byte is 0xFF
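As an illustration of checking those signatures, here is a minimal sketch (the helper name and the direct [] indexing into the ByteArray are my choices, not code from the gist):

// Returns true if the byte at xPos looks like the start of an FLV video keyframe tag.
function looksLikeKeyframeTag(source_BA:ByteArray, xPos:uint):Boolean {
    if (xPos + 12 >= source_BA.length) { return false; }   // not enough bytes downloaded yet
    return (source_BA[xPos]      == 0x09    // video tag marker
         && source_BA[xPos + 11] == 0x17    // keyframe + AVC codec
         && source_BA[xPos + 12] == 0x01);  // AVC NALU, i.e. an actual frame (not decoder config)
}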
You are looking for bytes that represent a video "tag". Apart from the Metadata tag, you can now expect "tag" to mean a container of an audio or video frame. There are two ways to get tag bytes into your "temporary byte array" (we'll name it temp_BA).
- ReadBytes (slow) : extracts the individual byte values within a start/end range in source_BA
- WriteBytes (fast) : instant duplication of a start/end range of bytes from source_BA
ReadBytes explained : tells Source to read its bytes into Target. Source will read forwards up to the given length from its current offset (position). Go to the correct Source position before reading onwards...
source_BA.readBytes( into Target_BA, Pos within Target_BA, length of bytes required );
After the above line executes, the Source position will have moved forward to account for the new length travelled. (formula : Source new Pos = previousPos + BytesLengthRequired)
WriteBytes explained : tells Target to duplicate a range of bytes from Source. It is fast since it copies from already-known information (from Source). Target writes onwards from its current position...
target_BA.writeBytes( from source_BA, Pos within source_BA, length of bytes required );
After the above line executes, note that both Source and Target positions are unchanged.
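A small sketch contrasting the two, using throwaway buffers purely to show the behaviour described above:

import flash.utils.ByteArray;

var source_BA:ByteArray = new ByteArray();
source_BA.writeUTFBytes("....FLV tag bytes....");   // dummy content for illustration
var temp_BA:ByteArray = new ByteArray();

// ReadBytes : Source reads forward FROM ITS OWN position into the Target.
source_BA.position = 0;                             // go to the correct Source position first
source_BA.readBytes(temp_BA, 0, 4);                 // Source position has now advanced by 4

// WriteBytes : Target duplicates an explicit offset/length range of the Source.
temp_BA.clear();                                    // reuse the temporary buffer
temp_BA.writeBytes(source_BA, 0, 4);                // Source position is not consulted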
Use the above methods to get the required tag bytes into temp_BA from a specific source_BA.position = x.
To check any byte (its value), use the methods below to update some variable of int type:
- Read a one-byte value : use my_Integer = source_BA.readByte();
- Read a two-byte value : use my_Integer = source_BA.readUnsignedShort();
- Read a four-byte value : use my_Integer = source_BA.readUnsignedInt();
- Read an eight-byte value into a Number variable : use my_Number = source_BA.readDouble();
note : Don't confuse .readByte(); which extracts a numerical value (of byte) with the similar sounding .readBytes() which copies a chunk of bytes to another byte array.
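For example, continuing with the source_BA / temp_BA names from above and a hypothetical offset xPos, value reads versus a bulk copy look like this:

source_BA.position = xPos;
var tagType:int = source_BA.readByte();            // ONE byte as a number (0x09 = video, 0x08 = audio)
var nextTwo:int = source_BA.readUnsignedShort();   // TWO bytes as one number; position is now xPos + 3

source_BA.position = xPos;
source_BA.readBytes(temp_BA, 0, 11);               // copies a CHUNK of 11 bytes into temp_BA instead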
[ illustration image of Video TAG with Keyframe H264/AAC ]
To find a video keyframe:
- From a starting offset, use a while loop to now travel forward through the bytes, searching each byte for a one-byte value of "9" (hex: 0x09). When found, we check bytes further ahead to confirm that it is indeed a true keyframe and not just a random occurrence of "9". In the case of the H.264 video codec, at the correct "9" byte position (xPos) we expect the 11th & 12th bytes ahead always to be "17" and "01" respectively (a sketch of this loop follows after the list).
- If that is == true then we check the three Tag Size bytes and add 15 to this integer for the total length of bytes expected to be written from Source into Target (temp_BA). We have added 15 to account for the 11 bytes before and also the 4 bytes after the expected TAG DATA. These 4 bytes at the tag ending are the "Previous Tag Size", and this amount actually includes the 11 front bytes but does not count these end 4 bytes themselves.
- We tell temp_BA to write the bytes of Source (your videoByteArray) starting from the pos of the "9" byte (xPos) for a length of "Tag Size" + 15. You have now extracted an MPEG keyframe.
example : temp_BA.writeBytes( videoByteArray, int(xPos), int(TAG_size) );
- This temp_BA with the tag of a keyframe can now be appended using:
example : netStream.appendBytes( temp_BA ); //displays a single frame
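To make those steps concrete, here is a minimal sketch of the scanning loop. The function name, the clear() call and the return value are my own choices, and it relies on a bytes_toInt() helper like the one described in the note below, so treat it as an outline rather than the gist's exact code:

// Scans forward from startPos for a video keyframe tag and copies it into temp_BA.
// Returns the position just after the copied tag, or -1 if no keyframe was found yet.
function extract_Keyframe_Tag(videoByteArray:ByteArray, temp_BA:ByteArray, startPos:uint):int {
    var xPos:uint = startPos;
    while (xPos + 15 < videoByteArray.length) {
        if (videoByteArray[xPos] == 0x09 &&            // video tag marker
            videoByteArray[xPos + 11] == 0x17 &&       // keyframe + AVC codec
            videoByteArray[xPos + 12] == 0x01) {       // actual frame, not decoder config
            // Tag Size lives in the 3 bytes after the tag type; add 15 for the
            // 11-byte tag header plus the 4-byte "Previous Tag Size" at the end.
            var TAG_size:int = bytes_toInt(videoByteArray, xPos + 1, 3) + 15;
            if (xPos + TAG_size > videoByteArray.length) { return -1; }  // tag not fully downloaded
            temp_BA.clear();
            temp_BA.writeBytes(videoByteArray, xPos, TAG_size);
            return int(xPos + TAG_size);
        }
        xPos++;        // (the "searching tip" below skips whole tags instead of single bytes)
    }
    return -1;
}

After a successful extraction, netStream.appendBytes( temp_BA ); displays that single keyframe.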
note : For reading the 3 bytes of Tag Size I will show a custom converting bytes_toInt() function (since processors read either 1, 2 or 4 bytes at once for integers, reading 3 bytes here is an awkward request).
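A minimal version of such a helper could look like this (big-endian order, as FLV stores the Tag Size; the author's own version is in the linked gist and may differ):

// Combines `howMany` bytes starting at `offset` into one integer (big-endian).
function bytes_toInt(ba:ByteArray, offset:uint, howMany:uint):int {
    var result:int = 0;
    for (var i:uint = 0; i < howMany; i++) {
        result = (result << 8) | ba[offset + i];   // shift what we have, append the next byte
    }
    return result;   // e.g. the 3-byte FLV Tag Size
}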
Searching tip : Tags always follow each other in a trail. We can seek faster by also checking if bytes are for a non-keyframe (P-frame) video tag or even some audio tag. If so, then we check that particular tag size and increment our xPos to jump this new length. This way we can skip by whole tag sizes, not just by single bytes, stopping only when we have a keyframe tag.
When you think about it, play is simply like an auto-seek going on a frame-by-frame basis, where the expected speed of getting each next frame is defined by the video's encoded framerate.
So your playback function can simply be a Timer that gets an X-amount of video tags (frames) every second (or 1000 millisecs). You do that as, for example, my_Timer = new Timer( video_FPS ). When the timer runs and reaches each FPS slice of a second, it will run the append_PLAY(); function, which in turn runs a get_frame_Tag(); function.
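A sketch of that timer wiring, assuming video_FPS already holds the per-frame delay in milliseconds (e.g. 1000 / frameRate) and that append_PLAY() exists as described:

import flash.utils.Timer;
import flash.events.TimerEvent;

var my_Timer:Timer = new Timer(video_FPS);            // assumed: delay per frame in millisecs
my_Timer.addEventListener(TimerEvent.TIMER, onPlayTick);
my_Timer.start();

function onPlayTick(e:TimerEvent):void {
    append_PLAY();                                    // appends one tag (frame) per tick
}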
- NS.seek(0) : Puts the NetStream into "seek mode". (The number doesn't matter but must exist in the command.) Any "ahead frames" buffer is cleared and there will be no (image) frame updates until..
- RESET_SEEK : Ends the "seek mode" and now allows image updates. The first tag you append after using the RESET_SEEK command must be a tag with a video keyframe (for audio-only this can be any tag since technically all audio tags are audio keyframes). See the seek sketch after this list.
- END_SEQUENCE : (for MPEG H.264) Plays out any remaining "ahead frames" (drains the buffer). Once drained you can append any type of video tag. Remember H.264 expects forward-moving timestamps; if you see f**ked up pixels then your next tag timestamp is wrong (too high or too low). If you append just one frame (a poster image?) you could use END_SEQUENCE to drain the buffer and display that one frame (without waiting for the buffer to fill up to an x-amount of frames first)...
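Putting those three together, a hedged sketch of a seek routine; appendBytesAction and NetStreamAppendBytesAction are the real Flash APIs, while extract_Keyframe_Tag() is the hypothetical helper sketched earlier:

import flash.net.NetStreamAppendBytesAction;

function seek_to_Keyframe(searchFromPos:uint):void {
    netStream.seek(0);                                                    // enter "seek mode" (number is ignored)
    netStream.appendBytesAction(NetStreamAppendBytesAction.RESET_SEEK);   // end seek mode; a keyframe tag must come next

    if (extract_Keyframe_Tag(videoByteArray, temp_BA, searchFromPos) != -1) {
        netStream.appendBytes(temp_BA);                                   // first appended tag = video keyframe
        netStream.appendBytesAction(NetStreamAppendBytesAction.END_SEQUENCE); // drain so that single frame displays
    }
}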
The play function acts as a middle-man function to manage things without cluttering the get frame function with If statements etc. Managing things means for example checking that there are enough bytes downloaded to even begin getting a frame according to Tag Size.
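As a sketch of that middle-man role (nextTagPos and the inlined tag copy stand in for the gist's get_frame_Tag() bookkeeping, so the names are my own):

var nextTagPos:uint = 0;   // where the next tag is expected to start inside videoByteArray

function append_PLAY():void {
    // Enough bytes for at least an 11-byte tag header + 4-byte Previous Tag Size?
    if (videoByteArray.length < nextTagPos + 15) { return; }

    var tagSize:int = bytes_toInt(videoByteArray, nextTagPos + 1, 3) + 15;
    if (videoByteArray.length < nextTagPos + tagSize) { return; }   // whole tag not downloaded yet

    temp_BA.clear();
    temp_BA.writeBytes(videoByteArray, nextTagPos, tagSize);        // the get_frame_Tag() step
    netStream.appendBytes(temp_BA);                                 // audio or video tag, appended as-is
    nextTagPos += tagSize;
}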
Code is too long.. see this link below: https://gist.github.com/Valerio-Charles-VC1/657054b773dba9ba1cbc
Hope it helps. VC