Hi everyone. I have a fairly complex canvas editor that lets users pick a video background and add text, GIFs, and Lottie animations, built with Konva.js and the Gifler library. It has come a long way, but now I'm trying to speed up the performance of my canvas app. I've read a lot about OffscreenCanvas, but I don't really understand it. Say I have a regular HTML canvas element: how would I create an offscreen canvas from it and get the result back into the browser? Ideally I'd like to capture images from the canvas into an array at 30 fps without lag. I'm also concerned that, according to caniuse.com, OffscreenCanvas doesn't seem to be widely supported yet. Whenever I try to create an offscreen canvas from my canvas, I get:
Failed to execute 'transferControlToOffscreen' on
'HTMLCanvasElement': Cannot transfer control from a canvas that has a rendering context.
Like I said, I'm just trying to figure out how to render the animations smoothly, but I'm not sure how to go about it. Any help here would be greatly appreciated. Here is the code.
<template>
<div>
<button @click="render">Render</button>
<h2>Backgrounds</h2>
<template v-for="background in backgrounds">
<img
:src="background.poster"
class="backgrounds"
@click="changeBackground(background.video)"
/>
</template>
<h2>Images</h2>
<template v-for="image in images">
<img
:src="image.source"
@click="addImage(image)"
class="images"
/>
</template>
<br />
<button @click="addText">Add Text</button>
<button v-if="selectedNode" @click="removeNode">
Remove selected {{ selectedNode.type }}
</button>
<label>Font:</label>
<select v-model="selectedFont">
<option value="Arial">Arial</option>
<option value="Courier New">Courier New</option>
<option value="Times New Roman">Times New Roman</option>
<option value="Desoto">Desoto</option>
<option value="Kalam">Kalam</option>
</select>
<label>Font Size</label>
<input type="number" v-model="selectedFontSize" />
<label>Font Style:</label>
<select v-model="selectedFontStyle">
<option value="normal">Normal</option>
<option value="bold">Bold</option>
<option value="italic">Italic</option>
</select>
<label>Color:</label>
<input type="color" v-model="selectedColor" />
<button
v-if="selectedNode && selectedNode.type === 'text'"
@click="updateText"
>
Update Text
</button>
<template v-if="selectedNode && selectedNode.lottie">
<input type="text" v-model="text">
<button @click="updateAnim(selectedNode.image)">
Update Animation
</button>
</template>
<br />
<video
id="preview"
v-show="preview"
:src="preview"
:width="width"
:height="height"
preload="auto"
controls
/>
<a v-if="file" :href="file" download="dopeness.mp4">download</a>
<div id="container"></div>
</div>
</template>
<script>
import lottie from "lottie-web";
import * as anim from "../AEAnim/anim.json";
import * as anim2 from "../AEAnim/anim2.json";
import * as anim3 from "../AEAnim/anim3.json";
import * as anim4 from "../AEAnim/anim4.json";
import * as anim5 from "../AEAnim/anim5.json";
export default {
data() {
return {
source: null,
stage: null,
layer: null,
video: null,
animations: [],
text: "",
animationData: null,
captures: [],
backgrounds: [
{
poster: "/api/files/stock/3oref310k1uud86w/poster/poster.jpg",
video:
"/api/files/stock/3oref310k1uud86w/main/1080/3oref310k1uud86w_1080.mp4"
},
{
poster: "/api/files/stock/3yj2e30tk5x6x0ww/poster/poster.jpg",
video:
"/api/files/stock/3yj2e30tk5x6x0ww/main/1080/3yj2e30tk5x6x0ww_1080.mp4"
},
{
poster: "/api/files/stock/2ez931ik1mggd6j/poster/poster.jpg",
video:
"/api/files/stock/2ez931ik1mggd6j/main/1080/2ez931ik1mggd6j_1080.mp4"
},
{
poster: "/api/files/stock/yxrt4ej4jvimyk15/poster/poster.jpg",
video:
"/api/files/stock/yxrt4ej4jvimyk15/main/1080/yxrt4ej4jvimyk15_1080.mp4"
},
{
poster:
"https://images.costco-static.com/ImageDelivery/imageService?profileId=12026540&itemId=100424771-847&recipeName=680",
video: "/api/files/jedi/surfing.mp4"
},
{
poster:
"https://thedefensepost.com/wp-content/uploads/2018/04/us-soldiers-afghanistan-4308413-1170x610.jpg",
video: "/api/files/jedi/soldiers.mp4"
}
],
images: [
{ source: "/api/files/jedi/solo.jpg" },
{ source: "api/files/jedi/yoda.jpg" },
{ source: "api/files/jedi/yodaChristmas.jpg" },
{ source: "api/files/jedi/darthMaul.jpg" },
{ source: "api/files/jedi/darthMaul1.jpg" },
{ source: "api/files/jedi/trump.jpg" },
{ source: "api/files/jedi/hat.png" },
{ source: "api/files/jedi/trump.png" },
{ source: "api/files/jedi/bernie.png" },
{ source: "api/files/jedi/skywalker.png" },
{ source: "api/files/jedi/vader.gif" },
{ source: "api/files/jedi/vader2.gif" },
{ source: "api/files/jedi/yoda.gif" },
{ source: "api/files/jedi/kylo.gif" },
{
source: "https://media3.giphy.com/media/R3IxJW14a3QNa/source.gif",
animation: anim
},
{
source: "https://bestanimations.com/Text/Cool/cool-story-3.gif",
animation: anim2
},
{
source: "https://freefrontend.com/assets/img/css-text-animations/HTML-CSS-Animated-Text-Fill.gif",
animation: anim3
},
{
source: "api/files/jedi/zoomer.gif",
animation: anim4
},
{
source: "api/files/jedi/slider.gif",
animation: anim5
}
],
backgroundVideo: null,
imageGroups: [],
anim: null,
selectedNode: null,
selectedFont: "Arial",
selectedColor: "black",
selectedFontSize: 20,
selectedFontStyle: "normal",
width: 1920,
height: 1080,
texts: [],
preview: null,
file: null,
canvas: null
};
},
mounted: function() {
this.initCanvas();
},
methods: {
changeBackground(source) {
this.source = source;
this.video.src = this.source;
this.anim.stop();
this.anim.start();
this.video.play();
},
removeNode() {
if (this.selectedNode && this.selectedNode.type === "text") {
// Konva's destroy() takes no arguments
this.selectedNode.transformer.destroy();
this.selectedNode.text.destroy();
this.texts.splice(this.selectedNode.text.index - 1, 1);
this.selectedNode = null;
this.layer.draw();
} else if (this.selectedNode && this.selectedNode.type == "image") {
this.selectedNode.group.destroy();
this.imageGroups.splice(this.selectedNode.group.index - 1, 1);
if (this.selectedNode.lottie) {
// stop the interval driving this Lottie animation before destroying it
const entry = this.animations.find(a => a.lottie === this.selectedNode.lottie);
if (entry) {
clearInterval(entry.interval);
this.animations.splice(this.animations.indexOf(entry), 1);
}
this.selectedNode.lottie.destroy();
}
this.selectedNode = null;
this.layer.draw();
}
},
async addImage(imageToAdd, isUpdate) {
let lottieAnimation = null;
let imageObj = null;
const type = imageToAdd.source.slice(imageToAdd.source.lastIndexOf("."));
const vm = this;
function process(img) {
return new Promise((resolve, reject) => {
img.onload = () => resolve({ width: img.width, height: img.height });
});
}
imageObj = new Image();
imageObj.src = imageToAdd.source;
imageObj.width = 200;
imageObj.height = 200;
await process(imageObj);
if (type === ".gif" && !imageToAdd.animation) {
const canvas = document.createElement("canvas");
canvas.setAttribute("id", "gif");
async function onDrawFrame(ctx, frame) {
ctx.drawImage(frame.buffer, frame.x, frame.y);
// redraw the layer
vm.layer.draw();
}
gifler(imageToAdd.source).frames(canvas, onDrawFrame);
canvas.onload = async () => {
canvas.parentNode.removeChild(canvas);
};
imageObj = canvas;
const gif = new Image();
gif.src = imageToAdd.source;
const gifImage = await process(gif);
imageObj.width = gifImage.width;
imageObj.height = gifImage.height;
} else if (imageToAdd.animation) {
if(!isUpdate){this.text = "new text";}
const canvas = document.createElement("canvas");
// size the canvas drawing buffer itself, not just the CSS box
canvas.width = 1920;
canvas.height = 1080;
canvas.setAttribute("id", "animationCanvas");
const ctx = canvas.getContext("2d");
const div = document.createElement("div");
div.setAttribute("id", "animationContainer");
div.style.display = "none";
canvas.style.display = "none";
this.animationData = imageToAdd.animation.default;
for(let i =0; i <this.animationData.layers.length; i++){
for(let b =0; b<this.animationData.layers[i].t.d.k.length; b++){
this.animationData.layers[i].t.d.k[b].s.t = this.text;
}
}
lottieAnimation = lottie.loadAnimation({
container: div, // the dom element that will contain the animation
renderer: "svg",
loop: true,
autoplay: true,
animationData: this.animationData
});
lottieAnimation.imgSrc = imageToAdd.source;
lottieAnimation.text = this.text;
const svg = div.getElementsByTagName("svg")[0];
const timer = setInterval(async () => {
const xml = new XMLSerializer().serializeToString(svg);
const svg64 = window.btoa(xml);
const b64Start = "data:image/svg+xml;base64,";
const image64 = b64Start + svg64;
imageObj = new Image({ width: canvas.width, height: canvas.height });
imageObj.src = image64;
await process(imageObj);
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.drawImage(imageObj, 0, 0, canvas.width, canvas.height);
this.layer.batchDraw();
}, 1000 / 30);
this.animations.push({ lottie: lottieAnimation, interval: timer });
imageObj = canvas;
canvas.onload = async () => {
canvas.parentNode.removeChild(canvas);
};
}
const image = new Konva.Image({
x: 50,
y: 50,
image: imageObj,
width: imageObj.width,
height: imageObj.height,
strokeWidth: 10,
stroke: "blue",
strokeEnabled: false
});
const group = new Konva.Group({
draggable: true
});
// add the shape to the layer
addAnchor(group, 0, 0, "topLeft");
addAnchor(group, imageObj.width, 0, "topRight");
addAnchor(group, imageObj.width, imageObj.height, "bottomRight");
addAnchor(group, 0, imageObj.height, "bottomLeft");
imageObj = null;
image.on("click", function () {
vm.hideAllHelpers();
vm.selectedNode = {
type: "image",
group,
lottie: lottieAnimation,
image: imageToAdd
};
if(lottieAnimation && lottieAnimation.text){vm.text = lottieAnimation.text}
group.find("Circle").show();
vm.layer.draw();
});
image.on("mouseover", function(evt) {
if (vm.selectedNode && vm.selectedNode.type === "image") {
const index = image.getParent().index;
const groupId = vm.selectedNode.group.index;
if (index != groupId) {
evt.target.strokeEnabled(true);
vm.layer.draw();
}
} else {
evt.target.strokeEnabled(true);
vm.layer.draw();
}
});
image.on("mouseout", function(evt) {
evt.target.strokeEnabled(false);
vm.layer.draw();
});
vm.hideAllHelpers();
group.find("Circle").show();
group.add(image);
vm.layer.add(group);
vm.imageGroups.push(group);
vm.selectedNode = {
type: "image",
group,
lottie: lottieAnimation,
image: imageToAdd
};
function update(activeAnchor) {
const group = activeAnchor.getParent();
let topLeft = group.get(".topLeft")[0];
let topRight = group.get(".topRight")[0];
let bottomRight = group.get(".bottomRight")[0];
let bottomLeft = group.get(".bottomLeft")[0];
let image = group.get("Image")[0];
let anchorX = activeAnchor.getX();
let anchorY = activeAnchor.getY();
// update anchor positions
switch (activeAnchor.getName()) {
case "topLeft":
topRight.y(anchorY);
bottomLeft.x(anchorX);
break;
case "topRight":
topLeft.y(anchorY);
bottomRight.x(anchorX);
break;
case "bottomRight":
bottomLeft.y(anchorY);
topRight.x(anchorX);
break;
case "bottomLeft":
bottomRight.y(anchorY);
topLeft.x(anchorX);
break;
}
image.position(topLeft.position());
let width = topRight.getX() - topLeft.getX();
let height = bottomLeft.getY() - topLeft.getY();
if (width && height) {
image.width(width);
image.height(height);
}
}
function addAnchor(group, x, y, name) {
let stage = vm.stage;
let layer = vm.layer;
let anchor = new Konva.Circle({
x: x,
y: y,
stroke: "#666",
fill: "#ddd",
strokeWidth: 2,
radius: 8,
name: name,
draggable: true,
dragOnTop: false
});
anchor.on("dragmove", function() {
update(this);
layer.draw();
});
anchor.on("mousedown touchstart", function() {
group.draggable(false);
this.moveToTop();
});
anchor.on("dragend", function() {
group.draggable(true);
layer.draw();
});
// add hover styling
anchor.on("mouseover", function() {
let layer = this.getLayer();
document.body.style.cursor = "pointer";
this.strokeWidth(4);
layer.draw();
});
anchor.on("mouseout", function() {
let layer = this.getLayer();
document.body.style.cursor = "default";
this.strokeWidth(2);
layer.draw();
});
group.add(anchor);
}
},
async updateAnim(image){
this.addImage(image, true);
this.removeNode();
},
hideAllHelpers() {
for (let i = 0; i < this.texts.length; i++) {
this.texts[i].transformer.hide();
}
for (let b = 0; b < this.imageGroups.length; b++) {
this.imageGroups[b].find("Circle").hide();
}
},
async startRecording(duration) {
const chunks = []; // here we will store our recorded media chunks (Blobs)
const stream = this.canvas.captureStream(30); // grab our canvas MediaStream
const rec = new MediaRecorder(stream, {
videoBitsPerSecond: 20000 * 1000
});
// every time the recorder has new data, we will store it in our array
rec.ondataavailable = e => chunks.push(e.data);
// only when the recorder stops, we construct a complete Blob from all the chunks
rec.onstop = async e => {
this.anim.stop();
const blob = new Blob(chunks, {
type: "video/webm"
});
this.preview = await URL.createObjectURL(blob);
const video = window.document.getElementById("preview");
const previewVideo = new Konva.Image({
image: video,
draggable: false,
width: this.width,
height: this.height
});
this.layer.add(previewVideo);
console.log("video", video);
video.addEventListener("ended", () => {
console.log("preview ended");
if (!this.file) {
const vid = new Whammy.fromImageArray(this.captures, 30);
this.file = URL.createObjectURL(vid);
}
previewVideo.destroy();
this.anim.stop();
this.anim.start();
this.video.play();
});
let seekResolve;
video.addEventListener("seeked", async () => {
if (seekResolve) seekResolve();
});
video.addEventListener("loadeddata", async () => {
let interval = 1 / 30;
let currentTime = 0;
while (currentTime <= duration && !this.file) {
video.currentTime = currentTime;
await new Promise(r => (seekResolve = r));
this.layer.draw();
let base64ImageData = this.canvas.toDataURL("image/webp");
this.captures.push(base64ImageData);
currentTime += interval;
video.currentTime = currentTime;
}
this.layer.draw();
});
};
rec.start();
setTimeout(() => rec.stop(), duration);
},
async render() {
this.captures = [];
this.preview = null;
this.file = null;
this.hideAllHelpers();
this.selectedNode = null;
this.video.currentTime = 0;
this.video.loop = false;
const duration = this.video.duration * 1000;
this.startRecording(duration);
this.layer.draw();
},
updateText() {
if (this.selectedNode && this.selectedNode.type === "text") {
const text = this.selectedNode.text;
const transformer = this.selectedNode.transformer;
text.fontSize(this.selectedFontSize);
text.fontFamily(this.selectedFont);
text.fontStyle(this.selectedFontStyle);
text.fill(this.selectedColor);
this.layer.draw();
}
},
addText() {
const vm = this;
const text = new Konva.Text({
text: "new text " + (vm.texts.length + 1),
x: 50,
y: 80,
fontSize: this.selectedFontSize,
fontFamily: this.selectedFont,
fontStyle: this.selectedFontStyle,
fill: this.selectedColor,
align: "center",
width: this.width * 0.5,
draggable: true
});
const transformer = new Konva.Transformer({
node: text,
keepRatio: true,
enabledAnchors: ["top-left", "top-right", "bottom-left", "bottom-right"]
});
text.on("click", async () => {
for (let i = 0; i < this.texts.length; i++) {
let item = this.texts[i];
if (item.index === text.index) {
let transformer = item.transformer;
this.selectedNode = { type: "text", text, transformer };
this.selectedFontSize = text.fontSize();
this.selectedFont = text.fontFamily();
this.selectedFontStyle = text.fontStyle();
this.selectedColor = text.fill();
vm.hideAllHelpers();
transformer.show();
transformer.moveToTop();
text.moveToTop();
vm.layer.draw();
break;
}
}
});
text.on("mouseover", () => {
transformer.show();
this.layer.draw();
});
text.on("mouseout", () => {
if (
(this.selectedNode &&
this.selectedNode.text &&
this.selectedNode.text.index != text.index) ||
(this.selectedNode && this.selectedNode.type === "image") ||
!this.selectedNode
) {
transformer.hide();
this.layer.draw();
}
});
text.on("dblclick", () => {
text.hide();
transformer.hide();
vm.layer.draw();
let textPosition = text.absolutePosition();
let stageBox = vm.stage.container().getBoundingClientRect();
let areaPosition = {
x: stageBox.left + textPosition.x,
y: stageBox.top + textPosition.y
};
let textarea = document.createElement("textarea");
window.document.body.appendChild(textarea);
textarea.value = text.text();
textarea.style.position = "absolute";
textarea.style.top = areaPosition.y + "px";
textarea.style.left = areaPosition.x + "px";
textarea.style.width = text.width() - text.padding() * 2 + "px";
textarea.style.height = text.height() - text.padding() * 2 + 5 + "px";
textarea.style.fontSize = text.fontSize() + "px";
textarea.style.border = "none";
textarea.style.padding = "0px";
textarea.style.margin = "0px";
textarea.style.overflow = "hidden";
textarea.style.background = "none";
textarea.style.outline = "none";
textarea.style.resize = "none";
textarea.style.lineHeight = text.lineHeight();
textarea.style.fontFamily = text.fontFamily();
textarea.style.transformOrigin = "left top";
textarea.style.textAlign = text.align();
textarea.style.color = text.fill();
let rotation = text.rotation();
let transform = "";
if (rotation) {
transform += "rotateZ(" + rotation + "deg)";
}
let px = 0;
let isFirefox =
navigator.userAgent.toLowerCase().indexOf("firefox") > -1;
if (isFirefox) {
px += 2 + Math.round(text.fontSize() / 20);
}
transform += "translateY(-" + px + "px)";
textarea.style.transform = transform;
textarea.style.height = "auto";
textarea.focus();
// start
function removeTextarea() {
textarea.parentNode.removeChild(textarea);
window.removeEventListener("click", handleOutsideClick);
text.show();
transformer.show();
transformer.forceUpdate();
vm.layer.draw();
}
function setTextareaWidth(newWidth) {
if (!newWidth) {
// set width for placeholder
newWidth = text.placeholder.length * text.fontSize();
}
// some extra fixes on different browsers
let isSafari = /^((?!chrome|android).)*safari/i.test(
navigator.userAgent
);
let isFirefox =
navigator.userAgent.toLowerCase().indexOf("firefox") > -1;
if (isSafari || isFirefox) {
newWidth = Math.ceil(newWidth);
}
let isEdge =
document.documentMode || /Edge/.test(navigator.userAgent);
if (isEdge) {
newWidth += 1;
}
textarea.style.width = newWidth + "px";
}
textarea.addEventListener("keydown", function(e) {
// hide on enter
// but don't hide on shift + enter
if (e.keyCode === 13 && !e.shiftKey) {
text.text(textarea.value);
removeTextarea();
}
// on esc do not set value back to node
if (e.keyCode === 27) {
removeTextarea();
}
});
textarea.addEventListener("keydown", function(e) {
let scale = text.getAbsoluteScale().x;
setTextareaWidth(text.width() * scale);
textarea.style.height = "auto";
textarea.style.height =
textarea.scrollHeight + text.fontSize() + "px";
});
function handleOutsideClick(e) {
if (e.target !== textarea) {
text.text(textarea.value);
removeTextarea();
}
}
setTimeout(() => {
window.addEventListener("click", handleOutsideClick);
});
// end
});
text.transformer = transformer;
this.texts.push(text);
this.layer.add(text);
this.layer.add(transformer);
this.hideAllHelpers();
this.selectedNode = { type: "text", text, transformer };
transformer.show();
this.layer.draw();
},
initCanvas() {
const vm = this;
this.stage = new Konva.Stage({
container: "container",
width: vm.width,
height: vm.height
});
this.layer = new Konva.Layer();
this.stage.add(this.layer);
let video = document.createElement("video");
video.setAttribute("id", "video");
video.setAttribute("ref", "video");
if (this.source) {
video.src = this.source;
}
video.preload = "auto";
video.loop = "loop";
video.style.display = "none";
this.video = video;
this.backgroundVideo = new Konva.Image({
image: vm.video,
draggable: false
});
this.video.addEventListener("loadedmetadata", function(e) {
vm.backgroundVideo.width(vm.width);
vm.backgroundVideo.height(vm.height);
});
this.video.addEventListener("ended", () => {
console.log("the video ended");
this.anim.stop();
this.anim.start();
this.video.loop = "loop";
this.video.play();
});
this.anim = new Konva.Animation(function() {
console.log("animation called");
// do nothing, animation just need to update the layer
}, vm.layer);
this.layer.add(this.backgroundVideo);
this.layer.batchDraw();
const canvas = document.getElementsByTagName("canvas")[0];
canvas.style.border = "3px solid red";
this.canvas = canvas;
}
}
};
</script>
<style scoped>
body {
margin: 0;
padding: 0;
background-color: #f0f0f0;
}
.backgrounds,
.images {
width: 100px;
height: 100px;
padding-left: 2px;
padding-right: 2px;
}
</style>
Best answer
About the error message
Just as you cannot request context "B" after context "A" has already been obtained from a canvas, you cannot transfer control of a DOM canvas to an OffscreenCanvas once a rendering context has been requested from it.
Here you are using the Konva.js library (which I don't know in detail) to initialize your DOM canvas. That library needs to access one of the canvas's available contexts (apparently the "2d" context). This means that by the time you reach that canvas, the library has already requested a context, and you can no longer transfer control of it to an OffscreenCanvas.
There is this issue in the library's repository stating that, no later than 12 days ago, they added some initial support for OffscreenCanvas. So I invite you to look at their example for how to proceed with that library.
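For the error itself, here is a minimal sketch (plain DOM, independent of Konva) of the required order of operations: transferControlToOffscreen() must run before any rendering context has been requested from the element.

const el = document.createElement("canvas");
// el.getContext("2d") here would "claim" the canvas and make the next line throw
const offscreen = el.transferControlToOffscreen(); // OK: no context requested yet
const ctx = offscreen.getContext("2d");            // draw on the OffscreenCanvas instead
ctx.fillRect(0, 0, 100, 100);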
About OffscreenCanvas performance
An OffscreenCanvas by itself offers no performance gain over a regular canvas. It will not magically make code that runs at 10 FPS run at 60 FPS.
What it allows is to not block the main thread, and to not be blocked by it. For that, you need to transfer it to a Web Worker (a minimal sketch of that transfer follows the two cases below).
This means it can be useful:
if you fear your canvas code will block the UI, but you don't strictly need a smooth animation;
if you fear your main thread might slow down your canvas animation, for instance because a lot of other things are going on on the page.
But in your case it sounds like only your code is running, so you would probably not gain anything from it.
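Here is what such a transfer could look like, as a rough sketch assuming a dedicated worker file (the name offscreen-worker.js is made up for the example):

// main thread
const el = document.querySelector("canvas");
const offscreen = el.transferControlToOffscreen();
const worker = new Worker("offscreen-worker.js");
// the OffscreenCanvas must be listed as a transferable object
worker.postMessage({ canvas: offscreen }, [offscreen]);

// offscreen-worker.js
self.onmessage = (e) => {
  const ctx = e.data.canvas.getContext("2d");
  const draw = () => {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    // ...draw the current frame here...
    requestAnimationFrame(draw); // rAF is available in workers that support OffscreenCanvas
  };
  requestAnimationFrame(draw);
};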
About OffscreenCanvas limitations
We saw that to really take advantage of an OffscreenCanvas, we need to run it in the parallel thread of a Web Worker. However, Web Workers have no access to the DOM.
That is a huge limitation that makes a lot of things much harder to deal with.
For instance, to draw a video, there is currently no other way than playing it through a <video> element first. The Worker script cannot access that <video> element, nor can it create one in its own thread. So the only solution is to create an ImageBitmap on the main thread and pass it to your Worker script.
All the hard work (video decoding + bitmap generation) is therefore done on the main thread. It is worth noting that even though createImageBitmap() returns a Promise, when a video is used as the source, browsers can only create the bitmap from the video synchronously.
So by the time that ImageBitmap is usable in your worker, you have actually been loading the main thread, and if the main thread is locked doing something else, you obviously have to wait for it to finish before you can grab a frame from the video.
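As a rough sketch of that workflow (assuming worker and a ctx obtained from the transferred OffscreenCanvas, as in the previous sketch):

// main thread: the <video> element can only live here
const video = document.querySelector("video");

async function pushFrame() {
  // createImageBitmap() returns a Promise, but for a video source the
  // decoding still happens synchronously on this (main) thread
  const bitmap = await createImageBitmap(video);
  worker.postMessage({ frame: bitmap }, [bitmap]); // transfer, don't copy
}

// worker: draw the received frame onto the OffscreenCanvas
self.onmessage = (e) => {
  if (e.data.frame) {
    ctx.drawImage(e.data.frame, 0, 0);
    e.data.frame.close(); // release the bitmap's memory
  }
};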
The other big limitation is that, currently*, Web Workers cannot react to DOM events. So you have to set up a proxy that forwards the events received on the main thread to your Worker thread. Once again, this requires your main thread to be free (not busy), and it means a lot of new code.
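Such a proxy could look roughly like this (a sketch only; the hit-testing Konva gives you for free on the main thread would have to be reimplemented by hand in the worker):

// main thread: forward pointer events to the worker
canvasElement.addEventListener("pointermove", (e) => {
  worker.postMessage({ type: "pointermove", x: e.offsetX, y: e.offsetY });
});

// worker: react to the proxied coordinates
self.onmessage = (e) => {
  if (e.data.type === "pointermove") {
    // hover/selection logic has to be redone by hand here
  }
};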
About your code
Because yes, I think this is where you actually have room to improve performance.
I only skimmed through it quickly, but I already see that in some places you are calling setInterval at a high rate. Don't. If you need to animate something visible, always use requestAnimationFrame; if you don't need full speed, add internal logic to skip frames, but keep rAF as the main engine.
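For example, the 1000 / 30 interval that refreshes the Lottie snapshot could be replaced by a rAF loop that skips frames to stay around 30 FPS (a sketch reusing vm.layer from the component above):

const FRAME_INTERVAL = 1000 / 30;
let last = 0;

function tick(now) {
  if (now - last >= FRAME_INTERVAL) {
    last = now;
    // ...refresh the Lottie snapshot / redraw here...
    vm.layer.batchDraw();
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);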
You are asking the browser to do heavy operations on every frame. For instance, your SVG part is building a complete new SVG markup from the DOM node every frame, then loading that markup into an <img> (which means the browser has to spin up a whole new DOM for that image), and then drawing it on a canvas.
That by itself will be hard to fix at a high frame rate, and an OffscreenCanvas will not help with it.
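One small mitigation, sketched under the assumption that svg, canvas, ctx and vm are the same variables as in addImage above: it does not remove the serialization cost described here, but it at least reuses a single Image instead of allocating a new one (and awaiting a new load) on every tick.

const snapshot = new Image();
snapshot.onload = () => {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(snapshot, 0, 0, canvas.width, canvas.height);
  vm.layer.batchDraw();
};

function refreshSnapshot() {
  const xml = new XMLSerializer().serializeToString(svg);
  snapshot.src = "data:image/svg+xml;base64," + window.btoa(xml);
}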
You are storing all the frames as still images in order to generate the final video. That will eat up a lot of memory.
There are probably a lot more things like this in your code, so review it thoroughly and look for whatever is keeping it from reaching the screen refresh rate. Improve what can be improved, look for alternatives (for instance, where it is supported, MediaRecorder is probably a better option than Whammy), and good luck.
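For instance, a sketch of leaning on MediaRecorder alone (the names mirror startRecording in the component; the per-frame toDataURL captures and the Whammy pass would go away entirely):

const stream = this.canvas.captureStream(30);
const rec = new MediaRecorder(stream, { mimeType: "video/webm" });
const chunks = [];
rec.ondataavailable = (e) => chunks.push(e.data);
rec.onstop = () => {
  const blob = new Blob(chunks, { type: "video/webm" });
  this.file = URL.createObjectURL(blob); // downloadable result, no Whammy step
};
rec.start();
setTimeout(() => rec.stop(), duration);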
*There is an ongoing proposal that would address this.
Original question on Stack Overflow: javascript - Understanding offscreen canvas for better performance: https://stackoverflow.com/questions/60870336/