With ES6 and the modern browser platform, multimedia-related APIs have seen many improvements and additions. For example:
Earlier approaches to loading media were usually callback-based, which tended to make the code complicated and hard to maintain. ES6 Promises make media loading simpler and more readable.
let audio = new Audio();
let audioPath = 'path/to/audio.mp3';

function loadAudio(url) {
  return new Promise((resolve, reject) => {
    // HTMLAudioElement does not fire a 'load' event;
    // 'canplaythrough' means enough data has been buffered to play.
    audio.addEventListener('canplaythrough', () => {
      resolve(audio);
    }, { once: true });
    audio.addEventListener('error', reject, { once: true });
    audio.src = url;
  });
}

loadAudio(audioPath)
  .then((player) => {
    player.play();
  })
  .catch((error) => {
    console.error('Error loading audio', error);
  });
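Another benefit of returning a Promise is that several files can be loaded in parallel and combined with Promise.all. The snippet below is a minimal sketch: the loadTrack helper and the file paths are only illustrative, and each call creates a fresh Audio element so the tracks do not share one player.

// Hypothetical helper: one new Audio element per file,
// so several tracks can be loaded in parallel.
function loadTrack(url) {
  return new Promise((resolve, reject) => {
    const track = new Audio();
    track.addEventListener('canplaythrough', () => resolve(track), { once: true });
    track.addEventListener('error', reject, { once: true });
    track.src = url;
  });
}

// Placeholder paths; start the first track once both are ready.
Promise.all([loadTrack('path/to/intro.mp3'), loadTrack('path/to/loop.mp3')])
  .then(([intro, loop]) => {
    intro.play();
  })
  .catch(() => console.error('Error loading one of the tracks'));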
Modern browsers also provide the MediaStream interface, which allows us to access audio and video streams from devices such as the camera and microphone.
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    // Route the live microphone stream through a gain node
    // so its volume can be adjusted before playback.
    const audioCtx = new AudioContext();
    const source = audioCtx.createMediaStreamSource(stream);
    const gainNode = audioCtx.createGain();
    source.connect(gainNode);
    gainNode.connect(audioCtx.destination);
  }, error => {
    console.error(error);
  });
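When the capture is no longer needed, the stream's tracks should be stopped so the microphone is released and the browser's recording indicator turns off. A minimal sketch, assuming the stream obtained above has been kept in a variable:

// `stream` is assumed to be the MediaStream returned by getUserMedia above.
function stopCapture(stream) {
  stream.getTracks().forEach(track => track.stop());
}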
The Web Audio API is a low-latency API for processing and synthesizing audio in the browser. It lets us build all kinds of sound effects and musical elements, from simple delay and repeat effects to more complex synthesis.
const audioCtx = new AudioContext();
const oscillator = audioCtx.createOscillator();
const gain = audioCtx.createGain();

// Oscillator -> gain -> speakers
oscillator.connect(gain);
gain.connect(audioCtx.destination);

// A quiet 440 Hz (A4) sine tone
oscillator.type = 'sine';
oscillator.frequency.value = 440;
gain.gain.value = 0.1;

// Play for three seconds
oscillator.start();
oscillator.stop(audioCtx.currentTime + 3);
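As an illustration of the simple delay effects mentioned above, a DelayNode can be inserted between a source and the destination. This is only a sketch; the oscillator setup mirrors the previous example and the values are arbitrary.

const ctx = new AudioContext();
const osc = ctx.createOscillator();
const delay = ctx.createDelay();
delay.delayTime.value = 0.5; // echo the signal half a second later

// Dry path (direct) plus a delayed path, both mixed at the destination.
osc.connect(ctx.destination);
osc.connect(delay);
delay.connect(ctx.destination);

osc.frequency.value = 220;
osc.start();
osc.stop(ctx.currentTime + 1);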
VideoContext is a WebGL-based video processing and compositing framework that lets developers manipulate video in real time, applying image filters, cross-fades, split screens, motion trails, and more.
const video = document.querySelector('#myVideo');
const canvas = document.querySelector('#myCanvas');
const videoCtx = new VideoContext(canvas);

// Two source nodes reading from the same video element,
// scheduled on the VideoContext timeline.
const videoNode1 = videoCtx.video(video);
const videoNode2 = videoCtx.video(video);
videoNode1.start(0);
videoNode1.stop(5);
videoNode2.start(4);
videoNode2.stop(10);

// Built-in cross-fade: animate the "mix" property from 0 to 1
// between t=4s and t=5s on the timeline.
const transitionNode = videoCtx.transition(VideoContext.DEFINITIONS.CROSSFADE);
transitionNode.transition(4, 5, 0.0, 1.0, 'mix');

videoNode1.connect(transitionNode);
videoNode2.connect(transitionNode);
transitionNode.connect(videoCtx.destination);
videoCtx.play();
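VideoContext also ships a set of built-in effect definitions for the image filters mentioned above. The following is a rough sketch, assuming the bundled MONOCHROME definition is available in your build of the library; a filter node simply sits between a source node and the destination.

// Sketch only: MONOCHROME is assumed to be one of the bundled definitions.
const filterCtx = new VideoContext(canvas);
const clip = filterCtx.video(video);
clip.start(0);
clip.stop(5);

const monochrome = filterCtx.effect(VideoContext.DEFINITIONS.MONOCHROME);
clip.connect(monochrome);
monochrome.connect(filterCtx.destination);
filterCtx.play();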
The examples above only scratch the surface of this topic; if you are interested, MDN offers more detailed tutorials and documentation.