How to create a video and audio recorder with the JavaScript MediaRecorder API?
In this tutorial, you will learn how to create an audio and video recorder using the JavaScript MediaRecorder API together with WebRTC.
What is WebRTC?
WebRTC stands for Web Real-Time Communication. It lets us access and capture the webcam and microphone available on the user's device.
We can access the user's webcam and microphone through the following JavaScript method:
navigator.mediaDevices.getUserMedia(constraints)
The getUserMedia() method first asks the user for permission to use the webcam and microphone. It returns a promise: once you click "Allow", the promise resolves with a media stream and the webcam is enabled on your system; if you deny access, the promise is rejected and the error can be handled in a catch block.
We can also pass constraints to the getUserMedia() method, for example if we want video of a particular width or height.
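For example, here is a minimal sketch of a getUserMedia() call that requests the camera with a preferred resolution and handles both outcomes of the promise (the constraint values and the selector used here are only illustrative):

// Request audio plus video with a preferred resolution (values are illustrative)
navigator.mediaDevices.getUserMedia({
   audio: true,
   video: { width: 1280, height: 720 }
})
.then((stream) => {
   // Permission granted: show the live stream in an existing <video> element
   document.querySelector("video").srcObject = stream;
})
.catch((err) => {
   // Permission denied or no matching device available
   console.error("getUserMedia error:", err);
});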
Front-end Design
Our front end will contain the following elements:
For video recording, the screen will contain:
A video element to display the live video stream
A start button to begin video recording
A stop button to stop the video recording stream.
For audio recording, there will also be two buttons:
A start button to begin audio recording
A stop button to stop the audio recording stream.
We will add the Font Awesome CDN to provide icons for the start and stop buttons, and to make the page more attractive we will add some CSS styling to the elements.
HTML Code
Example
<!DOCTYPE html>
<html>
<head>
   <title>Video & Audio Recorder</title>
   <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
   <style>
      body {
         text-align: center;
         color: red;
         font-size: 1.2em;
      }
      /* styling of start and stop buttons */
      #video_st, #video_en, #aud_st, #aud_en {
         margin-top: 10px;
         padding: 10px;
         border-radius: 4px;
         cursor: pointer;
      }
      #vidBox {
         background-color: grey;
      }
      /* video box styling */
      video {
         background-color: gray;
         display: block;
         margin: 6px auto;
         width: 520px;
         height: 240px;
      }
      /* audio box styling */
      audio {
         display: block;
         margin: 6px auto;
      }
      a {
         color: green;
      }
   </style>
</head>
<body>
   <h1 style="color:blue">Video-Audio recorder</h1>
   <div class="display-none" id="vid-recorder">
      <h3>Record Video</h3>
      <video autoplay id="vidBox"></video>
      <!-- click this button to start video recording -->
      <button type="button" id="video_st" onclick="start_video_Recording()">
         <i class="fa fa-play"></i>
      </button>
      <!-- click this button to stop video recording -->
      <button type="button" id="video_en" disabled onclick="stop_Recording(this, document.getElementById('video_st'))">
         <i class="fa fa-stop"></i>
      </button>
   </div>
   <!-- ------------ -->
   <br>
   <hr>
   <!-- ------------ -->
   <div class="display-none" id="audio_rec">
      <h3>Record Audio</h3>
      <!-- click this button to start audio recording -->
      <button type="button" id="aud_st" onclick="start_audio_Recording()">
         <i class="fa fa-play"></i>
      </button>
      <!-- click this button to stop audio recording -->
      <button type="button" id="aud_en" disabled onclick="stop_Recording(this, document.getElementById('aud_st'))">
         <i class="fa fa-stop"></i>
      </button>
   </div>
</body>
</html>
When you click the video start button, it calls the start_video_Recording() function, and the video stop button calls the stop_Recording() function. Similarly, for audio, clicking the start button triggers the start_audio_Recording() function and the stop button calls the stop_Recording() function.
The start_video_Recording() function
Let's define a function that starts the video stream and records it.
function start_video_Recording() {
   // stores the recorded media
   let chunksArr = [];
   const startBtn = document.getElementById("video_st");
   const endBtn = document.getElementById("video_en");

   // permission to access camera and microphone
   navigator.mediaDevices.getUserMedia({ audio: true, video: true })
   .then((mediaStreamObj) => {
      // Create a new MediaRecorder instance
      const medRec = new MediaRecorder(mediaStreamObj);
      window.mediaStream = mediaStreamObj;
      window.mediaRecorder = medRec;
      medRec.start();

      // when recorded data is available then push it into the chunksArr array
      medRec.ondataavailable = (e) => { chunksArr.push(e.data); };

      // stop the video recording
      medRec.onstop = () => {
         const blobFile = new Blob(chunksArr, { type: "video/mp4" });
         chunksArr = [];

         // create a video element and store the media which is recorded
         const recMediaFile = document.createElement("video");
         recMediaFile.controls = true;
         const RecUrl = URL.createObjectURL(blobFile);

         // keep the recorded url as source
         recMediaFile.src = RecUrl;
         document.getElementById("vid-recorder").append(recMediaFile);
      };

      document.getElementById("vidBox").srcObject = mediaStreamObj;

      // disable the start button and enable the stop button
      startBtn.disabled = true;
      endBtn.disabled = false;
   });
}
When the start button is pressed, it calls the function above, which uses the WebRTC getUserMedia() method to ask for permission to access the camera and microphone, and then enables the stop button and disables the start button.
When the stop button is pressed, it calls the stop_Recording() function, which stops the recorder and all tracks of the media stream.
To record the media stream, we create a MediaRecorder instance and store both the media stream and the media recorder as global variables. Stopping the recording stops the media stream, and the onstop handler bundles the recorded chunks into a Blob, creates a new video element, and sets the recorded media data as its source.
The start_audio_Recording() function is similar to start_video_Recording(), with a few necessary changes: it requests only the microphone with {audio: true, video: false}, bundles the recorded chunks into an audio/mpeg Blob, and appends an audio element instead of a video element, as sketched below.
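In outline, and assuming the same chunksArr, startBtn and endBtn setup as in the video version, the audio recorder differs only in the constraints it passes, the type of Blob it builds, and the element it appends (the full function appears in the complete example further below):

// Request the microphone only, instead of camera plus microphone
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
.then((mediaStream) => {
   const medRec = new MediaRecorder(mediaStream);
   window.mediaStream = mediaStream;
   window.mediaRecorder = medRec;
   medRec.start();
   medRec.ondataavailable = (e) => { chunksArr.push(e.data); };
   medRec.onstop = () => {
      // bundle the recorded chunks as audio and append an <audio> element
      const blob = new Blob(chunksArr, { type: "audio/mpeg" });
      const recMediaFile = document.createElement("audio");
      recMediaFile.controls = true;
      recMediaFile.src = URL.createObjectURL(blob);
      document.getElementById("audio_rec").append(recMediaFile);
   };
});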
The stop_Recording() function
Now let's define a function that stops the recording.
function stop_Recording(end, start) {
   window.mediaRecorder.stop();

   // stop all tracks
   window.mediaStream.getTracks()
   .forEach((track) => { track.stop(); });

   // disable the stop button and enable the start button
   end.disabled = true;
   start.disabled = false;
}
This function stops the MediaRecorder and every track stored in the media stream, which turns off the camera and microphone, and then re-enables the start button.
Example
Let's add the functions above to the HTML code to make the video and audio recording functional.
<!DOCTYPE html>
<html>
<head>
   <title>Video & Audio Recorder</title>
   <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
   <style>
      body {
         text-align: center;
         color: red;
         font-size: 1.2em;
      }
      /* video start & end, audio start & end button styling */
      #video_st, #video_en, #aud_st, #aud_en {
         margin-top: 10px;
         padding: 10px;
         border-radius: 4px;
         cursor: pointer;
      }
      #vidBox {
         background-color: grey;
      }
      video {
         background-color: gray;
         display: block;
         margin: 6px auto;
         width: 420px;
         height: 240px;
      }
      audio {
         display: block;
         margin: 6px auto;
      }
      a {
         color: green;
      }
   </style>
</head>
<body>
   <h1 style="color:blue">Video-Audio recorder</h1>
   <div class="display-none" id="vid-recorder">
      <h3>Record Video</h3>
      <video autoplay id="vidBox"></video>
      <button type="button" id="video_st" onclick="start_video_Recording()">
         <i class="fa fa-play"></i>
      </button>
      <button type="button" id="video_en" disabled onclick="stop_Recording(this, document.getElementById('video_st'))">
         <i class="fa fa-stop"></i>
      </button>
   </div>
   <!-- ------------ -->
   <br>
   <hr>
   <!-- ------------ -->
   <div class="display-none" id="audio_rec">
      <h3>Record Audio</h3>
      <button type="button" id="aud_st" onclick="start_audio_Recording()">
         <i class="fa fa-play"></i>
      </button>
      <button type="button" id="aud_en" disabled onclick="stop_Recording(this, document.getElementById('aud_st'))">
         <i class="fa fa-stop"></i>
      </button>
   </div>
   <script>
      //----------------------Video-------------------------------------
      function start_video_Recording() {
         // To store the recorded media
         let chunks = [];
         const startBtn = document.getElementById("video_st");
         const endBtn = document.getElementById("video_en");

         // Access the camera and microphone
         navigator.mediaDevices.getUserMedia({ audio: true, video: true })
         .then((mediaStreamObj) => {
            // Create a new MediaRecorder instance
            const medRec = new MediaRecorder(mediaStreamObj);
            window.mediaStream = mediaStreamObj;
            window.mediaRecorder = medRec;
            medRec.start();

            // when recorded data is available then push it into the chunks array
            medRec.ondataavailable = (e) => { chunks.push(e.data); };

            // stop the video recording
            medRec.onstop = () => {
               const blobFile = new Blob(chunks, { type: "video/mp4" });
               chunks = [];

               // create a video element and store the media which is recorded
               const recMediaFile = document.createElement("video");
               recMediaFile.controls = true;
               const RecUrl = URL.createObjectURL(blobFile);

               // keep the recorded url as source
               recMediaFile.src = RecUrl;
               document.getElementById("vid-recorder").append(recMediaFile);
            };

            document.getElementById("vidBox").srcObject = mediaStreamObj;
            startBtn.disabled = true;
            endBtn.disabled = false;
         });
      }

      //--------------------Audio---------------------------------------
      function start_audio_Recording() {
         // To store the recorded media
         let chunksArr = [];
         const startBtn = document.getElementById("aud_st");
         const endBtn = document.getElementById("aud_en");

         // Access the microphone only
         navigator.mediaDevices.getUserMedia({ audio: true, video: false })
         .then((mediaStream) => {
            const medRec = new MediaRecorder(mediaStream);
            window.mediaStream = mediaStream;
            window.mediaRecorder = medRec;
            medRec.start();

            // when recorded data is available then push it into the chunksArr array
            medRec.ondataavailable = (e) => { chunksArr.push(e.data); };

            // stop the audio recording
            medRec.onstop = () => {
               const blob = new Blob(chunksArr, { type: "audio/mpeg" });
               chunksArr = [];

               // create an audio element and store the media which is recorded
               const recMediaFile = document.createElement("audio");
               recMediaFile.controls = true;
               const RecUrl = URL.createObjectURL(blob);
               recMediaFile.src = RecUrl;
               document.getElementById("audio_rec").append(recMediaFile);
            };

            startBtn.disabled = true;
            endBtn.disabled = false;
         });
      }

      function stop_Recording(end, start) {
         // stop the recorder and all tracks
         window.mediaRecorder.stop();
         window.mediaStream.getTracks()
         .forEach((track) => { track.stop(); });

         // disable the stop button and enable the start button
         end.disabled = true;
         start.disabled = false;
      }
   </script>
</body>
</html>
As you can see from the output, when the video start button is clicked, it calls the start_video_Recording() function. Inside that function the navigator.mediaDevices.getUserMedia() method is called, which opens a permission prompt requesting access to the camera and microphone. It returns a promise that resolves with the media stream. Once the audio or video media stream is received, a MediaRecorder instance is created and recording starts by calling the medRec.start() function, as in the code above.
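If you also want to let the user save the recording, one optional extension (not part of the listing above; the link text and file name here are only illustrative) is to create a download link from the same object URL inside the onstop handler:

// Optional: offer the recorded Blob as a download (illustrative extension)
const downloadLink = document.createElement("a");
downloadLink.href = RecUrl;               // the same object URL used as the media source
downloadLink.download = "recording.mp4";  // hypothetical file name
downloadLink.textContent = "Download recording";
document.getElementById("vid-recorder").append(downloadLink);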
With that, you have seen the complete process of creating a video and audio recorder with WebRTC and the MediaRecorder API.