I want to capture a webcam video stream and stream it directly to S3 storage.
I've learned that you can upload to S3 via a stream: https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
I've learned that you can upload via the browser: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html#HTTPPOSTExamplesFileUpload
But I'm still lost on how to actually do it.
I need an example of someone uploading a getUserMedia stream to S3 like above.
Buffer, binary data, multipart upload, stream... this is all beyond my knowledge. Stuff I wish I knew, but I don't even know where to learn it.
Currently, you cannot simply pass the media stream to any S3 method to do the multipart upload automatically.
But there is an event called dataavailable which produces a chunk of video at each given time interval. So we can subscribe to dataavailable and do the S3 Multipart Upload manually.
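To show just that idea in isolation, here is a minimal sketch (assuming stream is the MediaStream you got from getUserMedia; the full version below wires this into the upload):

// A minimal sketch: every 15 seconds MediaRecorder hands us a Blob,
// and each Blob becomes one part of the S3 multipart upload.
const recorder = new MediaRecorder(stream);
recorder.ondataavailable = event => {
  console.log("Received a video chunk of", event.data.size, "bytes");
};
recorder.start(15000); // the timeslice: emit 'dataavailable' every 15000 ms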
This approach brings some complications: say chunks of video are generated every 1 second, but we don't know how long it takes to upload a chunk to S3. E.g. the upload can take 3 times longer due to the connection speed, so we could get stuck trying to make multiple PUT requests at the same time.
The potential solution is to upload the chunks one by one and not start uploading the next chunk until the previous one is uploaded. Here is a snippet of how this can be handled using Rx.js and the AWS SDK. Please see my comments.
// Configure the AWS SDK. In this case, for simplicity, I'm using an access key and secret.
AWS.config.update({
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY",
    secretAccessKey: "YOUR_SECRET_KEY",
    region: "us-east-1"
  }
});

const s3 = new AWS.S3();
const BUCKET_NAME = "video-uploads-123";
let videoStream;

// We want to see what the camera is recording, so attach the stream to a video element.
navigator.mediaDevices
  .getUserMedia({
    audio: true,
    video: { width: 1280, height: 720 }
  })
  .then(stream => {
    console.log("Successfully received user media.");
    const $mirrorVideo = document.querySelector("video#mirror");
    $mirrorVideo.srcObject = stream;
    // Save the stream so we can create the MediaRecorder later.
    videoStream = stream;
  })
  .catch(error => console.error("navigator.getUserMedia error: ", error));

let mediaRecorder;

const $startButton = document.querySelector("button#start");
$startButton.onclick = () => {
  // Get the MediaRecorder instance.
  // I took the snippet from here: https://github.com/webrtc/samples/blob/gh-pages/src/content/getusermedia/record/js/main.js
  let options = { mimeType: "video/webm;codecs=vp9" };
  if (!MediaRecorder.isTypeSupported(options.mimeType)) {
    console.log(options.mimeType + " is not supported");
    options = { mimeType: "video/webm;codecs=vp8" };
    if (!MediaRecorder.isTypeSupported(options.mimeType)) {
      console.log(options.mimeType + " is not supported");
      options = { mimeType: "video/webm" };
      if (!MediaRecorder.isTypeSupported(options.mimeType)) {
        console.log(options.mimeType + " is not supported");
        options = { mimeType: "" };
      }
    }
  }
  try {
    mediaRecorder = new MediaRecorder(videoStream, options);
  } catch (e) {
    console.error("Exception while creating MediaRecorder: " + e);
    return;
  }

  // Generate the file name to upload. For simplicity, we're going to use the current date.
  const s3Key = `video-file-${new Date().toISOString()}.webm`;
  const params = {
    Bucket: BUCKET_NAME,
    Key: s3Key
  };
  let uploadId;

  // We are going to handle everything as a chain of Observable operators.
  Rx.Observable
    // First create the multipart upload and wait until it's created.
    .fromPromise(s3.createMultipartUpload(params).promise())
    .switchMap(data => {
      // Save the uploadId as we'll need it to complete the multipart upload.
      uploadId = data.UploadId;
      mediaRecorder.start(15000);
      // Then track all 'dataavailable' events. Each event brings a blob (binary data) with a part of the video.
      return Rx.Observable.fromEvent(mediaRecorder, "dataavailable");
    })
    // Track the 'dataavailable' events until the 'stop' event is fired.
    // MediaRecorder emits "stop" after it has been stopped AND has emitted all "dataavailable" events,
    // so we are not losing data. See the docs here: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder/stop
    .takeUntil(Rx.Observable.fromEvent(mediaRecorder, "stop"))
    .map((event, index) => {
      // Show how much binary data we have recorded.
      const $bytesRecorded = document.querySelector("span#bytesRecorded");
      $bytesRecorded.textContent =
        parseInt($bytesRecorded.textContent) + event.data.size; // Use frameworks in prod. This is just an example.
      // Take the blob and its part number and pass them down.
      return { blob: event.data, partNumber: index + 1 };
    })
    // This operator means the following: when you receive a blob, start uploading it,
    // and don't accept any other uploads until you finish uploading: http://reactivex.io/rxjs/class/es6/Observable.js~Observable.html#instance-method-concatMap
    .concatMap(({ blob, partNumber }) => {
      return (
        s3
          .uploadPart({
            Body: blob,
            Bucket: BUCKET_NAME,
            Key: s3Key,
            PartNumber: partNumber,
            UploadId: uploadId,
            ContentLength: blob.size
          })
          .promise()
          // Save the ETag as we'll need it to complete the multipart upload.
          .then(({ ETag }) => {
            // Show how many bytes we have uploaded.
            const $bytesUploaded = document.querySelector("span#bytesUploaded");
            $bytesUploaded.textContent =
              parseInt($bytesUploaded.textContent) + blob.size;
            return { ETag, PartNumber: partNumber };
          })
      );
    })
    // Wait until all uploads are completed, then convert the results into an array.
    .toArray()
    // Call completeMultipartUpload and pass the part numbers and ETags to it.
    .switchMap(parts => {
      return s3
        .completeMultipartUpload({
          Bucket: BUCKET_NAME,
          Key: s3Key,
          UploadId: uploadId,
          MultipartUpload: {
            Parts: parts
          }
        })
        .promise();
    })
    .subscribe(
      ({ Location }) => {
        // completeMultipartUpload returns the location, so show it.
        const $location = document.querySelector("span#location");
        $location.textContent = Location;
        console.log("Uploaded successfully.");
      },
      err => {
        console.error(err);
        if (uploadId) {
          // Abort the multipart upload in case of any failure,
          // so we don't get charged for keeping it pending.
          s3
            .abortMultipartUpload({
              Bucket: BUCKET_NAME,
              UploadId: uploadId,
              Key: s3Key
            })
            .promise()
            .then(() => console.log("Multipart upload aborted"))
            .catch(e => console.error(e));
        }
      }
    );
};

const $stopButton = document.querySelector("button#stop");
$stopButton.onclick = () => {
  // After we call .stop(), MediaRecorder is going to emit all the data it has via 'dataavailable'
  // and then finish our stream by emitting the 'stop' event.
  mediaRecorder.stop();
};
button {
  margin: 0 3px 10px 0;
  padding-left: 2px;
  padding-right: 2px;
  width: 99px;
}
button:last-of-type {
  margin: 0;
}
p.borderBelow {
  margin: 0 0 20px 0;
  padding: 0 0 20px 0;
}
video {
  height: 232px;
  margin: 0 12px 20px 0;
  vertical-align: top;
  width: calc(20em - 10px);
}
video:last-of-type {
  margin: 0 0 20px 0;
}
<div id="container">
<video id="mirror" autoplay muted></video>
<div>
<button id="start">Start Streaming</button>
<button id="stop">Stop Streaming</button>
</div>
<div>
<span>Recorded: <span id="bytesRecorded">0</span> bytes</span>;
<span>Uploaded: <span id="bytesUploaded">0</span> bytes</span>
</div>
<div>
<span id="location"></span>
</div>
</div>
<!-- include adapter for srcObject shim -->
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<script src="https://cdnjs.cloudflare./ajax/libs/aws-sdk/2.175.0/aws-sdk.js"></script>
<script src="https://cdnjs.cloudflare./ajax/libs/rxjs/5.5.6/Rx.js"></script>
Caveats:
- All multipart uploads need to be either completed or aborted. You will be charged if you leave one pending forever. See the "Note" in the S3 multipart upload documentation.
- Each chunk that you upload (except the last one) must be at least 5 MB, or an error will be thrown. See the multipart upload limits in the S3 documentation. So you need to adjust the timeslice/resolution, or buffer chunks until they reach the minimum size (see the sketch after the CORS example below).
- When you are instantiating the SDK, make sure the credentials you use are attached to a policy that grants the s3:PutObject permission (an example policy is shown below).
- You need to expose the ETag in your bucket CORS configuration. Here is an example CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<ExposeHeader>ETag</ExposeHeader>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
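Regarding the s3:PutObject caveat, a minimal policy could look like the following. This is just a sketch: the bucket name is a placeholder, and s3:AbortMultipartUpload is added so the error handler above is allowed to abort the pending upload:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:AbortMultipartUpload"],
      "Resource": "arn:aws:s3:::video-uploads-123/*"
    }
  ]
}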
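And regarding the 5 MB minimum: one way to satisfy it without raising the resolution is to accumulate the recorded blobs until they reach the minimum part size. This is a hedged sketch, not part of the snippet above; uploadBufferedPart is a hypothetical helper that would wrap s3.uploadPart the same way as in the main example:

// Buffer chunks until they reach the 5 MB minimum part size.
const MIN_PART_SIZE = 5 * 1024 * 1024; // S3 minimum for every part except the last
let bufferedBlobs = [];
let bufferedSize = 0;

mediaRecorder.ondataavailable = event => {
  bufferedBlobs.push(event.data);
  bufferedSize += event.data.size;
  if (bufferedSize >= MIN_PART_SIZE) {
    // Merge the accumulated chunks into a single Blob and upload it as one part.
    const part = new Blob(bufferedBlobs);
    bufferedBlobs = [];
    bufferedSize = 0;
    uploadBufferedPart(part); // hypothetical helper wrapping s3.uploadPart
  }
  // Whatever is left when the recorder stops can go up as the (smaller) last part.
};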
Limitations:
- Be careful, as the MediaRecorder API is still not universally supported. Make sure you check caniuse.com before using it in production. A quick runtime feature check (sketched below) also helps you fail gracefully.
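As an illustration (my addition, not from the original answer), such a check could look like this:

// Bail out early in browsers without MediaRecorder support.
if (typeof MediaRecorder === "undefined") {
  console.error("MediaRecorder is not supported in this browser.");
  // Fall back to another capture/upload strategy or show a message to the user.
} else if (!MediaRecorder.isTypeSupported("video/webm")) {
  console.warn("video/webm is not supported; MediaRecorder will pick a default container.");
}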