I'm trying to visualize the waveform data of an audio buffer on an HTML5 canvas.
I used buffer.getChannelData(0) to get the value of each sample, and then drew each sample value to the canvas with a for loop.
It's working now, but the problem is that it uses one pixel per sample, which means it needs a 44100px-wide canvas to draw one second of audio.
This is the code I'm using to draw:
for (var i = 0; i < data.length; i++) {
    ctx.fillStyle = 'black';
    ctx.fillRect(i, height / 2, 0.5, (data[i] * height));
}
And you can see the whole code here: http://jsfiddle.net/ehsanziya/KKNFL/1/
How can I scale it so the waveform fits canvas.width according to the loaded buffer size, every time a buffer is loaded?
Thanks
asked Jul 3, 2013 by zya, edited Jul 3, 2013
Comments:
- The following example code from MDN seems to do what you need: developer.mozilla.org/en-US/docs/Visualizing_Audio_Spectrum – markE, Jul 3, 2013
- @markE That is about spectrum analysis and visualization; my question is about waveform data visualization. I found code that does it (github.com/cwilso/Audio-Buffer-Draw) but did not understand how he did the scaling. – zya, Jul 3, 2013
- I'm not an audiophile, so sorry, your distinction just flew right over my head :( But it seems from their code that they sort the large incoming data into bins and just display the aggregated bin information rather than every single datapoint. That's how they dealt with their scaling issue. – markE, Jul 3, 2013
5 Answers
Let's say your canvas has a width of 500, and your audio is 60 seconds long with a sample rate of 44100 Hz (44.1 kHz)...
What you need to do is figure out how many samples are going to represent a single pixel on your canvas. To do that, you basically say var binSize = ( audioBuffer.duration * 44100 ) / 500;
Now you iterate over all the samples and group them into sections of binSize
length. For each bin, you find the max value and push it into an array. Once you're done, you'll have an array with 500 values that you can use to draw into your canvas.
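As a sketch of that binning step (`buildPeaks` is an illustrative name, not from the original answer; it assumes `data` comes from `buffer.getChannelData(0)`):

```javascript
// Illustrative sketch of the binning described above: reduce the raw
// sample array to one peak value per horizontal canvas pixel.
function buildPeaks(data, width) {
    var binSize = Math.floor(data.length / width);
    var peaks = [];
    for (var i = 0; i < width; i++) {
        var max = 0;
        // Scan this bin for its largest absolute sample value
        for (var j = i * binSize; j < (i + 1) * binSize; j++) {
            var v = Math.abs(data[j]);
            if (v > max) max = v;
        }
        peaks.push(max);
    }
    return peaks; // e.g. 500 values for a 500px-wide canvas
}
```

You can then draw one vertical line per peak, centred on height / 2, instead of one fillRect per sample.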
I've modified your JSFiddle and posted a fork here:
http://jsfiddle.net/jjDpm/4/
Here's the key line:
ctx.lineTo(i, height/2 + data[step*i] * amp);
The problem with scaling on the X-axis is resolved by coming up with a "step" factor and then using that to move through the data at an appropriate rate. The idea is that you won't be able to graph every single data point, but you'll essentially graph an evenly spaced (by step) set of samples from the data.
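A sketch of how that step factor might be computed (the `samplePoints` helper is hypothetical; the fork itself draws directly on the canvas context):

```javascript
// Pick every step-th sample so the whole buffer spans the canvas width.
function samplePoints(data, width, height) {
    var step = Math.ceil(data.length / width);
    var amp = height / 2; // scale factor: samples are in [-1, 1]
    var points = [];
    for (var i = 0; i < width; i++) {
        points.push({ x: i, y: height / 2 + data[step * i] * amp });
    }
    return points; // feed each point to ctx.lineTo(p.x, p.y)
}
```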
I wrote a sample library to do this: https://github.com/cwilso/Audio-Buffer-Draw.
I've written a simple ADSR synth, allowing the user to enter the parameters for an instrument that will then be synthesized, making its data available for visualization or playback. I also used a 44100 Hz sample rate.
The key to getting a good-looking image is to realize that you don't want to plot just once for each horizontal pixel, but rather once for each sample. In the case of, say, a 0.2s sample and a 256px-wide canvas, you'll need to fit 8,820 samples into 256 pixels - ~34 samples per pixel. It seems like overkill, but it's the only way to get a visualization that doesn't miss data. It's also the way to get an image that resembles those found in sound-editing programs, e.g. Audacity, MilkyTracker, etc.
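The arithmetic above can be checked directly (values taken from the example in this answer):

```javascript
// Samples-per-pixel for a 0.2s clip on a 256px-wide canvas at 44100 Hz.
var sampleRate = 44100;                            // Hz
var clipDuration = 0.2;                            // seconds
var canvasWidth = 256;                             // px
var totalSamples = sampleRate * clipDuration;      // ~8820 samples
var samplesPerPixel = totalSamples / canvasWidth;  // ~34.5, i.e. ~34 samples per pixel
```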
(Image: a hi-hat drum waveform, Fmax > 10,000 Hz)
EDIT: image added to show that calling closePath does not add a line from the end-point back to the starting point.
Here's the visualization code I use:
function drawFloatArray(samples, canvas)
{
    var i, x, y, n = samples.length;
    var dur = (n / 44100 * 1000) >> 0;   // duration in ms, assuming 44100 Hz
    canvas.title = 'Duration: ' + dur / 1000.0 + 's';
    var width = canvas.width, height = canvas.height;
    var ctx = canvas.getContext('2d');
    ctx.strokeStyle = 'yellow';
    ctx.fillStyle = '#303030';
    ctx.fillRect(0, 0, width, height);   // clear to the background colour
    ctx.beginPath();
    ctx.moveTo(0, height / 2);
    for (i = 0; i < n; i++)
    {
        x = (i * width) / n;                          // map sample index to x
        y = (samples[i] * height / 2) + height / 2;   // map [-1, 1] to canvas y
        ctx.lineTo(x, y);
    }
    ctx.stroke();
    ctx.closePath();
    canvas.mBuffer = samples;   // keep a reference for click-to-play
    canvas.onclick = function() { playSound(this.mBuffer, 44100, 50); };
}
The solution written by enhzflep is the correct way, but with audio files that have large buffer arrays, you will need to chunk the processing when iterating over the buffer's length to avoid locking up the browser. Rather than using a setTimeout for every chunk, set a variable to a value that can be checked once the chunk process is completed. The way I did it was like this:
function processChunk(ctx, cwidth, cheight) {
    var chsize = 6000;   // samples to process per chunk
    var index = 0;
    var chComplete;
    function doChunk() {
        chComplete = 0;  // mark chunk as in progress
        var chunk = chsize;
        while (chunk-- && index < data.length) {
            var x = (index * cwidth) / data.length;
            var y = (data[index] * cheight / 2) + cheight / 2;
            ctx.lineTo(x, y);
            index++;
        }
        chComplete = 1;  // mark chunk as finished
        if (index < data.length) {
            checkChunk();
        }
    }
    function checkChunk() {
        if (chComplete == 1) {
            doChunk();
        } else {
            // chunk still in progress; check again later
            setTimeout(checkChunk, 500);
        }
    }
    doChunk();
}