I need help. I am building a musical application that uses convolution reverb. I want to use the spatialization abilities of three.js, but I can't manage to connect a ConvolverNode to my sound. I can't use the audio element, since I am going to be using about 10,000 different sound samples.
I have already built a working prototype using plain stereo, and I was hoping to build a 3D version this time to improve the experience.
You can see the demo here
const canvas = document.querySelector('canvas.webgl')

const scene = new THREE.Scene()
const camera = new THREE.PerspectiveCamera(75, sizes.width / sizes.height)
scene.add(camera)

// The listener must exist before the positional audio is created,
// and it should be attached to the camera
const listener = new THREE.AudioListener()
camera.add(listener)

// Create a positional audio object
const sound = new THREE.PositionalAudio(listener)

// Load an audio file
const audioLoader = new THREE.AudioLoader()
audioLoader.load('062-D.wav', function (buffer) {
  sound.setBuffer(buffer)
  sound.setRefDistance(20)
  sound.setVolume(0.2)
  sound.play()
})

// Create a mesh to attach the positional audio to
const sphereGeometry = new THREE.SphereGeometry(2, 32, 16)
const sphereMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 })
const sphere = new THREE.Mesh(sphereGeometry, sphereMaterial)
sphere.add(sound) // Attach positional audio to the mesh
sphere.position.set(0.3, 0.1, -4)
scene.add(sphere)

// camera.position.set(1, 1, 0.3)
const renderer = new THREE.WebGLRenderer({ canvas: canvas })
renderer.setSize(sizes.width, sizes.height)
renderer.render(scene, camera)
I just need a way to connect the sound buffer to a ConvolverNode: basically wiring the buffer loaded by THREE.AudioLoader into the reverb using the regular Web Audio API, since three.js doesn't seem to expose any convolution mechanism of its own.
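Is something like the following the right approach? This is just a sketch of what I have in mind: `THREE.Audio` exposes the underlying AudioContext via `sound.context` and has a `setFilters()` method that splices extra nodes into its chain, so in theory I could build the ConvolverNode myself and hand it over. Here `'impulse-response.wav'` is a placeholder name for my impulse response file.

```javascript
// Create a convolver on the same AudioContext that three.js is using
const convolver = sound.context.createConvolver();

// THREE.AudioLoader decodes straight to an AudioBuffer, so it can
// load the impulse response as well as the samples
audioLoader.load('impulse-response.wav', function (irBuffer) {
  convolver.buffer = irBuffer;

  // setFilters() inserts the node(s) into the sound's internal chain,
  // so the convolved signal should still go through the panner
  sound.setFilters([convolver]);
});
```

I'm not sure whether `setFilters()` places the convolver before or after the spatialization, or whether there is a better-supported way to do this.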