I have two cameras in a 3D application, each one capturing its view and applying it as a texture on a flat plane.
The cameras are both at 0,0,0, and each is rotated to aim at the center of its respective plane object.
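For reference, this is roughly how I set the cameras up now. It's a minimal numpy sketch rather than my actual code, and the plane centers are dummy values just for illustration:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """World-to-camera rotation for a camera at `eye` facing `target`
    (right-handed, camera looks down -Z; assumes target isn't straight up)."""
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    return np.stack([right, true_up, -fwd])  # rows = camera basis vectors

def view_matrix(eye, target):
    """Full 4x4 view matrix (rotation plus translation)."""
    V = np.eye(4)
    R = look_at(eye, target)
    V[:3, :3] = R
    V[:3, 3] = -R @ eye
    return V

eye = np.zeros(3)                             # both cameras at the origin
plane_a_center = np.array([-2.0, 0.0, -5.0])  # dummy positions for illustration
plane_b_center = np.array([ 2.0, 0.0, -5.0])
view_a = view_matrix(eye, plane_a_center)     # aim camera A at its plane
view_b = view_matrix(eye, plane_b_center)
```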
The goal now is to apply projection math to warp the images before they are applied to the plane objects, creating a seamless join between them, as if they were a single image on a single object projected from the camera viewpoint.
I am, however, lost here. What data do I need to calculate this? I can access: camera position, FOV, aperture, and focal length; plane object vertex and UV information.
What does this look like in pseudo-code? What is the technique I am looking for called?
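For what it's worth, here is my rough, untested guess at the warp step, reusing `view_matrix`, `eye`, and the plane centers from the snippet above: project every plane vertex through one *shared* virtual camera and use the result as that vertex's UV. The FOV and aspect values here are guesses, and I don't know if this is even the right idea:

```python
def perspective(fov_y_deg, aspect, near=0.1, far=100.0):
    """Standard OpenGL-style perspective projection from a vertical FOV."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def vertex_to_uv(world_pos, view, proj):
    """Project a world-space vertex into the shared camera's image, then
    remap NDC [-1, 1] to UV [0, 1]. (A Y flip may be needed depending on
    the texture convention.)"""
    clip = proj @ view @ np.append(world_pos, 1.0)
    ndc = clip[:3] / clip[3]          # perspective divide
    return (ndc[:2] + 1.0) * 0.5

# One virtual camera covering BOTH planes, so the UVs agree at the seam?
shared_view = view_matrix(eye, (plane_a_center + plane_b_center) / 2.0)
shared_proj = perspective(fov_y_deg=90.0, aspect=1.0)  # guessed values
uv = vertex_to_uv(plane_a_center, shared_view, shared_proj)  # per vertex
```

Note this only computes UVs per vertex; I suspect how the texture is interpolated across each plane also matters, but that is part of what I'm unsure about.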
Is having all the cameras at the same point the right approach, or is there a 'sweet spot' for each camera relative to its plane?
(I am working in Python, but could equally work in C++.)