I am currently training a physics-informed neural network (PINN) on wave propagation data. The training input has shape (193524369, 3). I am using a batch size of 2048 and 300 epochs, with 20,000 batches per epoch. In other words, each epoch the model sees 20,000 batches of 2048 samples each, which is roughly 10% of my whole dataset. The code runs on a Tesla V100-PCIE-32GB GPU.
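To make the setup concrete, here is a simplified sketch of the training loop. PyTorch is used here only for illustration, and the data array, architecture, and loss below are placeholders rather than my actual code:

```python
import numpy as np
import torch

device = torch.device("cuda")                      # Tesla V100-PCIE-32GB
batch_size = 2048
batches_per_epoch = 20_000
epochs = 300

# Stand-in data and model: the real input has shape (193524369, 3); the real
# PINN architecture and loss are not shown here.
x_train = np.random.rand(1_000_000, 3).astype(np.float32)
model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(epochs):
    for _ in range(batches_per_epoch):
        # Sample a random mini-batch of indices rather than shuffling all rows.
        idx = np.random.randint(0, x_train.shape[0], size=batch_size)
        xb = torch.from_numpy(x_train[idx]).to(device)
        optimizer.zero_grad()
        loss = model(xb).pow(2).mean()             # placeholder for the PINN/PDE residual loss
        loss.backward()
        optimizer.step()
```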
Training is extremely slow and will probably take months to finish. However, the network itself is not that heavy in terms of nodes and layers. How can I make the training faster?
I am a bit clueless about how to speed up the training process. Increasing the batch size might help somewhat, but I worry it will reduce accuracy. How should I optimize the training so that it runs faster while keeping the accuracy high?
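For concreteness, this is the kind of change I mean by increasing the batch size: the same simplified sketch as above, with a 4x larger batch, 4x fewer batches per epoch, and the learning rate scaled up proportionally (a common heuristic for large-batch training, though I am not sure how well it carries over to PINNs). Again, the framework, model, and numbers are illustrative assumptions, not my actual code:

```python
import numpy as np
import torch

device = torch.device("cuda")
x_train = np.random.rand(1_000_000, 3).astype(np.float32)   # stand-in; real shape is (193524369, 3)
model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
).to(device)

batch_size = 8192                                  # hypothetical 4x increase over my current 2048
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3 * 4)   # learning rate scaled with the batch size

for _ in range(5_000):                             # 4x fewer batches cover the same samples per epoch
    idx = np.random.randint(0, x_train.shape[0], size=batch_size)
    xb = torch.from_numpy(x_train[idx]).to(device)
    optimizer.zero_grad()
    loss = model(xb).pow(2).mean()                 # placeholder for the actual PINN loss
    loss.backward()
    optimizer.step()
```

Is this kind of batch-size/learning-rate trade-off the right direction, or are there better ways to use the GPU more efficiently without losing accuracy?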