I'm training my model on a Kaggle GPU, but training is taking too long, so I decided to switch to a TPU. I enabled the TPU accelerator in the notebook settings and ran the following code to initialize it:
```python
import tensorflow as tf

try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # Detect TPU
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.TPUStrategy(tpu)
    print("TPU initialized")
except ValueError:
    strategy = tf.distribute.get_strategy()  # Default strategy for CPU and single GPU
    print("TPU not found, using default strategy")
```
However, I keep getting this output:

```
TPU not found, using default strategy
```
What I Have Checked:
- TPU is enabled in the notebook settings
- I still have TPU quota available this week
- Restarted the session and tried again
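To see which accelerators the runtime actually exposes, I can also run a quick device check (standard `tf.config` calls, nothing Kaggle-specific):

```python
import tensorflow as tf

# On a healthy TPU session this should list TPU logical devices
# (eight cores on a v3-8). Note that TPU devices may only appear
# after initialize_tpu_system() has run successfully.
print("TPU devices:", tf.config.list_logical_devices("TPU"))
print("GPU devices:", tf.config.list_logical_devices("GPU"))
print("All devices:", tf.config.list_logical_devices())
```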
Additionally, the Draft Session panel says:

> Utilization is not currently available for TPU VMs.
Has Kaggle stopped providing TPU access, or am I missing a step in my setup? If TPUs are still available, how can I get this to work?