Thank you very much for reading my question, and sorry if it is an obvious one.
Using Anaconda Navigator, I pip-installed OpenAI's Whisper, an audio-to-text transformer model. In a Jupyter notebook, when I simply run the cell that loads the model, I get a summary of its modules, which is quite useful for getting to know what the model is:
However, with another pip-installed model, https://huggingface.co/breezedeus/pix2text-mfr, the difference I notice is that it is an optimum.onnxruntime model,
and when I do the same thing as above, it instead returns a memory location? Or does it?
Sorry if this is a simple question; I tried Googling a bit but don't quite know what keywords to search for ("onnx pytorch model summary"?). Is there a way to get a model summary like the one above?
Thank you very much for reading my question.
asked Jan 22 at 14:35 by Mickey Han

1 Answer
The ORT model output is just Python's default string representation of an object: the class name followed by the memory address. Both are valid Python objects, but the first model overrides the __str__() method so that printing it shows the layers.

The ORT model doesn't have the same __str__() override as the Whisper model because it is not a PyTorch-based model. Instead, it uses ONNX graph definitions and operators. You can inspect it with a visualization tool like Netron, or try printing the encoder and decoder graphs directly, though the output is not very readable:
import onnx
# Encoder model
encoder_onnx_model = onnx.load(model.encoder_model_path)
print(onnx.helper.printable_graph(encoder_onnx_model.graph))
# Decoder model
decoder_onnx_model = onnx.load(model.decoder_model_path)
print(onnx.helper.printable_graph(decoder_onnx_model.graph))
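The contrast between the two printouts can be reproduced in plain Python (the class and layer names below are invented for illustration): an object whose class relies on the default repr prints as a class name plus memory address, while a class that overrides __str__() — as PyTorch's nn.Module does — prints a readable summary.

```python
class PlainModel:
    """No __str__ override: printing falls back to the default repr,
    which is the class name plus the memory address."""
    pass


class SummarizedModel:
    """Overrides __str__ the way PyTorch's nn.Module does,
    returning a readable listing of sub-modules."""
    def __str__(self):
        return "SummarizedModel(\n  (encoder): ...\n  (decoder): ...\n)"


print(PlainModel())       # <__main__.PlainModel object at 0x...>
print(SummarizedModel())  # readable layer summary
```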
Comments:

The <... object at 0x...> output is showing you information about that Python object, model. It may not have special display handling encoded in your installed environment, and so Jupyter doesn't know how to display it other than this way. You could try display(model) instead of just model as the last line; however, because it is already the last expression in that cell, Jupyter should already be trying to do the equivalent of that. As for your main question, did you try applying the same approach to the second model? It doesn't look like you did. So instead ... – Wayne, Jan 22 at 17:10

Try import whisper on one line, and then on the line below that try whisper.load_model(model). To put it another way: you say "when I do the same thing as above", but in no way, according to Python, did you do the same thing. In the first screenshot you used whisper's load_model() method to target the model 'base'. In the second screenshot you don't involve whisper at all, and so they are not the same thing. Did you accidentally post the wrong screenshot? Even if you apply the same method, it may not work if the model is not the same type of data. – Wayne, Jan 22 at 17:17
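The first comment's point about display(model) can be sketched outside a notebook (the class names here are invented): IPython renders the last expression of a cell by looking for rich-repr hooks such as _repr_html_, and falls back to plain repr() when none exist, which is exactly the class-name-plus-address string the question describes.

```python
class RichModel:
    """Defines a rich-repr hook that Jupyter/IPython would pick up
    when this object is the last expression in a cell."""
    def _repr_html_(self):
        return "<b>RichModel</b>: encoder + decoder"


class BareModel:
    """No display hooks at all: Jupyter falls back to repr(),
    i.e. the class name plus memory address."""
    pass


# Approximate Jupyter's lookup by hand: prefer the rich hook,
# otherwise fall back to repr()
for obj in (RichModel(), BareModel()):
    hook = getattr(obj, "_repr_html_", None)
    print(hook() if hook else repr(obj))
```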