
pytorch - HuggingFace Model - OnnxRuntime - Jupyter Notebook Print Model Summary - Stack Overflow


Thank you very much for reading my question, and sorry if it is an obvious one.

I use Anaconda Navigator and pip-installed the Whisper model from OpenAI, which is an audio-to-text transformer model. In a Jupyter notebook, when I just run the cell containing the model, it prints this summary of modules, which is quite useful for getting to know what the model is:



However, with another pip-installed model, https://huggingface.co/breezedeus/pix2text-mfr, I notice the difference is that it uses optimum.onnxruntime,


and when I do the same thing as above, it instead returns a memory location? Or does it?

Sorry if it is a simple question. I tried googling a bit but don't quite know what keywords to search for ("onnx pytorch model summary"?). Is there a way to get a model summary like the one above?

Thank you very much for reading my question.


asked Jan 22 at 14:35 by Mickey Han
  • 1 The bottom screenshot, where you see information between < and >, is showing you that Python object's default representation. It may not have special display handling encoded in your installed environment, so Jupyter doesn't know how to display it any other way. You could try display(model) instead of just model as the last line; however, because it is already the last expression in that cell, Jupyter should already be doing the equivalent of that. As for your main question, did you try applying the same approach to the second model? It doesn't look like you did. So instead ... – Wayne Commented Jan 22 at 17:10
  • <continued> of the last line in your second cell, add import whisper on one line, and then on the line below try whisper.load_model(model). To put it another way: you say "when I do the same thing as above", but according to Python you did not do the same thing. In the first screenshot you used whisper's load_model() method to load the model 'base'. In the second screenshot you don't involve whisper at all, so they are not the same thing. Did you accidentally post the wrong screenshot? Even if you apply the method, it may not work if the model is not the same type of data. – Wayne Commented Jan 22 at 17:17

1 Answer


The ORT model output is just Python's default string representation of an object: the class name followed by its memory address. Both are valid Python objects, but the first model is a PyTorch nn.Module, which overrides the __repr__() method to show the layers when the model is displayed.
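The difference can be reproduced in a few lines of plain Python (the class names below are made up for illustration). An object without a custom __repr__ falls back to the default <ClassName object at 0x...> string, while a class that overrides __repr__ (as PyTorch's nn.Module does, recursing into its submodules) controls what Jupyter displays:

```python
class PlainModel:
    pass  # no __repr__ override: Python falls back to the default representation


class PrintableModel:
    def __repr__(self):
        # PyTorch's nn.Module does something similar, recursing into submodules
        return "PrintableModel(\n  (layer): Linear(in=4, out=2)\n)"


print(repr(PlainModel()))   # <__main__.PlainModel object at 0x...>
print(PrintableModel())     # the readable, layer-style summary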

The ORT model doesn't have that override, because it is not a PyTorch-based model; it wraps ONNX graph definitions and operators instead. You can visualize those graphs with a tool like Netron, or you can try printing the encoder and decoder graphs directly, though the output is not very readable:

import onnx

# Load the exported ONNX files that back the ORT model and pretty-print
# their graphs (the path attribute names may vary by optimum version).

# Encoder model
encoder_onnx_model = onnx.load(model.encoder_model_path)
print(onnx.helper.printable_graph(encoder_onnx_model.graph))

# Decoder model
decoder_onnx_model = onnx.load(model.decoder_model_path)
print(onnx.helper.printable_graph(decoder_onnx_model.graph))