
pyspark - How to use input_example in MLFlow logged ONNX model in Databricks to make predictions? - Stack Overflow


I logged an ONNX model (converted from a pyspark model) in MLFlow like this:

import mlflow

with mlflow.start_run() as run:
    mlflow.onnx.log_model(
        onnx_model=my_onnx_model,
        artifact_path="onnx_model",
        input_example=input_example,
    )

where input_example is a pandas DataFrame that gets saved to the run's artifacts.
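For reference, a minimal sketch of what such an input_example could look like (the column names here are hypothetical; your model's feature schema will differ):

```python
import pandas as pd

# Hypothetical feature columns for illustration only;
# use the columns your pyspark/ONNX model actually expects.
input_example = pd.DataFrame(
    {
        "feature_a": [1.0, 2.0],
        "feature_b": [0.5, 1.5],
    }
)

print(input_example.shape)  # (2, 2)
```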

On the Databricks experiments page, I can see the logged model along with an input_example.json file that indeed contains the data I provided as input_example when logging the model.

How can I use that data now to make predictions, to test whether the ONNX model was logged correctly? On the model artifacts page in the Databricks UI, I see:

from mlflow.models import validate_serving_input

model_uri = 'runs:/<some-model-id>/onnx_model'

# The logged model does not contain an input_example.
# Manually generate a serving payload to verify your model prior to deployment.
from mlflow.models import convert_input_example_to_serving_input

# Define INPUT_EXAMPLE via assignment with your own input example to the model
# A valid input example is a data instance suitable for pyfunc prediction
serving_payload = convert_input_example_to_serving_input(INPUT_EXAMPLE)

# Validate the serving payload works on the model
validate_serving_input(model_uri, serving_payload)
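One way to smoke-test the logged model is to load the saved input example back from the run's artifacts and feed it through the generic pyfunc interface. This is a sketch, assuming an MLflow 2.x release where `mlflow.models.Model.load_input_example` is available; `<some-model-id>` stays a placeholder for your actual run id:

```python
import mlflow
from mlflow.models import Model

# Same URI as in the UI snippet; replace the placeholder with your run id.
model_uri = "runs:/<some-model-id>/onnx_model"

# Load the saved input example (the pandas DataFrame logged above)
# back from the run's artifacts.
input_example = Model.load(model_uri).load_input_example(model_uri)

# Load the ONNX model through the pyfunc flavor and predict on the example.
model = mlflow.pyfunc.load_model(model_uri)
predictions = model.predict(input_example)
print(predictions)
```

If this runs without a schema or shape error, the model and its input example were logged consistently; the `validate_serving_input` call shown above checks the same thing for the serving payload format.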