
python - Implementing an ML model into an android app with tensorflow lite - Stack Overflow


So I am new to the domain of Android app development. I have been trying to implement my custom machine learning model, designed for image super resolution, into an Android application. If I understand the steps correctly:

  1. Make the model in TensorFlow and convert it to TFLite using tflite_converter, OR make the model in PyTorch and convert it to TFLite using ai-edge-torch (a minimal sketch of the TensorFlow route follows this list).

  2. Add metadata to the TFLite model using the tflite-support library.

  3. Add the model to the app in Android Studio by importing the .tflite file and using the generated code.
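
For the TensorFlow route, the conversion step I mean is roughly the following (a minimal sketch; the model and file names are placeholders for my actual files):

import tensorflow as tf

# Placeholder: load my trained super-resolution Keras model.
model = tf.keras.models.load_model("sr_model.keras")

# Convert the Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk.
with open("sr_model.tflite", "wb") as f:
    f.write(tflite_model)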

Issues I am having:

  1. ai-edge-torch doesn't work on Kaggle / Colab / an offline Jupyter notebook due to a lot of dependency issues.

  2. The tflite-support library doesn't seem to work. I was able to install tflite_support 0.1.0a1 in my venv and it does seem to run, but every time I cross check whether the metadata was saved to my model, it doesn't show any changes: the changes are not reflected when I print the metadata from Python, nor when I import the model in Android Studio. The latest version, tflite_support 0.4.4, doesn't work on any of the platforms. I have also tried a virtual environment with Python 3.9 and it still didn't work.

  3. The imported model doesn't work properly in Android Studio if I use it without metadata, because the app keeps crashing. I tried downloading a pretrained ESRGAN.tflite with metadata and it works perfectly fine. link (A quick sanity check of my own model with the plain TFLite interpreter is sketched after this list.)
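
To make sure the converted model itself is fine before blaming the metadata, this is the kind of sanity check I can run in Python with the plain TFLite interpreter (a minimal sketch; the file name and the [1, 128, 128, 3] input shape are placeholders matching my model):

import numpy as np
import tensorflow as tf

# Placeholder file name for my converted model.
interpreter = tf.lite.Interpreter(model_path="sr_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]["shape"], input_details[0]["dtype"])

# Feed a dummy low-resolution image and check the upscaled output shape.
dummy = np.random.rand(1, 128, 128, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)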

Please help me either correctly add metadata to my model, import it into Android Studio and make it work, or suggest any other framework to get this working. I am ready to learn and use Flutter as well if it is easier to perform this task there. I am new here, so a little detailed advice would be appreciated. I am using Python 3.12.8 and TensorFlow 2.17.0 in my venv. Thank you.

I have tried many things so far to add metadata. One that has worked (at least seemed to work) is:

from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb

# model_name, MODEL_PATH and NEW_MODEL_PATH are defined earlier in my script.
# Create model info
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = model_name
model_meta.description = "Image Super Resolution"
model_meta.version = "v1"
model_meta.author = "Tanish"
model_meta.license = "None"

# Create input info
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "input"
input_meta.description = "Input image to be processed"
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = _metadata_fb.ColorSpaceType.RGB
input_meta.content.contentPropertiesType = _metadata_fb.ContentProperties.ImageProperties

# Specify tensor shape and type
input_meta.shape = [1, 128, 128, 3]  # [batch, height, width, channels]
input_meta.dtype = 1

# Create output info
output_meta = _metadata_fb.TensorMetadataT()
output_meta.name = "output"
output_meta.description = "Output upscaled image"
output_meta.content = _metadata_fb.ContentT()
output_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
output_meta.content.contentProperties.colorSpace = _metadata_fb.ColorSpaceType.RGB
output_meta.content.contentPropertiesType = _metadata_fb.ContentProperties.ImageProperties

# Specify tensor shape and type
output_meta.shape = [1, 512, 512, 3]  # [batch, height, width, channels]
output_meta.dtype = 1

subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
subgraph.outputTensorMetadata = [output_meta]
model_meta.subgraphMetadata = [subgraph]

b = flatbuffers.Builder(0)
b.Finish(
    model_meta.Pack(b),
    _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()

populator = _metadata.MetadataPopulator.with_model_file(MODEL_PATH)
populator.load_metadata_buffer(metadata_buf)
# Add metadata to the model
populator.populate()

# Get the updated model buffer
updated_model_buf = populator.get_model_buffer()

# Save the updated model to a file
with open(NEW_MODEL_PATH, "wb") as f:
    f.write(updated_model_buf)

print(f"Model with metadata saved to: {NEW_MODEL_PATH}")

I expected the metadata to change with respect to my updates, but the changes are not visible in Android Studio when I import the model.

And the app crashes even when I initialize the model.

I have also tried MediaPipe for the implementation, but that also doesn't work due to dependency issues.
