
java - Loading a model but I get this error of bytes - Stack Overflow

    private MappedByteBuffer loadModelFile() throws IOException {
        AssetFileDescriptor fileDescriptor = getAssets().openFd("model1.tflite");
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, fileDescriptor.getStartOffset(), fileDescriptor.getDeclaredLength());
    }

    private void runModelOnImage() {
        if (imageBitmap == null) {
            Toast.makeText(this, "No image selected", Toast.LENGTH_SHORT).show();
            return;
        }

        Bitmap resizedImage = Bitmap.createScaledBitmap(imageBitmap, MODEL_INPUT_SIZE, MODEL_INPUT_SIZE, true);
        ByteBuffer byteBuffer = convertBitmapToByteBuffer(resizedImage);

        if (tflite == null) {
            Toast.makeText(this, "Model not loaded!", Toast.LENGTH_SHORT).show();
            return;
        }

        float[][] output = new float[1][4]; // Adjust based on the number of classes
        try {
            tflite.run(byteBuffer, output); // Ensure input matches expected shape
            displayPrediction(output);
        } catch (Exception e) {
            Toast.makeText(this, "Error running model: " + e.getMessage(), Toast.LENGTH_SHORT).show();
        }
    }

    private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * MODEL_INPUT_SIZE * MODEL_INPUT_SIZE * 3); // Adding batch size
        byteBuffer.order(ByteOrder.nativeOrder());

        int[] intValues = new int[MODEL_INPUT_SIZE * MODEL_INPUT_SIZE];
        bitmap.getPixels(intValues, 0, MODEL_INPUT_SIZE, 0, 0, MODEL_INPUT_SIZE, MODEL_INPUT_SIZE);

        for (int pixelValue : intValues) {
            byteBuffer.putFloat(((pixelValue >> 16) & 0xFF) / 255.0f);
            byteBuffer.putFloat(((pixelValue >> 8) & 0xFF) / 255.0f);
            byteBuffer.putFloat((pixelValue & 0xFF) / 255.0f);
        }

        return byteBuffer;
    }

Here is the error when running the model:

Cannot copy to a tensorflowlite tensor (serving_default_keras_tensor_142:0) with 25165824 bytes from a java buffer with 602112 bytes.


1 Answer

The error indicates that TensorFlow Lite expects an input buffer of 25165824 bytes, but the buffer you pass in has only 602112 bytes.
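For reference, 602112 is exactly 4 bytes × 224 × 224 × 3, so your buffer matches a 224×224 RGB float input, while the model's input tensor is far larger. Rather than guessing, you can ask the interpreter what the model actually expects. A minimal sketch (the helper name is mine, not from your code) that logs the input tensor's shape and byte size:

    // Hypothetical helper: logs what the interpreter expects for its first input.
    // Requires: import org.tensorflow.lite.Tensor; import android.util.Log; import java.util.Arrays;
    private void logModelInputShape() {
        Tensor inputTensor = tflite.getInputTensor(0);   // first input of the model
        int[] shape = inputTensor.shape();               // e.g. [1, height, width, channels]
        Log.d("TFLite", "Input shape: " + Arrays.toString(shape)
                + ", expected bytes: " + inputTensor.numBytes());
    }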

Ensure that the size of the ByteBuffer matches the model's input shape. For example, if your model expects an input size of 224x224x3, you should allocate the ByteBuffer as follows:

    int inputSize = 224; // Replace with the actual size your model expects
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * inputSize * inputSize * 3);
    byteBuffer.order(ByteOrder.nativeOrder());

Try modifying your convertBitmapToByteBuffer method with the code below and check whether it works:

    private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
        int inputSize = 224; // Replace with the correct model input size
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * inputSize * inputSize * 3);
        byteBuffer.order(ByteOrder.nativeOrder());

        int[] intValues = new int[inputSize * inputSize];
        bitmap.getPixels(intValues, 0, inputSize, 0, 0, inputSize, inputSize);

        for (int pixelValue : intValues) {
            byteBuffer.putFloat(((pixelValue >> 16) & 0xFF) / 255.0f); // red
            byteBuffer.putFloat(((pixelValue >> 8) & 0xFF) / 255.0f);  // green
            byteBuffer.putFloat((pixelValue & 0xFF) / 255.0f);         // blue
        }

        return byteBuffer;
    }
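Whatever size you use, the bitmap you scale in runModelOnImage has to match it, otherwise getPixels will read the wrong region, so update MODEL_INPUT_SIZE (or pass the size in) as well. A sketch of deriving the size from the interpreter itself, assuming an NHWC float model whose input shape is [1, height, width, 3]:

    // Sketch: take the input size from the model instead of a hard-coded constant.
    // Assumes shape() returns [1, height, width, 3] (NHWC float input).
    int[] inputShape = tflite.getInputTensor(0).shape();
    int inputHeight = inputShape[1];
    int inputWidth = inputShape[2];

    Bitmap resizedImage = Bitmap.createScaledBitmap(imageBitmap, inputWidth, inputHeight, true);
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * inputHeight * inputWidth * 3);
    byteBuffer.order(ByteOrder.nativeOrder());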

Also, confirm that the model file is loaded properly:

    MappedByteBuffer modelFile = loadModelFile();
    Interpreter.Options options = new Interpreter.Options();
    tflite = new Interpreter(modelFile, options);
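The hard-coded output array float[1][4] also has to match the model's output tensor; you can check it the same way as the input. A sketch, assuming the model has a single 2-D output of shape [1, numClasses]:

    // Sketch: size the output buffer from the model instead of hard-coding new float[1][4].
    // Assumes a single output tensor of shape [1, numClasses].
    int[] outputShape = tflite.getOutputTensor(0).shape();
    float[][] output = new float[outputShape[0]][outputShape[1]];
    tflite.run(byteBuffer, output);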
