
tensorflow - TFLite C++ API Invoke causes Segmentation fault - Stack Overflow


I am having issues running a simple call to the TFLite (C++ API) interpreter. It might be an issue with my C++ setup, but I honestly have no idea where the problem is.

Within the 'constructor', the model loads and a dummy image is run through it to initialize the weights. All subsequent runs should happen in the process call, but that is where I get Segmentation fault (core dumped). If I move the same code into the constructor, there are no issues there. I tried moving interpreter into public too. In the process call I can access interpreter just fine and I get the input/output tensor details. It fails at interpreter->Invoke() and doesn't even get to the std::cout << "Failed to invoke model!" << std::endl;.

Any help would be appreciated.

Main cpp file:

#include "inference.hpp"
#include <iostream>

int main() {
  
    ObjectDetectionProcessor obj("/src/model.tflite");
    obj.process();

    return 0;
}

And the corresponding hpp file (inference.hpp):

#include <opencv2/opencv.hpp>

#include <fstream>
#include <string>
#include <vector>
#include <memory>
#include <iostream>
#include <cstring> // for std::memcpy

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

class ObjectDetectionProcessor {
public:
    ObjectDetectionProcessor(const std::string& model_path) {
                                
        auto model = tflite::FlatBufferModel::BuildFromFile(model_path.c_str());
        if (!model) {
            std::cout << "Failed to mmap model " << model_path << std::endl;
            exit(-1);
        }

        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        if (!interpreter) {
            std::cout << "Failed to construct interpreter" << std::endl;
            exit(-1);
        }

        if (interpreter->AllocateTensors() != kTfLiteOk) {
            std::cout << "Failed to allocate tensors!" << std::endl;
            exit(-1);
        }

        int input_index = interpreter->inputs()[0];
        input_tensor = interpreter->tensor(input_index);
        const uint input_width  = input_tensor->dims->data[2];
        const uint input_height = input_tensor->dims->data[1];
        const uint input_channels = input_tensor->dims->data[3];
        const uint input_type = input_tensor->type;

        // get quantization parameters
        input_scale = input_tensor->params.scale;
        input_zero_point = input_tensor->params.zero_point;

        is_initialized = false;

        int cv_type = (input_type == kTfLiteFloat32) ? CV_32FC3 : CV_8UC3; // Adjust type based on tensor type
        
        // create a dummy cv::Mat frame of zeros of input_width and input_height of type input_type
        cv::Mat dummy_frame = cv::Mat::zeros(input_height, input_width, cv_type);


        // // and pass it to the interpreter to warm up the model
        TfLiteTensor* input_data = interpreter->tensor(interpreter->inputs()[0]);
        
        std::memcpy(input_data->data.uint8, dummy_frame.ptr<uint8_t>(0), dummy_frame.total() * dummy_frame.elemSize());
        
        if (interpreter->Invoke() != kTfLiteOk) {
            std::cout << "Failed to warm up model!" << std::endl;
            exit(-1);
        }
        int output_index = interpreter->outputs()[0];
        output_tensor = interpreter->tensor(output_index);
        float* out_data = interpreter->typed_output_tensor<float>(0);

        is_initialized = true;

    }


    void process(){

        if (!is_initialized) {
            std::cout << "Model not initialized!" << std::endl;
            return;
        }

        // Read image from file 
        cv::Mat dummy_frame = cv::imread("/src/image.jpg", cv::IMREAD_COLOR);

        // resize the image to the input size of the model
        cv::resize(dummy_frame, dummy_frame, cv::Size(input_tensor->dims->data[2], input_tensor->dims->data[1]));

        dummy_frame.convertTo(dummy_frame, CV_8UC3);


        // get the input and output tensor
        TfLiteTensor* input_data = interpreter->tensor(interpreter->inputs()[0]);
        TfLiteTensor* output_data = interpreter->tensor(interpreter->outputs()[0]);

        std::memcpy(input_data->data.uint8, dummy_frame.ptr<uint8_t>(0), dummy_frame.total() * dummy_frame.elemSize());
        

        // run inference
        if (interpreter->Invoke() != kTfLiteOk) {
            std::cout << "Failed to invoke model!" << std::endl;
            return;
        }

        // // get the output data
        float* boxes = interpreter->tensor(interpreter->outputs()[0])->data.f;
        float* classes = interpreter->tensor(interpreter->outputs()[1])->data.f;
        float* scores = interpreter->tensor(interpreter->outputs()[2])->data.f;

    }


private:
    std::unique_ptr<tflite::FlatBufferModel> model;
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    TfLiteTensor* input_tensor = nullptr;
    TfLiteTensor* output_tensor = nullptr;

    // quantization parameters read in the constructor
    float input_scale = 1.0f;
    int input_zero_point = 0;

    bool is_initialized;

};


EDIT: Compiling the code with

g++ -g -o out src/infer.cpp tensorflow/tensorflow/lite/delegates/external/external_delegate -I/usr/local/tensorflow/include -L/usr/local/tensorflow/lib -ltensorflowlite -I/usr/local/include/opencv4/ -L/usr/local/lib/ -lopencv_core -lopencv_imgcodecs -lopencv_imgproc

EDIT2:

This assumes a sample image located at /src/image.jpg and a .tflite model at /src/model.tflite. I used a pretrained model from TensorFlow here

EDIT 3: The backtrace doesn't produce anything useful:

(gdb) backtrace
#0  0x0000713a12be8ab9 in ?? () from /usr/local/lib/libtensorflowlite.so
#1  0x0000713a1290aa7b in ?? () from /usr/local/lib/libtensorflowlite.so
#2  0x0000713a12bf1474 in ?? () from /usr/local/lib/libtensorflowlite.so
#3  0x0000713a12bf18ae in ?? () from /usr/local/lib/libtensorflowlite.so
#4  0x0000713a12bd8a52 in ?? () from /usr/local/lib/libtensorflowlite.so
#5  0x0000713a1297e730 in ?? () from /usr/local/lib/libtensorflowlite.so
#6  0x0000713a1297f57e in ?? () from /usr/local/lib/libtensorflowlite.so
#7  0x0000713a1298190b in ?? () from /usr/local/lib/libtensorflowlite.so
#8  0x0000713a12982137 in TfLiteStatus tflite::ops::builtin::conv::EvalImpl<(tflite::ops::builtin::conv::KernelType)2, (TfLiteType)3>(TfLiteContext*, TfLiteNode*) () from /usr/local/lib/libtensorflowlite.so
#9  0x0000713a129821fb in TfLiteStatus tflite::ops::builtin::conv::Eval<(tflite::ops::builtin::conv::KernelType)2>(TfLiteContext*, TfLiteNode*) () from /usr/local/lib/libtensorflowlite.so
#10 0x0000713a12b8f519 in tflite::Subgraph::Invoke() () from /usr/local/lib/libtensorflowlite.so
#11 0x0000713a12b9508c in tflite::Interpreter::Invoke() () from /usr/local/lib/libtensorflowlite.so
#12 0x00006152b80498d9 in ObjectDetectionProcessor::process (this=0x7ffdf3118b20) at src/inference.hpp:226
#13 0x00006152b8047aaf in main () at src/infer.cpp:12

Looking at the pointers for input_data and input_tensor across the constructor and the function call, they point to the same addresses.

Constructor:

(gdb) p interpreter
$2 = std::unique_ptr<tflite::Interpreter> = {get() = 0x6152ecc21760}
(gdb) p input_tensor
$4 = (TfLiteTensor *) 0x6152ecc3aaa0
(gdb) p input_data 
$5 = (TfLiteTensor *) 0x6152ecc3aaa0


Process function:

(gdb) p input_tensor_
$7 = (TfLiteTensor *) 0x6152ecc3aaa0
(gdb) p interpreter
$8 = std::unique_ptr<tflite::Interpreter> = {get() = 0x6152ecc21760}
(gdb) p input_data 
$9 = (TfLiteTensor *) 0x6152ecc3aaa0

  • Most likely an object lifetime issue somewhere. The shadowing of the model member looks suspicious, as does the anonymous InterpreterBuilder that you discard immediately. – molbdnilo Commented Feb 17 at 10:33
  • @TedLyngmo This is the minimal reproducible example; I'll add the command I'm running it with too. As for the debug output: I tried running it with gdb. I'm not too familiar with C++, but the only error I got is Program received signal SIGSEGV, Segmentation fault on the Invoke call. – Maja Commented Feb 17 at 11:24
  • @molbdnilo InterpreterBuilder doesn't seem to be the issue. It builds the interpreter into the std::unique_ptr<tflite::Interpreter> interpreter, so it is not discarded. – Maja Commented Feb 17 at 11:35
  • Yes, the InterpreterBuilder you create with tflite::InterpreterBuilder(*model, resolver) is destroyed immediately after calling its operator() and before execution of the next line. Whether that is a problem or not is a different matter. – molbdnilo Commented Feb 17 at 13:01
  • Have you tried: 1) enabling (and fixing) more compiler warnings? 2) using address-sanitizer? 3) using undefined behaviour sanitizer? 4) using valgrind? 5) using a debugger? 6) reducing your problem/code to a minimal reproducible example? – Jesper Juhl Commented Feb 17 at 15:39

2 Answers


Not being fluent in C++, I followed a few examples I found, and they used the auto type. So did I.

auto model = tflite::FlatBufferModel::BuildFromFile(model_path.c_str());

This line is the problem. I assumed auto would simply deduce the type of the private member variable declared below, but it actually declares a new local variable that shadows the member, so the member is never assigned and the model is gone by the time the other member functions run.

Removing auto and just calling

model = tflite::FlatBufferModel::BuildFromFile(model_path.c_str());

works as expected.
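For completeness, the start of the corrected constructor then looks like this (a minimal sketch based on the code above; the rest of the constructor is unchanged):

ObjectDetectionProcessor(const std::string& model_path) {
    // Assign to the member 'model' instead of declaring a new local with 'auto',
    // so the FlatBufferModel stays alive for as long as the interpreter uses it.
    model = tflite::FlatBufferModel::BuildFromFile(model_path.c_str());
    if (!model) {
        std::cout << "Failed to mmap model " << model_path << std::endl;
        exit(-1);
    }

    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    // ... rest of the constructor as before ...
}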

ObjectDetectionProcessor(const std::string& model_path) {
    auto model = tflite::FlatBufferModel::BuildFromFile(model_path.c_str());
    //...
}

auto model = ... inside your constructor creates a variable local to the constructor which is shadowing the member variable named model. When the constructor returns, this local variable is destroyed and the member variable model that you access from other member functions is still an empty std::unique_ptr<tflite::FlatBufferModel>.

By removing auto you instead assign the return value from BuildFromFile to the member variable model, which will then contain a pointer to the FlatBufferModel that you can access from other member functions.
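The same pitfall in a standalone form; the class and member names here are made up purely for illustration:

#include <iostream>
#include <memory>

struct Holder {
    Holder() {
        // BUG: 'auto' declares a new local that shadows the member 'value'.
        auto value = std::make_unique<int>(42);
        // The local is destroyed when the constructor returns; the member stays null.
    }
    void use() {
        std::cout << (value ? "member is set" : "member is null") << std::endl;
    }
    std::unique_ptr<int> value;
};

int main() {
    Holder h;
    h.use(); // prints "member is null"
}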
