For the past day or two I have been trying to build TensorFlow Lite so that I can use it as a header or library in my own C/C++ projects.
For example, I have a C++ project with a main.cpp containing the following code:
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
int main()
{
std::unique_ptr<tflite::FlatBufferModel> model;
model = tflite::FlatBufferModel::BuildFromBuffer(h5_converted_tflite, h5_converted_tflite_len);
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
// Resize input tensors, if desired.
interpreter->AllocateTensors();
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
}
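(For context: h5_converted_tflite and h5_converted_tflite_len are not defined in the snippet above; presumably they come from a header generated from the .tflite model file, e.g. with xxd -i. A hypothetical sketch of such a header, with placeholder bytes rather than a real model:)

// Hypothetical header, as generated by e.g. `xxd -i model.tflite > h5_converted.h`
// (assumption: the buffer used in main.cpp was produced this way).
// The byte values below are placeholders, not a real model.
unsigned char h5_converted_tflite[] = {0x20, 0x00, 0x00, 0x00};
unsigned int h5_converted_tflite_len = 4;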
What do I need to download and build, and from where, so that this code compiles successfully? Currently it obviously complains that the header files cannot be found; when I clone the TF repository and add it to my include path, it then cannot find the "flatbuffers.h" file, and when I add that manually as well, I get a lot of linker errors.
Any help would be appreciated...
Thanks in advance
Best answer
Try the code below, which has been tested with TensorFlow Lite 1.14.0:
#include <fstream>
#include <memory>
#include <string>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main()
{
    // Read the whole .tflite file into a buffer.
    std::string str = "model.tflite";
    std::ifstream file(str, std::ifstream::binary);
    file.seekg(0, file.end);
    int length = file.tellg();
    file.seekg(0, file.beg);
    char* model_data = new char[length];
    file.read(model_data, length);
    file.close();

    // model_data must outlive the model: BuildFromBuffer does not copy it.
    std::unique_ptr<tflite::Interpreter> interpreter;
    std::unique_ptr<tflite::FlatBufferModel> model;
    tflite::ops::builtin::BuiltinOpResolver resolver;
    model = tflite::FlatBufferModel::BuildFromBuffer(model_data, length);
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();
}
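For completeness, a minimal sketch of running inference, to be placed after AllocateTensors() inside main (assuming a single float input tensor and a single float output tensor, as in the question's code):

    // Fill the input tensor; the values here are placeholders.
    float* input = interpreter->typed_input_tensor<float>(0);
    input[0] = 1.0f;

    // Run the model and read the result.
    if (interpreter->Invoke() != kTfLiteOk)
        return 1;
    float* output = interpreter->typed_output_tensor<float>(0);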
Regarding "c++ - How to use the TF Lite library in a C++ project", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56573425/