[Enhanced] method of memory loading model #2029
dhzgit wants to merge 2 commits into open-mmlab:main
Conversation
Thanks for your contribution. However, the current APIs already have the ability to load a model from memory, like this:

```cpp
mmdeploy_model_t model;
std::ifstream ifs(model_path, std::ios::binary);  // /path/to/zipmodel
ifs.seekg(0, std::ios::end);
auto size = ifs.tellg();
ifs.seekg(0, std::ios::beg);
std::string str(size, '\0');
ifs.read(str.data(), size);
mmdeploy_model_create(str.data(), size, &model);

int status{};
mmdeploy_classifier_t classifier{};
status = mmdeploy_classifier_create(model, device_name, 0, &classifier);
// or use mmdeploy_classifier_create_v2
```

I'm not sure if it's necessary to add these APIs. What is your opinion about this PR? @lvhan028
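The file-reading portion of the snippet above can be wrapped in a small stdlib-only helper with error checking. This is an illustrative sketch — `read_file_to_buffer` is not part of the mmdeploy API:

```cpp
#include <fstream>
#include <optional>
#include <string>

// Read an entire binary file into memory. Returns std::nullopt on failure.
// The resulting buffer can then be passed to mmdeploy_model_create().
// (Illustrative helper, not part of the mmdeploy API.)
std::optional<std::string> read_file_to_buffer(const std::string& path) {
  std::ifstream ifs(path, std::ios::binary);
  if (!ifs) return std::nullopt;
  ifs.seekg(0, std::ios::end);
  const std::streamoff size = ifs.tellg();
  ifs.seekg(0, std::ios::beg);
  std::string buf(static_cast<std::size_t>(size), '\0');
  if (size > 0 && !ifs.read(buf.data(), size)) return std::nullopt;
  return buf;
}
```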
The current code already has "mmdeploy_classifier_create_by_path", and adding a corresponding "mmdeploy_classifier_create_by_buffer" would effectively reduce misunderstandings about the API and the cost of using it. After all, even though our staff are proficient in both Python/C++ and various AI model usage scenarios, it took a long time to find "mmdeploy_model_create" in the code and master the usage of this API.
```diff
  option(MMDEPLOY_BUILD_EXAMPLES "build examples" OFF)
  option(MMDEPLOY_SPDLOG_EXTERNAL "use external spdlog" OFF)
- option(MMDEPLOY_ZIP_MODEL "support SDK model in zip format" OFF)
+ option(MMDEPLOY_ZIP_MODEL "support SDK model in zip format" ON)
```
It is recommended to keep MMDEPLOY_ZIP_MODEL OFF as the default value, because not all users need it.
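With the default left OFF, users who do need zip-model support can enable the option at configure time instead (a typical CMake invocation; the build directory layout is assumed):

```shell
cmake -DMMDEPLOY_ZIP_MODEL=ON ..
```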
```cpp
  return ec;
}

int mmdeploy_classifier_create_by_buffer(const void* buffer, int size, const char* device_name,
```
We don't suggest creating another API. It's somewhat redundant, because mmdeploy_model_create and mmdeploy_classifier_create together already cover it.
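The redundancy can be seen by sketching what such a wrapper would boil down to: the two existing calls, composed. The mmdeploy functions below are stubbed out so the sketch is self-contained; a real build would include the SDK headers instead, and a real implementation would also release the intermediate model handle.

```cpp
// Stub declarations standing in for the real mmdeploy C API (illustration only).
typedef struct mmdeploy_model* mmdeploy_model_t;
typedef struct mmdeploy_classifier* mmdeploy_classifier_t;
static int mmdeploy_model_create(const void*, int, mmdeploy_model_t* model) {
  *model = nullptr;  // stub: pretend creation succeeded
  return 0;
}
static int mmdeploy_classifier_create(mmdeploy_model_t, const char*, int,
                                      mmdeploy_classifier_t* classifier) {
  *classifier = nullptr;  // stub: pretend creation succeeded
  return 0;
}

// What a hypothetical create-by-buffer convenience wrapper reduces to:
// create the model from the buffer, then create the classifier from the model.
int classifier_create_by_buffer(const void* buffer, int size, const char* device_name,
                                int device_id, mmdeploy_classifier_t* classifier) {
  mmdeploy_model_t model{};
  int ec = mmdeploy_model_create(buffer, size, &model);
  if (ec != 0) return ec;
  return mmdeploy_classifier_create(model, device_name, device_id, classifier);
}
```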
Hi,
I understand your concern. From a maintenance perspective, then, I think it's not necessary to change the API. Instead, we can describe a procedure in the user guide for creating predictors from a buffer.
Hi @dhzgit, we'd like to express our appreciation for your valuable contributions to mmdeploy. Your efforts have significantly aided in enhancing the project's quality. If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + Github ID". Thanks again for your awesome contribution, and we're excited to have you as part of our community!
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #2029      +/-   ##
==========================================
+ Coverage   49.44%   49.65%   +0.21%
==========================================
  Files         338      339       +1
  Lines       12920    12985      +65
  Branches     1897     1901       +4
==========================================
+ Hits         6388     6448      +60
- Misses       6088     6091       +3
- Partials      444      446       +2
```
@irexyc could you please add a tutorial about loading an mmdeploy SDK model from a buffer?
Motivation
There is no way to load a model from memory, so an encrypted model cannot be loaded without first decrypting it to disk.
Modification
Add methods to load models from memory.
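The encrypted-model flow this targets can be sketched as: decrypt the model bytes entirely in memory, then hand the plaintext buffer to the SDK without it ever touching disk. The cipher below is a toy XOR stand-in for whatever real encryption scheme is used (illustration only, not part of this PR):

```cpp
#include <cstdint>
#include <string>

// Toy XOR "cipher" standing in for a real decryption routine (illustration only).
// XOR is symmetric, so the same function encrypts and decrypts.
std::string xor_decrypt(const std::string& encrypted, std::uint8_t key) {
  std::string plain = encrypted;
  for (char& ch : plain) {
    ch = static_cast<char>(static_cast<std::uint8_t>(ch) ^ key);
  }
  return plain;
}

// Usage sketch (the mmdeploy calls are the existing C API from the discussion above):
//   std::string plain = xor_decrypt(encrypted_bytes, key);
//   mmdeploy_model_t model{};
//   mmdeploy_model_create(plain.data(), static_cast<int>(plain.size()), &model);
```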