Institute of Intelligent Rehabilitation Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
REN Shiyang (2000- ), female, master's student at the Institute of Intelligent Rehabilitation Engineering, University of Shanghai for Science and Technology; research interest: machine-vision-based intelligent human-computer interaction.
WU Weiming (1997- ), male, Ph.D. student at the Institute of Intelligent Rehabilitation Engineering, University of Shanghai for Science and Technology; research interests: large models and artificial intelligence.
HU Bingshan (1982- ), male, Ph.D., professor and doctoral supervisor at the Institute of Intelligent Rehabilitation Engineering, University of Shanghai for Science and Technology; research interests: intelligent control of rehabilitation robots, design and control of flexible actuation mechanisms, and lightweight collaborative robotic arms.
YU Hongliu (1966- ), male, Ph.D., professor and doctoral supervisor at the Institute of Intelligent Rehabilitation Engineering, University of Shanghai for Science and Technology; research interests: human bionic mechanics and intelligent control, rehabilitation robots, and human-machine intelligent interaction.
Received: 2025-02-18
Published in print: 2026-04-15
Citation: REN Shiyang, WU Weiming, HU Bingshan, et al. Food category recognition for meal-assistance robots using MEAL-YOLOv8 [J]. Software Guide, 2026, 25(4): 20-26. DOI: 10.11907/rjdk.251103.
Abstract: Meal-assistance robots show great potential for helping elderly people and people with upper-limb disabilities. Because these robots are typically small and must remain easy to move, they require a lightweight design that operates efficiently and conserves energy within a limited space. To this end, MEAL-YOLOv8, a lightweight vision model, is proposed to provide more efficient food classification; it combines the BRA and MSDA attention mechanisms and fuses the strengths of YOLOv8 and MobileNetV3. Deployed on the Jetson Nano B01 platform, MEAL-YOLOv8 has a model size of only 2.37 MB and a memory footprint of 158 MB, and achieves a classification accuracy of 91.4%. Its high-precision, low-latency visual recognition enables meal-assistance robots to operate efficiently in resource-constrained environments and to identify food accurately in real time.
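The lightweight design the abstract attributes to MobileNetV3 rests largely on depthwise-separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mix. A minimal sketch of the parameter savings this factorization buys (illustrative channel and kernel sizes, not figures from the paper):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a standard k x k convolution (no bias term)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k conv (one filter per input channel) followed by a
    1 x 1 pointwise conv -- the factorization MobileNetV3 builds on."""
    return c_in * k * k + c_in * c_out

# Example layer: 128 -> 128 channels, 3 x 3 kernel.
std = conv_params(128, 128, 3)                  # 147456 parameters
dws = depthwise_separable_params(128, 128, 3)   # 1152 + 16384 = 17536
print(std, dws, round(std / dws, 1))            # roughly 8.4x fewer parameters
```

Stacking such blocks is what keeps backbones like MobileNetV3 in the low-megabyte range, consistent with the 2.37 MB model size reported for MEAL-YOLOv8 on the Jetson Nano.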