CN117415831A - Intelligent meal-assisting robot control method and system for improving net food rate

Info

Publication number
CN117415831A
CN117415831A
Authority
CN
China
Prior art keywords: food, meal, grabbing, assisting, running track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311415840.XA
Other languages
Chinese (zh)
Inventor
邹凌 (Zou Ling)
朱堃 (Zhu Kun)
吕继东 (Lü Jidong)
李文杰 (Li Wenjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou University
Original Assignee
Changzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou University
Priority to CN202311415840.XA
Publication of CN117415831A
Legal status: Pending (current)

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides an intelligent meal-assisting robot control method and system for improving the net food rate. The method comprises the following steps: locating and identifying food based on a color segmentation technique and a deep learning model to obtain the position and type of the food; formulating a grabbing trajectory and a food-taking mode according to the position and type of the food; executing a meal-assisting task according to the grabbing trajectory and the food-taking mode; obtaining a single execution result of the meal-assisting task through a visual feedback system; and adjusting the grabbing trajectory according to the single execution result until the meal-assisting task is completed. The method improves the accuracy and efficiency with which the robot carries food and provides a better dining experience for users.

Description

Intelligent meal-assisting robot control method and system for improving net food rate
Technical Field
The invention relates to the technical fields of computer image processing and robotics, and in particular to an intelligent meal-assisting robot control method and system for improving the net food rate.
Background
At present, the meal-assisting robot is a practical intelligent robot that provides great convenience for people who need assistance with eating. In actual operation, however, because foods vary so widely, the robot easily spills or leaves behind food during grabbing and carrying. This reduces the robot's efficiency, degrades the user's dining experience, and wastes food.
Disclosure of Invention
In view of the above, the present invention provides an intelligent meal-assisting robot control method and system for improving the net food rate, so as to solve the above problems.
The invention provides an intelligent meal-assisting robot control method for improving the net food rate, which comprises the following steps: locating and identifying food based on a color segmentation technique and a deep learning model to obtain the position and type of the food; formulating a grabbing trajectory and a food-taking mode according to the position and type of the food; executing a meal-assisting task according to the grabbing trajectory and the food-taking mode; obtaining a single execution result of the meal-assisting task through a visual feedback system; and adjusting the grabbing trajectory according to the single execution result until the meal-assisting task is completed.
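To make the closed loop concrete, here is a minimal Python sketch of the five steps. It is an illustrative reading of the method only; every function and type name in it is a hypothetical placeholder, not part of the disclosure:

from dataclasses import dataclass

# Hypothetical placeholder types and stubs illustrating the five-step loop;
# none of these names come from the patent itself.

@dataclass
class Detection:
    kind: str              # food type, e.g. "rice"
    position: tuple        # (x, y) position in the scene image

@dataclass
class FeedbackResult:
    food_on_plate: bool
    enough_food_on_effector: bool

def locate_and_identify(image):                 # step 1 (stub)
    return [Detection("rice", (120, 80))]

def plan_grasp(detection):                      # step 2 (stub)
    mode = {"rice": "scooping"}.get(detection.kind, "clamping")
    trajectory = [detection.position, (0, 0)]   # waypoints toward the mouth
    return trajectory, mode

def visual_feedback(image):                     # step 4 (stub)
    return FeedbackResult(True, True)

def run_meal_assist(capture, execute, max_attempts=10):
    """Steps 1-5: perceive, plan, execute, check, adjust until done."""
    for _ in range(max_attempts):
        detections = locate_and_identify(capture())
        if not detections:
            return                              # plate empty: task completed
        trajectory, mode = plan_grasp(detections[0])
        execute(trajectory, mode)               # step 3: one meal-assist action
        feedback = visual_feedback(capture())
        if not feedback.enough_food_on_effector:
            continue                            # step 5: adjust and re-scoop

run_meal_assist(lambda: None, lambda t, m: None)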
In another implementation of the present invention, the method further comprises: acquiring food sample images and an annotation data set, wherein the annotation data contains information about food types and positions; and training on the food sample images and the annotation data set to obtain the deep learning model.
In another implementation of the present invention, locating and identifying the food based on the color segmentation technique and the deep learning model to obtain the position and type of the food comprises: the intelligent meal-assisting robot acquires a current food scene image through a visual sensor; food in the current food scene image is detected based on the color segmentation technique, and the position information corresponding to the detected food is determined; and the detected food and its position information are input into the deep learning model to obtain the type of the food.
In another implementation of the present invention, executing the meal-assisting task according to the grabbing trajectory and the food-taking mode comprises: the end of the intelligent meal-assisting robot's mechanical arm grabs the food according to the grabbing trajectory and the food-taking mode; and the grabbing trajectory and food-taking mode of the mechanical arm are adjusted according to the characteristic information of the grabbed food to complete the meal-assisting task, wherein the characteristic information comprises the shape, size and weight of the food.
In another implementation of the present invention, the method further comprises: dividing the food targets to be identified in the food sample images into five categories for data annotation; labeling images of slippery foods that are difficult to handle as the "piercing" type; labeling images of soft, large foods as the "cutting" type; labeling images of foods to be mixed as the "stirring" type; labeling images of easily clamped foods as the "clamping" type; and labeling images of granular foods as the "scooping" type.
In another implementation of the present invention, obtaining the single execution result of the meal-assisting task through the visual feedback system comprises: after a single meal-assisting operation is completed, the visual feedback system detects, with a detection algorithm, whether food is present on the dinner plate and on the effector, and determines the relative distribution of the food. Adjusting the grabbing trajectory according to the execution result until the meal-assisting task is completed comprises: adjusting the grabbing trajectory according to the presence and relative distribution of the food until the meal-assisting task is completed, wherein the grabbing trajectory comprises the position, direction and actions of the mechanical arm.
In another aspect, the invention provides an intelligent meal-assisting robot control system for improving the net food rate, comprising: an identification subsystem, configured to locate and identify food based on a color segmentation technique and a deep learning model to obtain the position and type of the food; a grabbing subsystem, configured to formulate a grabbing trajectory and a food-taking mode according to the position and type of the food, and to execute the meal-assisting task according to the grabbing trajectory and the food-taking mode; and a feedback subsystem, configured to obtain a single execution result of the meal-assisting task through a visual feedback system, and to adjust the grabbing trajectory according to the single execution result until the meal-assisting task is completed.
According to the intelligent meal-assisting robot control method for improving the net food rate, the mechanical arm's food-taking mode is selected according to the characteristics of each food to be eaten, and the motion trajectory of the carrying process is determined accordingly, so that food spillage and residue are reduced as much as possible. Specifically, computer vision and deep learning are used to identify different kinds of food, and a food-taking mode and trajectory are designed for the characteristics of each food: for fragile foods, a slow and steady motion can be adopted; for foods that are easily left behind, a rotating or vibrating motion can be adopted. In addition, during actual operation the state of the food can be monitored in real time and the behavior dynamically adjusted as needed. This makes the meal-assisting robot more intelligent and efficient, improves the net food rate, reduces food residue, increases the accuracy and efficiency with which the robot carries food, and provides a better dining experience for users.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below; the advantages and benefits of the solutions will become apparent to those skilled in the art from the detailed description of the embodiments. The drawings are only for the purpose of illustrating preferred embodiments and are not to be construed as limiting the invention. In the drawings:
FIG. 1 is a flow chart of the steps of an intelligent meal-assisting robot control method for improving the net food rate according to one embodiment of the present invention.
Fig. 2 is a schematic configuration diagram of a meal-assisting robot system according to another embodiment of the present invention.
Fig. 3 is a schematic diagram of a table-dish dataset construction process according to another embodiment of the present invention.
Fig. 4 is a schematic diagram of the end effector of the robot according to another embodiment of the present invention.
Detailed Description
In order to make the technical solutions in the embodiments of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly and specifically described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the present invention, shall fall within the scope of protection of the embodiments of the present invention.
Fig. 1 is a flowchart of the steps of an intelligent meal-assisting robot control method for improving the net food rate according to an embodiment of the present invention. As shown in fig. 1, the embodiment mainly includes the following steps:
s101, positioning and identifying food based on a color segmentation technology and a deep learning model to obtain the position and the type of the food.
Illustratively, the meal-assisting robot is trained for food recognition with a vision-based approach; the technique separates food pixels from the background, enabling accurate localization and recognition of the food. Considering the wide variety of foods, a learning algorithm is provided to enhance recognition capability: by combining cross-category information, it can recognize untrained food categories, especially when foods cannot otherwise be effectively distinguished, so that the meal-assisting robot can quickly identify new or irregular foods. This improves overall recognition ability and is particularly suitable for foods that are difficult to define clearly in a traditional classification.
Preferably, for certain foods, combinations of categories can be employed to enhance recognition: by learning a particular combination of categories that the target food "looks like", the robot improves recognition accuracy in complex situations. By adapting a pre-trained deep-learning network object-recognition algorithm, the meal-assisting robot can detect new foods even if it has not previously been trained on their exact categories.
S102, formulating a grabbing trajectory and a food-taking mode according to the position and type of the food.
For example, different foods may require different grabbing strategies to ensure stable pick-up and transport. The food-taking mode is determined from the characteristics of the food: the shape, size, weight and surface properties (e.g., smooth, rough, fragile) are first analyzed to select an appropriate mode.
S103, executing the meal-assisting task according to the grabbing trajectory and the food-taking mode.
Illustratively, the robot adopts the corresponding food-taking mode according to the type and position information of the food, and the end effector of the mechanical arm is adaptively controlled to improve the grabbing success rate and the net food rate.
It should be appreciated that the robot performs mechanical-arm control, such as inverse kinematics, trajectory planning and force control, via a learned control algorithm, achieving precise grabbing and carrying actions and ensuring stable pick-up and transport. Specific gesture trajectories are designed for the different food categories to ensure optimal grabbing and carrying.
Specifically, the end-effector actions of the meal-assisting robot are shown in fig. 4. For slippery foods that are difficult to handle, the effector takes a "piercing" action; for soft, large foods, a "cutting" action; for foods to be mixed, a "stirring" action; for easily clamped foods, a "clamping" action; and for rice and similar dishes, a "scooping" action. For the selected action, the corresponding joint variables are solved from the given end-effector pose, determining the rotation angle of each joint, and the mechanical arm is controlled to complete the motion in space.
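As an illustration only, this category-to-action dispatch can be written as a simple lookup table in Python; the wording of the actions below is ours, not the patent's:

# Sketch: map the five labeled food categories (fig. 4) to end-effector actions.
ACTION_BY_CATEGORY = {
    "piercing": "drive the chopstick tips into the item",  # slippery food
    "cutting":  "press the chopsticks through the item",   # soft, large food
    "stirring": "mix with a circular motion",              # food to be mixed
    "clamping": "close the chopsticks on the item",        # easily clamped food
    "scooping": "ladle with a supporting chopstick",       # rice, granular dishes
}

def select_action(category: str) -> str:
    """Return the end-effector action for a labeled food category."""
    return ACTION_BY_CATEGORY.get(category, ACTION_BY_CATEGORY["clamping"])

print(select_action("scooping"))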
S104, obtaining a single execution result of the meal-assisting task through a visual feedback system.
Illustratively, in the real-time recognition stage, the trained model is deployed on the meal-assisting robot; images of the food on the table are acquired in real time through sensors such as a camera, and classification and target detection are performed to obtain the type and position of the food. On this basis, a vision-based feedback system adjusts the robot's behavior, with the following specific steps:
s1041, food supply detection: detecting whether the dinner plate has enough food supply and the actuator has enough food, and adjusting subsequent operation according to the detection result.
S1042, user feeding detection: detect whether the user has eaten the food using a two-class classifier ("food present" / "food absent"), and adjust the robot's operating strategy according to the result.
S1043, visual feedback application: convert the information obtained from visual perception into tasks executed by the mechanical arm, so as to achieve accurate motion planning.
Specifically, dinner-plate food detection: a detection algorithm detects whether food is present on the plate. If food is detected, the process proceeds to the next step; if not, the mechanical arm does not proceed. Effector food detection: detect whether there is enough food on the effector, adjusting the detection algorithm so that the amount of food reaches a preset standard. Feeding threshold selection: from the color histogram of the end-effector image, a threshold for "enough food" is determined, which helps the robot decide whether to continue the feeding action.
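A hedged OpenCV sketch of the "enough food" check: the hue-histogram comparison and the 0.5 score threshold are illustrative assumptions, since the patent states only that a threshold is derived from the color histogram of the end-effector image:

import numpy as np
import cv2

def hue_hist(rgb: np.ndarray) -> np.ndarray:
    """Normalized 32-bin hue histogram of an RGB image (OpenCV hue range 0-180)."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist)

def enough_food(effector_roi: np.ndarray, reference_food: np.ndarray,
                min_score: float = 0.5) -> bool:
    """Decide whether the end-effector image contains enough food.

    min_score is an illustrative threshold; the patent gives no numeric value.
    """
    score = cv2.compareHist(hue_hist(reference_food), hue_hist(effector_roi),
                            cv2.HISTCMP_CORREL)
    return score >= min_score

# Example: two patches of the same food color match perfectly
food = np.full((20, 20, 3), (200, 160, 60), dtype=np.uint8)
print(enough_food(food.copy(), food))   # True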
S105, adjusting the grabbing trajectory according to the single execution result until the meal-assisting task is completed.
Illustratively, the robot's behavior is adjusted according to the single execution result: based on the detection result and the color-histogram information, the robot adjusts itself according to a preset strategy. If the end effector is detected not to have obtained enough food, the meal-assisting robot re-scoops; if food is detected on the dinner plate but the end effector repeatedly fails to pick any up, the robot changes the position of the bowl or changes the scooping mode, thereby improving feeding efficiency.
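The retry policy of this step can be sketched as a small decision function; the miss limit of 3 is our assumption, since the patent says only that the effector "continuously does not detect the food":

def adjust_after_attempt(food_on_plate: bool, food_on_effector: bool,
                         consecutive_misses: int, miss_limit: int = 3) -> str:
    """Choose the next action from one execution result (step S105)."""
    if not food_on_plate:
        return "stop"                 # nothing left on the plate: task complete
    if food_on_effector:
        return "feed"                 # enough food picked up: bring it to the mouth
    if consecutive_misses >= miss_limit:
        return "reposition_bowl"      # change bowl position or scooping mode
    return "rescoop"                  # retry the grasp trajectory

print(adjust_after_attempt(True, False, consecutive_misses=3))  # reposition_bowl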
According to the intelligent meal-assisting robot control method for improving the net food rate, the mechanical arm's food-taking mode is selected according to the characteristics of each food to be eaten, and the motion trajectory of the carrying process is determined accordingly, so that food spillage and residue are reduced as much as possible. Specifically, computer vision and deep learning are used to identify different kinds of food, and a food-taking mode and trajectory are designed for the characteristics of each food: for fragile foods, a slow and steady motion can be adopted; for foods that are easily left behind, a rotating or vibrating motion can be adopted. In addition, during actual operation the state of the food can be monitored in real time and the behavior dynamically adjusted as needed. This makes the meal-assisting robot more intelligent and efficient, improves the net food rate, reduces food residue, increases the accuracy and efficiency with which the robot carries food, and provides a better dining experience for users.
In another implementation of the invention, the meal-assisting robot system is configured as shown in fig. 2, where a six-axis horizontal articulated robot arm serves as the main component of the meal-assisting assembly. In addition, to adjust the gripping force of the chopsticks to the characteristics of the food, a 3-axis force sensor is used: the X-axis reading reflects the mass of the food, the Y-axis its weight, and the Z-axis detects contact between the chopsticks and the user, preventing dangerous situations. A camera acquires visual images of the food to be eaten, and these images are displayed on a screen for the user to choose from.
Preferably, the whole process is autonomously controlled by the user. Operating the meal-assisting robot is very simple: clicking a start button on the operation panel starts the whole food-taking and feeding process, an operation mode that is convenient, simple and easy to master. When the user wants to select a food, they move the mouse pointer to the desired food on the screen and click its image; the meal-assisting robot then completes the food-taking and feeding process on its own.
Preferably, regarding the choice of feeding utensil: although evaluation shows that a fork-and-spoon combination offers good functionality, chopsticks can handle a wider variety of foods and can perform the various basic actions of stirring, piercing, cutting, clamping and scooping. In addition, when eating with chopsticks the diner does not need to open the mouth as wide as with a spoon, and chopsticks can perform finer motions and can be regarded as an extension of the user's fingers. Chinese users are also thoroughly familiar with chopsticks. Based on these considerations, chopsticks were chosen as the food-grabbing tool.
In another implementation of the present invention, the method further comprises: acquiring food sample images and an annotation data set, wherein the annotation data contains information about food types and positions; and training on the food sample images and the annotation data set to obtain the deep learning model.
Illustratively, sample images are prepared: pixel sample images containing different foods and backgrounds are collected to teach the robot the colors of different foods. Each food sample image undergoes color matching: using a nearest-neighbor rule with Euclidean distance in RGB color space, each pixel is labeled with its best-matching color (food or background), generating a binary mask of food pixels. The binary pixel labels are then cleaned by erosion and dilation, and connected components are grouped to accurately identify individual instances of food. Typically, the center point of the largest connected component is selected as the target point for the food. This helps the robot learn to distinguish food colors, so that it can accurately identify food while executing a task, and it also supports the detection algorithm.
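A hedged OpenCV/NumPy sketch of this pixel-labeling pipeline (nearest-neighbor color matching, erosion and dilation cleanup, centroid of the largest connected component); the reference colors and kernel size are illustrative, not values from the patent:

import numpy as np
import cv2

def food_target_point(image: np.ndarray, food_colors: np.ndarray,
                      bg_colors: np.ndarray, kernel_size: int = 5):
    """Return the center of the largest food blob, or None if no food is found.

    image:       H x W x 3 RGB image.
    food_colors: N x 3 reference RGB vectors of food samples.
    bg_colors:   M x 3 reference RGB vectors of background samples.
    """
    pixels = image.reshape(-1, 3).astype(float)
    # Nearest-neighbor label in RGB space: food vs. background
    d_food = np.min(np.linalg.norm(pixels[:, None] - food_colors, axis=2), axis=1)
    d_bg = np.min(np.linalg.norm(pixels[:, None] - bg_colors, axis=2), axis=1)
    mask = (d_food < d_bg).reshape(image.shape[:2]).astype(np.uint8)
    # Clean the binary mask with erosion and dilation
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)
    # Largest connected component; its centroid is the food's target point
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])        # (x, y) target point

# Example with a synthetic food blob on a dark background
img = np.zeros((60, 60, 3), dtype=np.uint8)
img[20:40, 20:40] = (220, 180, 90)
print(food_target_point(img,
                        food_colors=np.array([[220.0, 180.0, 90.0]]),
                        bg_colors=np.array([[0.0, 0.0, 0.0]])))  # ~ (29.5, 29.5)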
It should be appreciated that the food sample images may need to be preprocessed before labeling. As shown in fig. 3, food sample images can be collected in batches by photographing canteen dishes and gathering Chinese dish images from the internet. The acquired images can be preprocessed with image-processing techniques, including scaling, cropping and enhancement, to reduce noise and improve image quality. This preprocessing stage ensures the quality of the training data, thereby improving the robustness and accuracy of the model.
It should also be appreciated that, given the wide variety of foods and their varying shapes and sizes, a learning algorithm is proposed that can identify untrained food categories. In some cases, when a food cannot be effectively distinguished, combinations of food categories are used to enhance identification: the algorithm lets the robot find the particular combination of categories that the target food "looks like". For example, when the meal-assisting robot faces a food such as a sweet dumpling, which is both soft and oversized and slippery, it can search for objects that are simultaneously seen as large food and slippery food, thereby identifying the food more accurately and distinguishing it from other foods. The core idea is that, by considering combinations of different food categories, the robot improves recognition accuracy in complex situations; by adapting a pre-trained deep-learning network object-recognition algorithm, the meal-assisting robot can detect a new food even if it has never been trained on its exact category.
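One possible reading of this "looks like" idea, sketched under our own assumptions: score a novel item against combinations of trained attribute-category confidences and pick the handling action of the best-scoring combination. The scoring rule and category names below are illustrative, not the patent's algorithm:

import numpy as np

# Illustrative per-category confidences for one detected item
scores = {"large_soft": 0.72, "slippery": 0.65, "granular": 0.08, "clampable": 0.12}

# Hypothetical "looks like" combinations mapped to handling actions
COMBOS = {
    ("large_soft", "slippery"): "piercing",   # e.g. a sweet dumpling
    ("granular",): "scooping",
    ("clampable",): "clamping",
}

def combined_action(scores: dict, combos: dict, threshold: float = 0.5) -> str:
    """Pick the action whose category combination has the highest mean confidence."""
    best_action, best_score = "clamping", 0.0        # default action
    for categories, action in combos.items():
        mean = float(np.mean([scores.get(c, 0.0) for c in categories]))
        if mean >= threshold and mean > best_score:
            best_action, best_score = action, mean
    return best_action

print(combined_action(scores, COMBOS))   # -> "piercing"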
Preferably, adaptively spatial feature fusion (ASFF) is applied to YOLO: an adaptive fusion module is placed after the feature-extraction part of the YOLO model, allowing it to process information from feature maps of different scales and fuse them intelligently. In ASFF, the features of the three scales are fused at each spatial position (i, j) of level l as

y_ij^l = α_ij^l · x_ij^(1→l) + β_ij^l · x_ij^(2→l) + γ_ij^l · x_ij^(3→l), with α_ij^l + β_ij^l + γ_ij^l = 1,

where x^(n→l) denotes the feature map of level n resized to level l, and the gradients of the loss flow back through this weighted sum. The fusion coefficients (α_ij and its counterparts) are learned with the standard back-propagation algorithm, and effective coefficients are obtained by training on different food types. By adding an SE (Squeeze-and-Excitation) module, the model automatically learns the relative importance of each channel of the feature map, enhancing its sensitivity to channel features and improving performance. The SE module learns the influence weight of each channel on the final result, generates a corresponding weight-coefficient vector, and multiplies it with the original feature map to produce a new feature map, which improves model performance. For the problem of food variety, this combination across different food types improves identification accuracy.
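A hedged PyTorch sketch of the fusion just described, assuming the three feature maps have already been resized to a common shape; the channel count and SE reduction ratio are illustrative choices, not values from the patent:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFF_SE(nn.Module):
    """Sketch: adaptively fuse 3 same-shape feature maps, then SE reweighting."""
    def __init__(self, channels: int = 256, reduction: int = 16):
        super().__init__()
        # One 1x1 conv per level produces a spatial weight map (alpha, beta, gamma)
        self.weight_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(3))
        # SE block: squeeze (global pooling) + excitation (two FC layers)
        self.se = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x1, x2, x3):
        feats = [x1, x2, x3]
        logits = torch.cat([conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        w = F.softmax(logits, dim=1)                  # alpha + beta + gamma = 1 per pixel
        fused = sum(w[:, i:i + 1] * f for i, f in enumerate(feats))
        s = self.se(fused.mean(dim=(2, 3)))           # per-channel importance weights
        return fused * s[:, :, None, None]            # reweight the channels

# Example: fuse three 256-channel maps at 32x32
m = ASFF_SE()
y = m(*(torch.randn(1, 256, 32, 32) for _ in range(3)))
print(y.shape)   # torch.Size([1, 256, 32, 32])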
In another implementation of the present invention, locating and identifying the food based on the color segmentation technique and the deep learning model to obtain the position and type of the food comprises: the intelligent meal-assisting robot acquires a current food scene image through a visual sensor; food in the current food scene image is detected based on the color segmentation technique, and the position information corresponding to the detected food is determined; and the detected food and its position information are input into the deep learning model to obtain the type of the food.
To achieve an efficient food-taking and feeding process, a camera is installed above the dinner plate to image the food on the table. The images are captured by the camera, then processed and recognized, so that the meal-assisting robot can quickly and accurately determine the positions and types of the food, achieving efficient food-taking and feeding.
Specifically, since the color of the food differs from the color distribution of the table, food pixels can be located in the image by color segmentation; the meal-assisting robot is trained on pixel sample images containing different foods and backgrounds, so that food position information can be accurately distinguished from the background. An image-segmentation algorithm based on Euclidean distance can be employed: let the average color be represented by an RGB vector α; the purpose of segmentation is then to classify each RGB pixel z in the image, i.e., to decide whether its color lies in a specified region. The Euclidean distance is

D(z, α) = ‖z − α‖ = [(z_R − α_R)² + (z_G − α_G)² + (z_B − α_B)²]^(1/2),

where the subscripts R, G, B denote the RGB components of the vectors α and z. The locus of points satisfying D(z, α) ≤ D₀ is a solid sphere of radius D₀: points inside and on the sphere satisfy the specified color criterion, while points outside do not. Binarizing these two sets of points in the image produces a binary segmented image. Food identification through color segmentation thus provides strong support for distinguishing food from background for the meal-assisting robot.
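A minimal NumPy sketch of this thresholded segmentation; the reference color α and radius D₀ below are illustrative values:

import numpy as np

def color_sphere_mask(image: np.ndarray, alpha: np.ndarray, d0: float) -> np.ndarray:
    """Binary mask of pixels whose RGB color lies within distance d0 of alpha."""
    dist = np.linalg.norm(image.astype(float) - alpha, axis=-1)   # D(z, alpha)
    return (dist <= d0).astype(np.uint8)                          # 1 = food pixel

# Example: segment pixels near an illustrative average food color
img = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)
mask = color_sphere_mask(img, alpha=np.array([210.0, 170.0, 80.0]), d0=50.0)
print(int(mask.sum()), "food pixels")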
Further, the food is identified using a deep network and classified by its shape and texture. The model is optimized with two-stage training: first, it is trained on the training set to optimize its parameters and improve recognition accuracy; then it is tuned on the validation set to select the best parameter configuration. Furthermore, by means of a pre-trained network, a learning algorithm is developed that enables the meal-assisting robot to associate new objects with the most similar class labels.
In another implementation of the present invention, executing the meal-assisting task according to the grabbing trajectory and the food-taking mode comprises: the end of the intelligent meal-assisting robot's mechanical arm grabs the food according to the grabbing trajectory and the food-taking mode; and the grabbing trajectory and food-taking mode of the mechanical arm are adjusted according to the characteristic information of the grabbed food to complete the meal-assisting task, wherein the characteristic information comprises the shape, size and weight of the food.
Illustratively, after the type and position of the food are identified, the meal-assisting robot selects the corresponding trajectory, moves the chopstick tips to the item to pick up a bite-sized portion, and then brings the food to the user's mouth at a uniform speed. This ensures food hygiene and safety while improving the user's dining experience. On this basis, more flexible feeding modes can be added to meet the needs of different users.
Further, after the user eats a bite of food, the mechanical arm automatically returns to its original position and the camera image is updated for the next food-taking and feeding cycle. After finishing one food category, the user can select the next, making food selection convenient, keeping the whole process continuous and fluent, and improving the user's dining experience.
Preferably, the mechanical arm performs the corresponding grabbing action according to the type and position information of the food, completing the grabbing and carrying of the food. For the different food-taking modes, the mechanical arm uses forward and inverse kinematic solutions to generate the corresponding motions. The inverse solution uses a numerical method, which gives it wide applicability: assuming the robot moves step by step from the initial configuration q_s to the target configuration q_e, the step at each iteration is

Δθ = α Jᵀ(θ) Δx    (Jacobian transpose method)

Δθ = α Jᵀ(θ) (J(θ) Jᵀ(θ))⁻¹ Δx    (pseudo-inverse method)
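A NumPy sketch of these two update rules on a planar two-link arm follows; the link lengths, step size α, and iteration count are illustrative assumptions:

import numpy as np

L1, L2 = 0.3, 0.25        # illustrative link lengths (meters)

def fk(theta):
    """End-effector position of a planar 2-link arm."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)])

def jacobian(theta):
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t12), -L2 * np.sin(t12)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t12),  L2 * np.cos(t12)]])

def solve_ik(target, theta, alpha=0.5, iters=200, use_pseudoinverse=True):
    """Step from q_s toward q_e with the transpose or pseudo-inverse rule."""
    for _ in range(iters):
        dx = target - fk(theta)               # remaining task-space error
        J = jacobian(theta)
        if use_pseudoinverse:
            dtheta = alpha * J.T @ np.linalg.inv(J @ J.T) @ dx
        else:
            dtheta = alpha * J.T @ dx         # Jacobian transpose rule
        theta = theta + dtheta
    return theta

theta = solve_ik(np.array([0.35, 0.20]), np.array([0.3, 0.3]))
print(fk(theta))   # close to the target position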
The food-taking mode of the end effector is optimized according to the food characteristics, guaranteeing a stable grabbing and carrying process.
In another implementation of the present invention, the method further comprises: dividing the food targets to be identified in the food sample images into five categories for data annotation; labeling images of slippery foods that are difficult to handle as the "piercing" type; labeling images of soft, large foods as the "cutting" type; labeling images of foods to be mixed as the "stirring" type; labeling images of easily clamped foods as the "clamping" type; and labeling images of granular foods as the "scooping" type.
For example, in image annotation, each kind of food is labeled with its minimum circumscribed rectangle, ensuring that as little background as possible is contained in the bounding box. According to the actual situations the meal-assisting robot encounters when grabbing food, the food targets to be identified in the images are divided into five categories for data annotation, as shown in fig. 4. The specific classification rules are: images of slippery foods that are difficult to handle, such as quail eggs, are labeled as the "piercing" type; images of soft, oversized foods, such as sweet potatoes, as the "cutting" type; images of foods to be mixed as the "stirring" type; images of easily clamped foods, such as shredded potatoes, as the "clamping" type; and images of rice and similar dishes as the "scooping" type. Model training: using the annotated image data, a deep learning model can be trained to identify the food categories. Training is performed with a deep-learning framework: a suitable convolutional neural network (CNN) architecture is selected, and a large number of annotated images are used for training and validation to improve the model's accuracy and robustness.
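An illustrative annotation record under these rules, using the minimum circumscribed rectangle as the bounding box; the JSON field names and file name are our assumptions, not a format given in the patent:

import json

# The five action-oriented label categories (fig. 4)
CATEGORIES = ["piercing", "cutting", "stirring", "clamping", "scooping"]

# One illustrative annotation: minimum circumscribed rectangles plus categories
annotation = {
    "image": "canteen_dish_0001.jpg",        # hypothetical file name
    "objects": [
        {"bbox": [112, 64, 188, 140],        # [x_min, y_min, x_max, y_max]
         "category": "piercing"},            # e.g. quail egg: slippery food
        {"bbox": [30, 22, 95, 150],
         "category": "clamping"},            # e.g. shredded potatoes
    ],
}

assert all(o["category"] in CATEGORIES for o in annotation["objects"])
print(json.dumps(annotation, indent=2))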
In another implementation of the present invention, obtaining the single execution result of the meal-assisting task through the visual feedback system comprises: after a single meal-assisting operation is completed, the visual feedback system detects, with a detection algorithm, whether food is present on the dinner plate and on the effector, and determines the relative distribution of the food. Adjusting the grabbing trajectory according to the execution result until the meal-assisting task is completed comprises: adjusting the grabbing trajectory according to the presence and relative distribution of the food until the meal-assisting task is completed, wherein the grabbing trajectory comprises the position, direction and actions of the mechanical arm.
Illustratively, the information derived from visual perception is mapped to tasks performed by the mechanical arm. In this process, the meal-assisting robot adjusts as follows. First, the plate and the effector are checked by a detection algorithm for the presence of food; the goal of this stage is to determine whether food is present in the current environment, its relative distribution, and whether food remains on the effector across multiple executions. Second, the visually perceived information is translated into the specific tasks the mechanical arm needs to perform, which involves adjusting the arm's position, direction and actions to better match the currently detected food distribution. In this way the robot can plan its motions accurately and in real time from visual information, picking up food effectively, reducing the risk of dropping it, and improving feeding efficiency.
Preferably, in the vision-based feedback system: the system detects whether enough food is supplied; when there is no food on the dinner plate, the mechanical arm does not proceed. It detects whether there is enough food on the effector; the detection algorithm can be adjusted to specify the desired amount of food. To detect whether the user has eaten the food, a two-class classifier ("food present" / "food absent") is used, and the color histogram of the end-effector image informs the selection of the "enough food" threshold. If the end effector is detected not to have obtained enough food, the meal-assisting robot re-scoops; if food is detected on the dinner plate but the end effector repeatedly fails to pick any up, the robot changes the position of the bowl and, further, the scooping mode. This vision-based feedback ensures that the robot adjusts its behavior in real time according to the food supply, improving feeding efficiency and reducing food residue.
In the intelligent meal-assisting robot control method for improving the net food rate, the invention provides an intelligent meal-assisting robot system, and a control method therefor, that comprehensively apply deep learning, food-taking-mode optimization and a vision-based feedback system. Food color recognition, characteristic analysis, food-taking-mode optimization and real-time feedback adjustment together reduce food residue and improve the feeding rate. This innovative approach has broad application prospects in the field of meal-assisting robots and offers positive inspiration for the development of future intelligent robot systems.
In another aspect of the invention, there is provided an intelligent meal-assisting robot control system for improving the net food rate, comprising:
An identification subsystem: used to locate and identify food based on a color segmentation technique and a deep learning model to obtain the position and type of the food.
A grabbing subsystem: used to formulate a grabbing trajectory and a food-taking mode according to the position and type of the food, and to execute the meal-assisting task according to the grabbing trajectory and the food-taking mode.
A feedback subsystem: used to obtain a single execution result of the meal-assisting task through a visual feedback system, and to adjust the grabbing trajectory according to the single execution result until the meal-assisting task is completed.
In the intelligent meal-assisting robot control system for improving the net food rate, the mechanical arm's food-taking mode is selected according to the characteristics of each food to be eaten, and the motion trajectory of the carrying process is determined accordingly, so that food spillage and residue are reduced as much as possible. Specifically, computer vision and deep learning are used to identify different kinds of food, and a food-taking mode and trajectory are designed for the characteristics of each food: for fragile foods, a slow and steady motion can be adopted; for foods that are easily left behind, a rotating or vibrating motion can be adopted. In addition, during actual operation the state of the food can be monitored in real time and the behavior dynamically adjusted as needed. This makes the meal-assisting robot more intelligent and efficient, improves the net food rate, reduces food residue, increases the accuracy and efficiency with which the robot carries food, and provides a better dining experience for users.
Thus, specific embodiments of the present invention have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
It should be noted that all directional indicators (such as up, down, left, right, front and rear) in the embodiments of the present invention are merely used to explain the relative positional relationships, movement conditions, etc. between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In the description of the present invention, the terms "first," "second," and the like are used merely for convenience in describing the various components or names, and are not to be construed as indicating or implying a sequential relationship, relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
It should be noted that, although specific embodiments of the present invention have been described in detail with reference to the accompanying drawings, this should not be construed as limiting the scope of protection of the present invention. Various modifications and variations that can be made by those skilled in the art without creative effort fall within the protection scope of the present invention as described in the claims.
Examples of embodiments of the present invention are intended to briefly illustrate technical features of embodiments of the present invention so that those skilled in the art may intuitively understand the technical features of the embodiments of the present invention, and are not meant to be undue limitations of the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An intelligent meal-assisting robot control method for improving the net food rate, characterized by comprising the following steps:
locating and identifying food based on a color segmentation technique and a deep learning model to obtain the position and type of the food;
formulating a grabbing trajectory and a food-taking mode according to the position and type of the food;
executing a meal-assisting task according to the grabbing trajectory and the food-taking mode;
obtaining a single execution result of the meal-assisting task through a visual feedback system;
and adjusting the grabbing trajectory according to the single execution result until the meal-assisting task is completed.
2. The method as recited in claim 1, further comprising:
acquiring food sample images and an annotation data set, wherein the annotation data contains information about food types and positions;
training on the food sample images and the annotation data set to obtain the deep learning model.
3. The method of claim 2, wherein the locating and identifying the food based on the color segmentation technique and the deep learning model to obtain the position and type of the food comprises:
the intelligent meal-assisting robot acquiring a current food scene image through a visual sensor;
detecting food in the current food scene image based on the color segmentation technique, and determining the position information corresponding to the detected food;
and inputting the detected food and the corresponding position information into the deep learning model to obtain the type of the food.
4. The method of claim 1, wherein the executing a meal-assisting task according to the grabbing trajectory and the food-taking mode comprises:
the end of the mechanical arm of the intelligent meal-assisting robot grabbing the food according to the grabbing trajectory and the food-taking mode;
and adjusting the grabbing trajectory and food-taking mode of the mechanical arm according to the characteristic information of the grabbed food to complete the meal-assisting task, wherein the characteristic information comprises the shape, size and weight of the food.
5. The method as recited in claim 2, further comprising:
dividing the food targets to be identified in the food sample images into five categories for data annotation;
labeling images of slippery foods that are difficult to handle as the "piercing" type;
labeling images of soft, large foods as the "cutting" type;
labeling images of foods to be mixed as the "stirring" type;
labeling images of easily clamped foods as the "clamping" type;
and labeling images of granular foods as the "scooping" type.
6. The method according to claim 3, wherein the obtaining, through a visual feedback system, a single execution result of the meal-assisting task comprises:
after a single meal-assisting operation is completed, the visual feedback system detecting, with a detection algorithm, the presence of food on the dinner plate and the effector, and determining the relative distribution of the food;
and wherein the adjusting the grabbing trajectory according to the execution result until the meal-assisting task is completed comprises:
adjusting the grabbing trajectory according to the presence and relative distribution of the food until the meal-assisting task is completed, wherein the grabbing trajectory comprises the position, direction and actions of the mechanical arm.
7. An intelligent meal-assisting robot control system for improving the net food rate, characterized by comprising:
an identification subsystem: used to locate and identify food based on a color segmentation technique and a deep learning model to obtain the position and type of the food;
a grabbing subsystem: used to formulate a grabbing trajectory and a food-taking mode according to the position and type of the food, and to execute a meal-assisting task according to the grabbing trajectory and the food-taking mode;
and a feedback subsystem: used to obtain a single execution result of the meal-assisting task through a visual feedback system, and to adjust the grabbing trajectory according to the single execution result until the meal-assisting task is completed.
CN202311415840.XA 2023-10-30 2023-10-30 Intelligent meal-assisting robot control method and system for improving net food rate Pending CN117415831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311415840.XA CN117415831A (en) 2023-10-30 2023-10-30 Intelligent meal-assisting robot control method and system for improving net food rate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311415840.XA CN117415831A (en) 2023-10-30 2023-10-30 Intelligent meal-assisting robot control method and system for improving net food rate

Publications (1)

Publication Number Publication Date
CN117415831A 2024-01-19

Family

ID=89532183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311415840.XA Pending CN117415831A (en) 2023-10-30 2023-10-30 Intelligent meal-assisting robot control method and system for improving net food rate

Country Status (1)

Country Link
CN (1) CN117415831A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination