WO2020057455A1 - Food material processing robot control method, apparatus, system, storage medium and device - Google Patents

Food material processing robot control method, apparatus, system, storage medium and device

Info

Publication number
WO2020057455A1
Authority
WO
WIPO (PCT)
Prior art keywords
food
processed
processing
information
food processing
Prior art date
Application number
PCT/CN2019/105912
Other languages
English (en)
French (fr)
Inventor
何德裕
朱文飞
何国斌
Original Assignee
鲁班嫡系机器人(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 鲁班嫡系机器人(深圳)有限公司
Publication of WO2020057455A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control; multi-sensor controlled systems; sensor fusion
    • B25J9/1697: Vision controlled systems

Definitions

  • the present application relates to the technical field of equipment control, and in particular, to a method, device, system, storage medium, and equipment for controlling a food processing robot.
  • a method for controlling a food processing robot, comprising:
  • acquiring an image of the food material to be processed; obtaining, through a neural network model and according to that image, the type information of the food material to be processed and the corresponding food processing information; and outputting a corresponding food processing instruction;
  • the neural network model is obtained by training on food processing images and/or videos, and the food processing instruction is used to instruct the food processing robot to process the food material according to the food processing information in the instruction.
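The claimed control flow, acquire an image, infer the ingredient type and its processing information with a trained model, then emit a processing instruction, can be sketched as follows. This is a minimal illustration: the patent specifies no API, so every name here (`FoodProcessingInstruction`, `model_infer`, `control_step`) is hypothetical, and the table-driven stub stands in for the trained neural network.

```python
from dataclasses import dataclass

@dataclass
class FoodProcessingInstruction:
    ingredient: str   # recognized type of the food material
    method: str       # processing method, e.g. "boil", "fry", "cut"
    time_min: float   # time parameter
    temp_c: float     # temperature parameter
    force_n: float    # strength parameter

def model_infer(image):
    """Stand-in for the trained neural network: maps an image to
    (ingredient type, processing information). A real system would
    run a CNN here; this stub keys on a fake image label."""
    known = {
        "egg_img": ("egg", "fry", 3.0, 160.0, 0.5),
        "noodle_img": ("noodles", "boil", 8.0, 100.0, 0.0),
    }
    return known.get(image)

def control_step(image):
    """One pass of the claimed method: image -> model -> instruction."""
    result = model_infer(image)
    if result is None:
        return None  # per the patent: re-acquire the image and retry
    ingredient, method, t, temp, force = result
    return FoodProcessingInstruction(ingredient, method, t, temp, force)

instr = control_step("noodle_img")
```

When recognition fails the sketch returns `None`, mirroring the patent's rule that the image is re-acquired and recognition repeated until the type is identified.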
  • the training data of the neural network model further includes at least one of the following three items: strength information during food processing, temperature information during food processing, and time information during food processing.
  • the method further includes: acquiring current state information of the food material to be processed;
  • the current state information includes at least one of the following three items: current strength information, current temperature information, and current time information.
  • the obtaining of the type information of the food material to be processed and the corresponding food processing information through a neural network model, based on the image of the food material, includes:
  • obtaining the type information of the food material to be processed through the neural network model, and then obtaining the food processing information of the food material through the neural network model.
  • the food processing information includes a food processing method and a food processing parameter.
  • the food processing parameters include at least one of a time parameter, a temperature parameter, and a strength parameter.
  • the method further includes:
  • the actual processing information includes at least one of an actual processing time, an actual processing intensity, and an actual processing temperature;
  • a food processing result of the food to be processed is determined according to the food processing information and the actual processing information.
  • a food processing robot control device includes:
  • An image acquisition module configured to acquire an image of a food material to be processed
  • a control module configured to obtain, according to an image of the food material to be processed, a food type information of the food material to be processed and corresponding food processing information through a neural network model, and output a corresponding food processing instruction;
  • the neural network model is obtained by training on food processing images and/or videos, and the food processing instruction is used to instruct the food processing robot to process the food material to be processed according to the food processing information in the instruction.
  • control module is further configured to:
  • the current state information includes at least one of the following three items: current strength information, current temperature information, and current time information.
  • control module is further configured to:
  • the food material processing information of the food materials to be processed is obtained through a neural network model.
  • control module is further configured to:
  • the actual processing information includes at least one of an actual processing time, an actual processing intensity, and an actual processing temperature
  • a food processing result of the food to be processed is determined according to the food processing information and the actual processing information.
  • a food processing system includes:
  • a control device configured to obtain an image of the food material to be processed; according to the image of the food material to be processed, to obtain food material type information of the food material to be processed and corresponding food material processing information through a neural network model, and output a corresponding food material processing instruction,
  • the neural network model is obtained by training on food processing images and/or videos;
  • a food processing robot is configured to process the food to be processed according to food processing information in the food processing instruction.
  • it further includes at least one of the following three items:
  • a timer configured to obtain the actual processing time of the food to be processed after the control device outputs a food processing instruction, and send it to the control device;
  • a force sensor configured to obtain current strength information of the food material to be processed before the control device outputs a food processing instruction and send it to the control device, and, after the control device outputs a food processing instruction, to acquire the actual processing intensity of the food material to be processed and send it to the control device;
  • a temperature sensor configured to obtain current temperature information of the food material to be processed before the control device outputs a food processing instruction and send it to the control device, and, after the control device outputs a food processing instruction, to acquire the actual processing temperature of the food material to be processed and send it to the control device.
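The timer, force sensor, and temperature sensor described above all feed actual processing information back to the control device after an instruction has been issued. A minimal sketch of that feedback path, with invented names (`ControlDevice`, `on_timer`, etc.), since the patent does not define one:

```python
from dataclasses import dataclass

@dataclass
class ActualProcessingInfo:
    time_min: float = 0.0   # from the timer
    force_n: float = 0.0    # from the force sensor
    temp_c: float = 0.0     # from the temperature sensor

class ControlDevice:
    """Collects the readings the timer / force sensor / temperature
    sensor send back after a processing instruction is issued."""
    def __init__(self):
        self.actual = ActualProcessingInfo()

    def on_timer(self, minutes):
        self.actual.time_min = minutes

    def on_force(self, newtons):
        self.actual.force_n = newtons

    def on_temp(self, celsius):
        self.actual.temp_c = celsius

dev = ControlDevice()
dev.on_timer(30.0)
dev.on_temp(100.0)
```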
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when the processor executes the computer program:
  • acquiring an image of the food material to be processed; obtaining, through a neural network model and according to that image, the type information of the food material to be processed and the corresponding food processing information; and outputting a corresponding food processing instruction;
  • the neural network model is obtained by training on food processing images and/or videos, and the food processing instruction is used to instruct a food processing robot to process the food material according to the food processing information in the instruction.
  • a computer-readable storage medium stores a computer program thereon, and when the computer program is executed by a processor, the following steps are implemented:
  • acquiring an image of the food material to be processed; obtaining, through a neural network model and according to that image, the type information of the food material to be processed and the corresponding food processing information; and outputting a corresponding food processing instruction;
  • the neural network model is obtained by training on food processing images and/or videos, and the food processing instruction is used to instruct a food processing robot to process the food material according to the food processing information in the instruction.
  • the control method, device, system, storage medium and equipment for the food processing robot described above obtain an image of the food material to be processed; according to that image, they obtain the type information of the food material and the corresponding food processing information through a neural network model, and output the corresponding food processing instruction.
  • the neural network model is obtained by training on food processing images and/or videos.
  • the food processing instruction is used to instruct the food processing robot to process the food material to be processed according to the food processing information in the instruction.
  • when processing food materials, the type of the food material is identified through the neural network model and the corresponding food processing information is obtained.
  • the robot then processes the food material according to that information, that is, different processing processes are performed for different food materials, which makes food processing more flexible;
  • in addition, the neural network model can learn food processing information for a variety of different food materials from different food processing images and/or videos, which broadens the range of application and makes food processing more scientific and reasonable.
  • FIG. 1 is a schematic flowchart of a method for controlling a food processing robot in an embodiment
  • FIG. 2 is a schematic diagram of the structure of a food processing robot in an embodiment
  • FIG. 3 is a schematic diagram of the structure of a food processing robot in another embodiment
  • FIG. 4 is a schematic diagram of the structure of a food processing robot in another embodiment
  • FIG. 5 is a schematic diagram of the structure of a food processing robot in still another embodiment
  • FIG. 6 is a schematic diagram of the structure of a food processing robot in still another embodiment
  • FIG. 7 is a schematic diagram of processing fried eggs in one embodiment
  • FIG. 8 is a schematic flowchart of a method for controlling a food processing robot in another embodiment
  • FIG. 9 is a schematic structural diagram of a control device for a food processing robot in an embodiment
  • FIG. 10 is a schematic structural diagram of a food processing system in an embodiment.
  • a method for controlling a food processing robot includes the following steps:
  • an image of the food materials to be processed is acquired.
  • an image acquisition device is used to obtain an image of the food material to be processed.
  • the acquired image may be a single image, multiple images, or a video composed of multiple frames; a single image saves acquisition time and speeds up image processing, while multiple images or a multi-frame video improve the accuracy of image processing.
  • Step S200: according to the image of the food material to be processed, obtain the type information of the food material and the corresponding food processing information through a neural network model, and output a corresponding food processing instruction.
  • after the image of the food material is obtained, it is subjected to image recognition by the neural network model to obtain the type information of the food material contained in the image, and the corresponding food processing information is then obtained.
  • the food materials to be processed include those commonly used in daily life, among them: grains and oils, such as millet, wheat, barley, corn, mung bean, and peanut; vegetables, such as celery, spinach, cabbage, loofah, cucumber, winter melon, bitter gourd, eggplant, and tomato; meat, such as various fish and poultry; fruits, such as pear, grapefruit, mango, kiwi, banana, orange, strawberry, and watermelon; and other categories.
  • the food processing information includes: cutting (e.g., cutting vegetables or fruits), peeling (e.g., paring potatoes, peeling grapefruit), stirring (e.g., beating egg liquid), wrapping (e.g., dumplings, baozi, steamed buns), stir-frying (e.g., frying vegetables), pan-frying (e.g., frying eggs), deep-frying (e.g., french fries), boiling (e.g., noodles, dumplings), steaming (e.g., steamed buns), and roasting (e.g., roast chicken, roast duck).
  • the neural network model used in this embodiment includes, but is not limited to, a convolutional neural network (CNN) model.
  • the convolutional neural network (CNN) model may include various network structures, such as: LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2, and other network model structures now known or developed in the future.
  • the neural network model used in this embodiment may also be other types of neural network models.
  • the neural network model is obtained by training on food processing images and/or videos.
  • training methods for the model include supervised learning (training on labeled samples), reinforcement learning (training against a defined reward function), and imitation learning.
  • the trained neural network model can be used to identify the kind of food materials and get the corresponding food processing information.
  • the neural network model may be trained by directly using the food processing images and/or videos as its input; alternatively, the pose information of the food processing robot at each image/time point in the food processing images and/or videos may first be extracted, and that pose information then used as the model input to complete training.
  • the training methods in this embodiment are not limited to the above two; other methods that train the neural network model directly or indirectly on food processing images and/or videos may also be used.
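The two training-input routes described above, raw frames versus extracted robot poses, can be sketched as a small preprocessing step. `extract_pose` is a hypothetical stand-in for a real pose-estimation model; the patent does not name one.

```python
def extract_pose(frame):
    """Placeholder for pose estimation: a real system would infer the
    robot's pose from pixels. Here each frame already carries one."""
    return frame["pose"]

def make_training_inputs(video, use_pose=False):
    """Route (1): feed frames directly to the model.
    Route (2): feed per-frame robot poses instead."""
    if use_pose:
        return [extract_pose(f) for f in video]
    return [f["pixels"] for f in video]

# Toy "video": two frames, each with raw pixels and a precomputed pose.
video = [{"pixels": [0, 1], "pose": (0.1, 0.2)},
         {"pixels": [1, 0], "pose": (0.3, 0.4)}]
```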
  • a corresponding food processing instruction is generated according to the food type information and the food processing information and sent to the food processing robot.
  • the food processing instruction is used to instruct the food processing robot to process the food material to be processed according to the food processing information in the instruction.
  • if the neural network model cannot identify the type information of the food material from the acquired image, the image is reacquired and recognition is performed again on the new image, until the type information of the food material to be processed is identified.
  • the type of the food processing robot may depend on the type of the food to be processed.
  • the food processing robot may specifically be a manipulator or a similar structure, so that the food material to be processed can be fixed; it may also include a manipulator and an actuator, as shown in FIG. 3 and FIG. 4. The actuator can be a tool that holds other appliances, or it can itself be a cutting tool (such as a kitchen knife) or a paring tool (such as a paring knife) used to process food materials, as shown in FIG. 5 and FIG. 6;
  • it can also be a structure that cooperates with other food processing equipment (such as bowls and pots) to complete the food processing process, for example a blender that works with a bowl or a spatula that works with a pot.
  • the food processing robot may also be a device that controls the working state of other food processing equipment.
  • the other equipment can be held by the actuator, or fixed directly on the manipulator.
  • This embodiment proposes a method for controlling a food processing robot.
  • the type of the food is identified through a neural network model, and corresponding food processing information is obtained.
  • the robot processes the food material according to the food processing information, that is, different processing processes are performed for different food materials, making food processing more flexible; in addition, the neural network model can learn food processing information for a variety of different food materials from different food processing images and/or videos, which broadens the range of application and makes food processing more scientific and reasonable.
  • the training data of the neural network model further includes at least one of the following three items: strength information during food processing; temperature information during food processing; and time information during food processing.
  • when trained only on images/videos, the information the neural network model can obtain is relatively limited; for example, only the operation trajectory, operation mode, and similar information can be learned. During food processing, completing the processing requires, in addition to the operation trajectory and operation mode, specific operating parameters such as intensity, temperature, and time.
  • the operation parameter includes at least one of information on processing strength of the food, information on processing temperature of the food, and information on processing time of the food.
  • the food processing strength information refers to the force applied to the food material to be processed, for example, how much force is needed to beat an egg so that its shell breaks;
  • the food processing temperature information refers to the temperature required when processing the food material, for example, how high a temperature is needed to cook an egg;
  • the food processing time information refers to the time required when processing the food material, for example, how long an egg must be boiled to be cooked.
  • the training data also includes at least one of the food processing strength information, the food processing temperature information, and the food processing time information.
  • the above parameters are very important in food processing and have a decisive effect on whether the processing is completed. Training the neural network model with them therefore makes the food processing information contained in the output instructions more comprehensive and specific, so that the food processing robot can be better controlled to process the food material to be processed.
  • the method for controlling a food processing robot further includes: acquiring current status information of the food material to be processed; the acquired current status information helps obtain the food processing information required for processing.
  • the current status information includes at least one of the following three items: current strength information, current temperature information, and current time information, where the current strength information refers to the force currently applied to the food material to be processed, the current temperature information refers to the current temperature of the food material, and the current time information refers to the relevant time information of the food material.
  • for example, when cutting, the current strength information is the force with which the robot fixes the food material; since the material needs to be cut, it must be confirmed from the current strength information whether the material will slip or otherwise move during cutting.
  • the obtained current strength information thus helps obtain the cutting strength required for cutting the food material.
  • This embodiment can determine the current status of the food materials by acquiring the current status information of the food materials to be processed, which can help to obtain a better food material processing strategy, and make the output food processing instructions more scientific and reasonable.
  • in step S200, obtaining the type information of the food material to be processed and the corresponding food processing information through the neural network model according to the image specifically includes:
  • obtaining the type information of the food material to be processed through the neural network model according to the image; and obtaining the food processing information of the food material through the neural network model according to the type information and the current status information of the food material.
  • in this way the current status of the food material to be processed can be taken into account, so that a better food processing strategy is obtained and the output food processing instructions are more scientific and reasonable.
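The two-stage inference just described, type from the image first, then processing information from the type plus current state, can be sketched as below. Both "models" are table-driven stand-ins for neural networks, and the grip-adjustment rule is an invented illustration of how current strength information might influence the output.

```python
def infer_type(image):
    """Stage 1 stand-in: ingredient type from the image."""
    return {"egg_img": "egg", "potato_img": "potato"}.get(image)

def infer_processing(ingredient, current_state):
    """Stage 2 stand-in: processing info from type + current state."""
    base = {"egg": {"method": "fry", "force_n": 0.5},
            "potato": {"method": "peel", "force_n": 2.0}}[ingredient]
    info = dict(base)
    # Hypothetical use of current state: if the current grip is weaker
    # than the required holding force, ask for a firmer hold first so
    # the material does not slip during cutting/peeling.
    if current_state.get("grip_n", 0.0) < info["force_n"]:
        info["adjust_grip"] = True
    return info

info = infer_processing(infer_type("potato_img"), {"grip_n": 1.0})
```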
  • the food processing information includes a food processing method and a food processing parameter.
  • the food processing information obtained through the neural network model includes food processing methods and food processing parameters.
  • the food processing method is the method in which the food is processed.
  • the food processing parameters are the conditions / environment parameters in the food processing process.
  • for different food materials, the processing methods may be the same or different; a single food material may also have multiple processing methods.
  • for example, for an egg, the processing methods include beating, stirring, frying, steaming, and so on.
  • the processing parameters corresponding to beating indicate how to beat the egg so that its shell breaks;
  • the processing parameters corresponding to stirring indicate how to stir so that the egg white and yolk mix uniformly;
  • the processing parameters corresponding to frying indicate how to fry the egg so that it is cooked;
  • the processing parameters corresponding to steaming indicate how to steam the egg so that it is cooked.
  • the food processing information includes a food processing method and a food processing parameter.
  • the food processing robot can process the food according to the food processing method and the food processing parameter, thereby making the food processing process more intelligent.
  • the food processing parameters include at least one of a time parameter, a temperature parameter, and a strength parameter.
  • the time parameter indicates the processing time of the food material to be processed
  • the temperature parameter indicates the ambient temperature of the food material to be processed
  • the strength parameter indicates the strength of the food material to be processed.
  • for different food materials and different processing methods, the food processing parameters used differ: they may be one, two, or all three of the above parameters. It can be understood that the above three are the parameters most commonly used in food processing, and the food processing parameters may also include parameters beyond these three, such as the number of processing repetitions.
  • when beating an egg, the required parameters include the strength parameter, that is, how much force is used to fix the egg and how much force is used to break the shell.
  • when stirring an egg, the required parameters include the strength and time parameters, that is, how much force is used to stir the white and yolk, and how long to stir so that they mix uniformly.
  • when steaming an egg, the required parameters include the temperature and time parameters, that is, how high a temperature is used to steam the egg and how long it takes.
  • when frying an egg, the required parameters include the time, temperature, and strength parameters, that is, how high a temperature is used to fry the egg, how much force is used to flip it during frying, and how long it takes to fry.
  • during food processing, one parameter may change as the other one or two parameters change.
  • corresponding relations between parameters can be added. For example, when a certain food is boiled, its temperature parameter is generally 100 degrees Celsius and its time parameter 30 minutes; in high-altitude regions, however, the boiling point of water drops below 100 degrees Celsius, and the required time parameter then exceeds 30 minutes.
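The coupling between temperature and time in the boiling example can be written as a small function. The linear scaling rule below (extra minutes per degree below 100 °C) is an invented illustration chosen to reproduce the document's own numbers (30 minutes at 100 °C, 45 minutes at 90 °C); it is not a formula from the patent.

```python
def boil_time_minutes(boil_temp_c, base_time=30.0, base_temp=100.0):
    """Required boiling time as a function of the local boiling point.
    Hypothetical rule: +1.5 minutes per degree below 100 degrees C."""
    if boil_temp_c >= base_temp:
        return base_time
    return base_time + 1.5 * (base_temp - boil_temp_c)

t_sea_level = boil_time_minutes(100.0)  # 30.0 minutes
t_altitude = boil_time_minutes(90.0)    # 45.0 minutes
```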
  • the food processing parameters include at least one of a time parameter, a temperature parameter, and a force parameter.
  • the food processing robot may process the food based on one or more of the above parameters, thereby making the food processing process more scientific and reasonable.
  • the method for controlling a food processing robot further includes steps S300 and S400.
  • Step S300 Acquire actual processing information of the food materials to be processed.
  • the actual processing information includes at least one of an actual processing time, an actual processing intensity, and an actual processing temperature.
  • which actual processing information is acquired is selected according to the food processing parameters included in the food processing information: if the parameters include only the strength parameter, only the actual processing strength of the food material is acquired; if they include both the strength and time parameters, the actual processing strength and the actual processing time are both acquired.
  • Step S400: determine the food processing result of the food material to be processed according to the food processing information and the actual processing information. After the actual processing information is obtained, it is compared with the food processing information to determine whether processing is complete. For example, suppose the food processing method is boiling and the parameters are a time parameter of 30 minutes and a temperature parameter of 100 degrees Celsius. If the actual processing temperature is 100 degrees Celsius and the actual processing time is 20 minutes, the food material is judged not yet processed; if the actual processing time is 30 minutes, it is judged processed and the processing ends.
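The completion check in step S400 is a comparison of required parameters against actual readings. A minimal sketch, with illustrative field names, using the boiling example from the text (100 °C for 30 minutes):

```python
def processing_done(required, actual):
    """Compare actual processing info against required parameters.
    Both are dicts with optional 'temp_c' and 'time_min' keys; only
    parameters present in `required` are checked."""
    if "temp_c" in required and actual.get("temp_c", 0) < required["temp_c"]:
        return False
    if "time_min" in required and actual.get("time_min", 0) < required["time_min"]:
        return False
    return True

req = {"temp_c": 100.0, "time_min": 30.0}
not_done = processing_done(req, {"temp_c": 100.0, "time_min": 20.0})
done = processing_done(req, {"temp_c": 100.0, "time_min": 30.0})
```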
  • the corresponding ingredients processing parameters may be re-acquired through a neural network model, and the ingredients processing instructions may be re-output.
  • the food processing method is boiling, and the food processing parameters include time parameters, specifically 30 minutes, and temperature parameters, specifically 100 degrees Celsius.
  • if the actual processing temperature is only 90 degrees Celsius (the boiling point of water is lower because of altitude),
  • the corresponding food processing parameters are re-obtained through the neural network model, that is, the boiling time required at 90 degrees Celsius (for example, 45 minutes), and the food processing instruction is re-output.
  • in the re-output instruction, the temperature parameter is 90 degrees Celsius and the time parameter is 45 minutes.
  • the method further includes obtaining the actual processing information of the food material to be processed and determining its food processing result according to the food processing information and the actual processing information, which ensures that the processing result meets the requirements and makes the food processing process more scientific and reasonable.
  • a food processing robot for cutting vegetables is taken as an example.
  • the robot includes a manipulator and an actuator: the actuator holds a cutting tool, the manipulator fixes the food material to be processed, and the actuator controls the cutting tool to cut the vegetables.
  • when training the neural network model, the training data includes images/video of the food material and the force information fed back by a force sensor mounted on the cutting board: the operation trajectory and related information for cutting can be learned from the images/video, and the specific force required to complete the cut can be learned from the force-sensor feedback.
  • the number of neural network models used is not limited: during food processing, a single neural network model may execute the food processing process, or multiple neural network models may cooperate to execute it.
  • in the examples above, a single neural network model performs the processing.
  • when frying eggs, for example, neural network model 1 may recognize the egg, model 2 control flipping the egg, model 3 control the frying time, and model 4 control the frying temperature. Through the cooperation of different neural network models, the food processing process can be made more accurate.
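The cooperating-models idea for frying eggs can be sketched with each "model" as a trivial function; in a real system each would be a separately trained network, and all thresholds below (flip after 1.5 minutes, done by 3 minutes, pan between 140 and 180 °C) are invented for illustration.

```python
# Four stand-ins for the cooperating models in the fried-egg example.
def recognize(image):          # model 1: is there an egg?
    return image == "egg_img"

def should_flip(elapsed_min):  # model 2: time to flip?
    return elapsed_min >= 1.5

def time_ok(elapsed_min):      # model 3: still within frying time?
    return elapsed_min <= 3.0

def temp_ok(temp_c):           # model 4: pan temperature in range?
    return 140.0 <= temp_c <= 180.0

def fry_egg_step(image, elapsed_min, temp_c):
    """One decision step combining the four models' outputs."""
    if not recognize(image):
        return "reacquire image"
    if not temp_ok(temp_c):
        return "adjust temperature"
    if should_flip(elapsed_min) and time_ok(elapsed_min):
        return "flip egg"
    return "continue" if time_ok(elapsed_min) else "done"

action = fry_egg_step("egg_img", 2.0, 160.0)
```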
  • although the steps in the flowcharts of FIG. 1 and FIG. 8 are displayed in the order indicated by the arrows, they are not necessarily performed in that order: unless explicitly stated herein, their execution order is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in FIG. 1 and FIG. 8 may include multiple sub-steps or stages, which are not necessarily performed at the same time or sequentially; they may be performed in turn or alternately with at least part of another step, or with sub-steps or stages of another step.
  • a food processing robot control device includes an image acquisition module 100 and a control module 200.
  • the image acquisition module 100 is configured to acquire an image of a food material to be processed.
  • The image acquired by the image acquisition module 100 may be a single image, multiple images, or a video composed of multiple frames. A single image saves acquisition time and speeds up image processing; multiple images or a multi-frame video improve the accuracy of image processing.
  • The control module 200 is configured to obtain, from the image of the food material to be processed and through a neural network model, the food type information of the food material and the corresponding food processing information, and to output a corresponding food processing instruction.
  • The neural network model is trained on food processing images and/or videos, and the food processing instruction is used to instruct the food processing robot to process the food material to be processed according to the food processing information contained in the instruction.
  • The image acquisition module 100 is further configured to locate the food material to be processed while the food processing robot carries out the processing. For example, referring to FIG. 7, during egg frying the image acquisition module 100 also obtains the position of the egg, which makes it easier for the robot to flip it.
  • This embodiment proposes a control device for a food processing robot.
  • When processing a food material, the device identifies its type through a neural network model and obtains the corresponding food processing information.
  • The food processing robot then processes the food material according to that information, that is, it executes different processing procedures for different food materials, which makes food processing more flexible. In addition, the neural network model can learn the processing information of many different food materials from different food processing images and/or videos, which broadens the range of application and makes the processing of food materials more scientific and reasonable.
  • The control module 200 is further configured to obtain the current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, and current time information.
  • The control module 200 is further configured to obtain the food type information of the food material to be processed through a neural network model according to its image, and then to obtain the food processing information of the food material through the neural network model according to its type information and its current status information.
  • The control module 200 is further configured to obtain the actual processing information of the food material to be processed, the actual processing information including at least one of the actual processing time, the actual processing force, and the actual processing temperature, and to determine the food processing result of the food material according to the food processing information and the actual processing information.
  • After the food processing instruction is output, obtaining the actual processing information and determining the processing result from it together with the food processing information ensures that the result meets the requirements and makes the food processing procedure more scientific and reasonable.
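The determination of the processing result, comparing the target values in the food processing information with the measured actual values, can be sketched as a completion check. The dictionary keys and the "met or reached" rule are illustrative assumptions.

```python
# Hedged sketch: decide whether processing is complete by comparing the
# target parameters from the food processing information with the
# actual measured processing information.

def processing_complete(target: dict, actual: dict) -> bool:
    """Return True when every parameter named in `target` has been
    reached by the corresponding `actual` measurement."""
    for key, required in target.items():
        measured = actual.get(key)
        if measured is None or measured < required:
            return False
    return True
```

With the boiling example used later in the description, a target of 100 degrees for 30 minutes is not complete after only 20 minutes, and is complete at 30 minutes.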
  • Each module in the above food processing robot control device may be implemented in whole or in part by software, hardware, or a combination thereof.
  • Each module may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and perform the operations corresponding to the module.
  • a food processing system which includes a control device 300 and a food processing robot 400.
  • The control device 300 is configured to obtain an image of the food material to be processed and, according to that image, to obtain through a neural network model the food type information of the food material and the corresponding food processing information, then output a corresponding food processing instruction.
  • The neural network model is trained on food processing images and/or videos;
  • the food processing robot 400 is configured to process the food to be processed according to the food processing information in the food processing instruction.
  • This embodiment proposes a food processing system.
  • When processing a food material, the control device recognizes its type through a neural network model and obtains the corresponding food processing information.
  • The food processing robot then processes the food material according to that information, that is, it executes different processing procedures for different food materials, which makes food processing more flexible. In addition, the neural network model can learn the processing information of many different food materials from different food processing images and/or videos, which broadens the range of application and makes the processing of food materials more scientific and reasonable.
  • the food processing system further includes at least one of the following three items:
  • a timer, configured to obtain the actual processing time of the food material after the control device outputs the food processing instruction, and send it to the control device;
  • a force sensor, configured to obtain the current force information of the food material before the control device outputs the food processing instruction, and the actual processing force after the instruction is output, and send them to the control device;
  • a temperature sensor, configured to obtain the current temperature information of the food material before the control device outputs the food processing instruction, and the actual processing temperature after the instruction is output, and send them to the control device.
  • Obtaining the current status information of the food material before processing helps produce more scientific food processing information.
  • During the actual processing, one or more of the timer, force sensor, and temperature sensor obtain the actual processing information of the food material, and the food processing result is determined from the food processing information and the actual processing information, ensuring that the result meets the requirements and making the food processing procedure more scientific and reasonable.
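The sensor side of the system can be sketched as a polling loop in which a temperature-reading callable stands in for the temperature sensor and a poll counter stands in for the timer. The function name and its consecutive-poll completion rule are illustrative assumptions, not the patent's design.

```python
import time

# Hypothetical sketch: after the control device issues an instruction,
# it polls the temperature sensor and reports completion once the food
# has held the target temperature for the required number of polls.

def monitor(read_temp_c, target_temp_c, required_polls, poll_s=1.0):
    """Poll the temperature sensor; report done once required_polls
    consecutive readings are at or above the target temperature."""
    consecutive = 0
    while consecutive < required_polls:
        if read_temp_c() >= target_temp_c:
            consecutive += 1
        else:
            consecutive = 0       # temperature dropped: restart the count
        time.sleep(poll_s)
    return consecutive
```

A real system would also feed each reading back to the control device so it can re-issue the instruction if the target cannot be met.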
  • Each module in the above control device may be implemented in whole or in part by software, hardware, or a combination thereof.
  • Each module may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and perform the operations corresponding to the module.
  • a computer device which includes a memory and a processor.
  • the memory stores a computer program.
  • When the processor executes the computer program, it implements the following steps: acquiring an image of the food material to be processed;
  • obtaining, through a neural network model and according to the image, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction.
  • The neural network model is trained on food processing images and/or videos.
  • The food processing instruction is used to instruct the food processing robot to process the food material to be processed according to the food processing information contained in the instruction.
  • When the processor executes the computer program, it further implements the following steps: obtaining the current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, and current time information.
  • When the processor executes the computer program, it further implements the following steps: obtaining the food type information of the food material to be processed through a neural network model according to its image; and obtaining the food processing information of the food material through the neural network model according to its type information and its current status information.
  • When the processor executes the computer program, it further implements the following steps: obtaining the actual processing information of the food material to be processed, the actual processing information including at least one of the actual processing time, the actual processing force, and the actual processing temperature; and determining the food processing result of the food material according to the food processing information and the actual processing information.
  • a computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the following steps are performed: obtaining an image of the food material to be processed;
  • obtaining, through a neural network model and according to the image, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction.
  • The neural network model is trained on food processing images and/or videos.
  • The food processing instruction is used to instruct the food processing robot to process the food material to be processed according to the food processing information contained in the instruction.
  • The following steps are further implemented: obtaining the current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, and current time information.
  • The following steps are further implemented: obtaining the food type information of the food material to be processed through a neural network model according to its image; and obtaining the food processing information of the food material through the neural network model according to its type information and its current status information.
  • When executed by the processor, the computer program further implements the following steps: obtaining the actual processing information of the food material to be processed, the actual processing information including at least one of the actual processing time, the actual processing force, and the actual processing temperature; and determining the food processing result of the food material according to the food processing information and the actual processing information.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • General Preparation And Processing Of Foods (AREA)

Abstract

A food-processing robot control method, comprising: acquiring an image of a food material to be processed (S100); obtaining, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction (S200), the neural network model being trained on food processing images and/or videos. When a food material is processed, its type is recognized through the neural network model and the corresponding food processing information is obtained; the robot processes the food material according to that information, that is, it executes different processing procedures for different food materials, which makes food processing more flexible. The processing information of many different food materials can be learned from different food processing images and/or videos, which broadens the range of application and makes food processing more scientific and reasonable.

Description

Food-processing robot control method, device, system, storage medium, and equipment. Technical Field
This application relates to the technical field of equipment control, and in particular to a food-processing robot control method, device, system, storage medium, and equipment.
Background Art
With advances in technology, society as a whole is moving toward intelligence and automation. In food processing, more and more automated equipment (such as food-processing robots) has appeared, making the processing of food materials increasingly convenient.
In conventional technology, when automated equipment is used to process food materials, a fixed food-processing program must first be written into the equipment, which then processes the food materials according to the steps of that fixed program. However, because different food materials are processed in different ways, fixed processing steps cannot suit all food materials, so the processing results for some food materials are unsatisfactory; in addition, when there are many kinds of food materials and processing methods, a large number of processing programs must be written into the equipment, which is a heavy workload.
Summary of the Invention
In view of the problems in the conventional technology, it is necessary to provide a food-processing robot control method, device, system, storage medium, and equipment with a wider range of application and more flexible processing.
A food-processing robot control method, characterized by comprising:
acquiring an image of a food material to be processed;
obtaining, from the image of the food material to be processed and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct the food-processing robot to process the food material to be processed according to the food processing information contained in the instruction.
In one embodiment, the training data of the neural network model further includes at least one of the following three items:
force information during food processing;
temperature information during food processing;
time information during food processing.
In one embodiment, after acquiring the image of the food material to be processed, the method further comprises:
obtaining the current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, current time information.
In one embodiment, obtaining the food type information of the food material to be processed and the corresponding food processing information through the neural network model according to the image comprises:
obtaining the food type information of the food material to be processed through the neural network model according to the image;
obtaining the food processing information of the food material to be processed through the neural network model according to its food type information and its current status information.
In one embodiment, the food processing information includes a food processing method and food processing parameters.
In one embodiment, the food processing parameters include at least one of a time parameter, a temperature parameter, and a force parameter.
In one embodiment, after the step of outputting the corresponding food processing instruction, the method further comprises:
obtaining the actual processing information of the food material to be processed, the actual processing information including at least one of the actual processing time, the actual processing force, and the actual processing temperature;
determining the food processing result of the food material to be processed according to the food processing information and the actual processing information.
A food-processing robot control device, comprising:
an image acquisition module, configured to acquire an image of a food material to be processed;
a control module, configured to obtain, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and to output a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct the food-processing robot to process the food material according to the food processing information contained in the instruction.
In one embodiment, the control module is further configured to:
obtain the current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, current time information.
In one embodiment, the control module is further configured to:
obtain the food type information of the food material to be processed through the neural network model according to its image;
obtain the food processing information of the food material to be processed through the neural network model according to its food type information and its current status information.
In one embodiment, the control module is further configured to:
obtain the actual processing information of the food material to be processed, the actual processing information including at least one of the actual processing time, the actual processing force, and the actual processing temperature;
determine the food processing result of the food material to be processed according to the food processing information and the actual processing information.
A food processing system, comprising:
a control device, configured to acquire an image of a food material to be processed, obtain through a neural network model the food type information of the food material and the corresponding food processing information according to the image, and output a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos;
a food-processing robot, configured to process the food material to be processed according to the food processing information contained in the food processing instruction.
In one embodiment, the system further includes at least one of the following three items:
a timer, configured to obtain the actual processing time of the food material after the control device outputs the food processing instruction, and send it to the control device;
a force sensor, configured to obtain the current force information of the food material before the control device outputs the food processing instruction, and the actual processing force after the instruction is output, and send them to the control device;
a temperature sensor, configured to obtain the current temperature information of the food material before the control device outputs the food processing instruction, and the actual processing temperature after the instruction is output, and send them to the control device.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein when the processor executes the computer program it implements the following steps:
acquiring an image of a food material to be processed;
obtaining, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct a food-processing robot to process the food material according to the food processing information contained in the instruction.
A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor it implements the following steps:
acquiring an image of a food material to be processed;
obtaining, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct a food-processing robot to process the food material according to the food processing information contained in the instruction.
With the above food-processing robot control method, device, system, storage medium, and equipment, an image of the food material to be processed is acquired; the food type information of the food material and the corresponding food processing information are obtained through a neural network model according to the image, and a corresponding food processing instruction is output; the neural network model is trained on food processing images and/or videos, and the instruction instructs the robot to process the food material according to the food processing information it contains. When a food material is processed, its type is recognized through the neural network model and the corresponding processing information is obtained; the robot processes it accordingly, that is, different processing procedures are executed for different food materials, making food processing more flexible. In addition, the neural network model can learn the processing information of many different food materials from different food processing images and/or videos, which broadens the range of application and makes food processing more scientific and reasonable.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a food-processing robot control method in one embodiment;
FIG. 2 is a schematic diagram of a structural type of a food-processing robot in one embodiment;
FIG. 3 is a schematic diagram of a structural type of a food-processing robot in another embodiment;
FIG. 4 is a schematic diagram of a structural type of a food-processing robot in yet another embodiment;
FIG. 5 is a schematic diagram of a structural type of a food-processing robot in still another embodiment;
FIG. 6 is a schematic diagram of a structural type of a food-processing robot in still another embodiment;
FIG. 7 is a schematic diagram of egg frying in one embodiment;
FIG. 8 is a schematic flowchart of a food-processing robot control method in another embodiment;
FIG. 9 is a schematic structural diagram of a food-processing robot control device in one embodiment;
FIG. 10 is a schematic structural diagram of a food processing system in one embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely explain this application and are not intended to limit it.
In one embodiment, as shown in FIG. 1, a food-processing robot control method is provided, comprising the following steps:
Step S100: acquire an image of the food material to be processed. When there is a food material to be processed, an image of it is first acquired by an image sensor or another device capable of capturing images. The acquired image may be a single image, multiple images, or a video composed of multiple frames. A single image saves acquisition time and speeds up image processing; multiple images or a multi-frame video improve the accuracy of image processing.
Step S200: obtain, from the image of the food material and through a neural network model, the food type information of the food material and the corresponding food processing information, and output a corresponding food processing instruction.
After the image of the food material is obtained, it is recognized through the neural network model to obtain the food type information of the food material it contains, together with the corresponding food processing information. The food materials include those commonly used in daily life: grains and oils, such as millet, wheat, barley, corn, mung beans, and peanuts; vegetables, such as celery, spinach, cabbage, loofah, cucumber, winter melon, bitter melon, eggplant, and tomato; meats, such as various fish and poultry; fruits, such as pear, pomelo, mango, kiwi, banana, orange, strawberry, and watermelon; and other categories. Correspondingly, the food processing information includes: cutting (e.g., vegetables or fruit), paring (e.g., fruit), peeling (scraping, e.g., potatoes; stripping, e.g., pomelo rind), stirring (e.g., beaten egg), wrapping (e.g., dumplings, baozi, mantou), stir-frying, pan-frying (e.g., eggs), deep-frying (e.g., french fries), boiling (e.g., noodles, tangyuan), steaming (e.g., mantou), roasting (e.g., chicken, duck), and so on.
Specifically, the neural network models used in this embodiment include, but are not limited to, convolutional neural network (CNN) models, which may adopt various network structures, such as LeNet, AlexNet, ZFNet, VGG, GoogLeNet, Residual Net, DenseNet, R-CNN, SPP-NET, Fast-RCNN, Faster-RCNN, FCN, Mask-RCNN, YOLO, SSD, YOLO2, and other network model structures now known or developed in the future. The neural network model used in this embodiment may also be of another type; it is trained on food processing images and/or videos, and the training methods include supervised learning (training from samples), reinforcement learning (defining a reward function), and imitation learning. The trained neural network model can recognize food type information and obtain the corresponding food processing information.
Further, when the neural network model is trained on food processing images and/or videos, the images and/or videos may be fed directly as the model input to complete the training; alternatively, the pose information of the food-processing robot in each image or at each time point may first be extracted from the images and/or videos, and the robot's pose information then used as the model input. It will be understood that the training methods used in this embodiment are not limited to these two; other methods that train the model directly or indirectly on food processing images and/or videos may also be used.
After the food type information and the corresponding food processing information are obtained, a corresponding food processing instruction is generated from them and sent to the food-processing robot; the instruction instructs the robot to process the food material according to the food processing information it contains.
Further, if the neural network model cannot recognize the food type from the acquired image, the image of the food material is re-acquired and recognition is performed again on the new image until the food type information is recognized.
It should be noted that this embodiment does not limit the type of the food-processing robot, which may vary with the kind of food material to be processed. For example, as shown in FIG. 2, the robot may be a structure such as a manipulator, which can hold the food material in place. The robot may also include a manipulator and an end effector, as shown in FIGS. 3 and 4; the end effector may be a tool that grips other utensils, or a structure that can itself process the food material, such as a cutting tool (e.g., a knife) or a paring tool (e.g., a peeler), as shown in FIGS. 5 and 6, or a structure that cooperates with other food-processing devices (such as bowls and pans) to complete the processing together (e.g., a whisk working with a bowl, or a spatula working with a pan). The robot may also be a device/equipment that controls the working state of other food-processing devices, such as a device controlling the baking time of an oven, or a device controlling the cooking temperature of a wok.
Further, when the food-processing robot includes at least a manipulator and an end effector, other utensils (such as knives) may be gripped by the end effector or fixed directly to the manipulator.
This embodiment proposes a food-processing robot control method. When a food material is processed, its type is recognized through a neural network model and the corresponding food processing information is obtained; the robot processes the food material according to that information, that is, different processing procedures are executed for different food materials, making food processing more flexible. In addition, the neural network model can learn the processing information of many different food materials from different food processing images and/or videos, which broadens the range of application and makes food processing more scientific and reasonable.
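Step S200, recognizing the food type and producing a processing instruction, can be sketched as a two-stage step. In the patent the processing information itself comes from the neural network model; in this hedged sketch a lookup table stands in for that stage, and all names, foods, and parameter values are illustrative assumptions.

```python
# Hypothetical sketch: map a recognized food type to a processing
# instruction. In the patent both stages are neural-network models;
# the table below is a simplification of the second stage.

PROCESSING_TABLE = {
    # food type -> (processing method, parameters)
    "egg":    ("pan-fry", {"temp_c": 160, "time_s": 120}),
    "potato": ("peel",    {"force_n": 4.0}),
}

def make_instruction(food_type: str):
    """Build the food processing instruction for a recognized type,
    or None when the type is unknown (the image is then re-acquired)."""
    if food_type not in PROCESSING_TABLE:
        return None
    method, params = PROCESSING_TABLE[food_type]
    return {"type": food_type, "method": method, "params": params}
```

The `None` branch mirrors the retry behaviour described above: when recognition fails, the image is acquired again before an instruction can be issued.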
In one embodiment, the training data of the neural network model further includes at least one of the following three items: force information during food processing; temperature information during food processing; time information during food processing. When the model is trained only on food processing images and/or videos, the information it acquires is rather limited: for example, only operation trajectories and operation methods. Yet completing the processing of a food material also requires specific operation parameters, such as force, temperature, and time.
In this embodiment, the operation parameters include at least one of food-processing force information, food-processing temperature information, and food-processing time information. Specifically, the force information refers to the force applied to the food material, for example, how hard to strike an egg so that its shell cracks; the temperature information refers to the temperature needed when processing the food material, for example, how high a temperature is needed to boil an egg until cooked; the time information refers to the time needed when processing the food material, for example, how long to boil an egg until it is cooked.
When the neural network model is trained in this embodiment, the training data further includes at least one of food-processing force information, temperature information, and time information. Because these parameters are very important in food processing and play a decisive role in whether processing is complete, training the model with them makes the food processing information contained in the output instruction more comprehensive and specific, so the food-processing robot can be controlled better.
In one embodiment, after step S100 the control method further comprises: obtaining the current status information of the food material to be processed; the acquired current status information helps obtain the food processing information needed to process it.
The current status information includes at least one of the following three items: current force information, current temperature information, current time information, where the current force information refers to the force already applied to the food material, the current temperature information refers to its current temperature, and the current time information refers to various time information related to it. For example, referring to FIG. 3, the current force information for the food material in FIG. 3 is the force with which the manipulator holds it; since the material is to be cut, the current force information is needed to confirm whether it might slip during cutting, and it also helps determine the cutting force required. As another example, frozen food materials (such as meat) need a thawing stage that room-temperature materials do not, so their processing is more complex and the processing strategy changes accordingly.
By obtaining the current status information of the food material, this embodiment can judge its current state, which helps produce a better processing strategy and makes the output food processing instruction more scientific and reasonable.
In one embodiment, in step S200, obtaining the food type information of the food material and the corresponding food processing information through the neural network model according to the image specifically comprises:
obtaining the food type information of the food material through the neural network model according to the image; then obtaining the food processing information of the food material through the neural network model according to its food type information and its current status information.
After the food type information is obtained, combining it with the current status information makes it possible to judge the current state of the food material, yielding a better processing strategy and a more scientific and reasonable food processing instruction.
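The conditioning of processing information on both the food type and the current state, as in the frozen-meat example above, can be sketched as follows. The function and the thawing rule are hedged stand-ins for the second neural-network stage; the keys and numbers are assumptions.

```python
# Hypothetical sketch: derive processing information from the food
# type plus its current state, e.g. a frozen item needs thawing first.

def processing_info(food_type: str, current_state: dict) -> dict:
    """Stand-in for the second model stage: adjust the processing
    strategy according to the material's current state."""
    info = {"method": "boil", "time_min": 30, "temp_c": 100}
    if food_type == "meat" and current_state.get("temp_c", 20) < 0:
        # frozen material: prepend a thawing step and extend the time
        info["pre_step"] = "thaw"
        info["time_min"] += 20
    return info
```

The same pattern generalizes to the gripping-force example: a measured holding force that risks slippage would change the cutting parameters.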
In one embodiment, the food processing information includes a food processing method and food processing parameters. The processing method is the way in which the food material is processed, and the processing parameters are the condition/environment parameters during processing.
Specifically, different kinds of food materials may share the same processing method or use different ones, and a single food material may have several processing methods. For an egg, for example, the methods include cracking, stirring, pan-frying, and steaming. Correspondingly, the parameters for cracking describe how to strike the egg so that its shell breaks; those for stirring describe how to stir so that the white and yolk mix evenly; those for pan-frying describe how to fry the egg until it is cooked; and those for steaming describe how to steam it until done.
In this embodiment, the food processing information includes a processing method and processing parameters, and the food-processing robot can process the food material according to them, making the processing procedure more intelligent.
In one embodiment, the food processing parameters include at least one of a time parameter, a temperature parameter, and a force parameter, where the time parameter is the time for which the food material is processed, the temperature parameter is the ambient temperature at which it is processed, and the force parameter is the force with which it is processed.
Specifically, different processing methods use different parameters: one, two, or all three of the above. It will be understood that these three are the parameters most commonly used in ordinary food processing; the food processing parameters may also include other parameters used during processing, such as the number of processing repetitions.
For example, referring to FIG. 2, cracking an egg requires a force parameter: how hard to hold the egg, and how hard to strike it so that the shell breaks. Referring to FIG. 5, stirring requires force and time parameters: how hard and how long to stir so that the white and yolk mix evenly. Steaming an egg requires temperature and time parameters: at what temperature and for how long to steam it until done. As shown in FIG. 7, pan-frying an egg requires time, temperature, and force parameters: at what temperature to fry, how much force to use to flip the egg, and how long to fry it until cooked.
It should be noted that when two or more parameters are used, one parameter may vary with another one or two, and the corresponding relationships between parameters can be incorporated when the neural network model is trained. For example, boiling a certain food material normally uses a temperature parameter of 100 degrees Celsius and a time parameter of 30 minutes; at high altitude, however, the boiling point of water drops, so the temperature parameter for boiling that material falls below 100 degrees Celsius and the required time parameter exceeds 30 minutes.
In this embodiment, the food processing parameters include at least one of a time parameter, a temperature parameter, and a force parameter, and the food-processing robot can process the food material according to one or more of them, making the processing procedure more scientific and reasonable.
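The coupling between parameters in the altitude example, a lower boiling temperature requiring a longer boiling time, can be sketched as a simple adjustment rule. The linear 1.5 minutes-per-degree factor is purely an illustrative assumption chosen to reproduce the 90-degree/45-minute example; a trained model would learn this relationship from data.

```python
# Hedged sketch: one processing parameter varying with another. The
# scaling rule is illustrative, not taken from the patent.

def boil_time_min(base_time_min: float, boil_temp_c: float) -> float:
    """Scale the boiling time up as the actual boiling temperature
    falls below the nominal 100 degrees C (1.5 min per degree here)."""
    shortfall = max(0.0, 100.0 - boil_temp_c)
    return base_time_min + 1.5 * shortfall
```

At sea level this returns the nominal 30 minutes; at a 90-degree boiling point it returns 45 minutes, matching the example in the description.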
In one embodiment, as shown in FIG. 8, after the step of outputting the corresponding food processing instruction, the control method further comprises steps S300 and S400.
Step S300: obtain the actual processing information of the food material to be processed, the actual processing information including at least one of the actual processing time, the actual processing force, and the actual processing temperature. Which actual values are acquired is chosen according to the food processing parameters contained in the food processing information of the output instruction. For example, if the parameters include only a force parameter, only the actual processing force is acquired; if they include both force and time parameters, the actual processing force and the actual processing time are acquired together.
Step S400: determine the food processing result of the food material according to the food processing information and the actual processing information. After the actual processing information is obtained, it is compared with the food processing information to judge whether processing is complete. For example, suppose the processing method for a food material is boiling, with a time parameter of 30 minutes and a temperature parameter of 100 degrees Celsius. If the actual processing temperature is 100 degrees Celsius but the actual processing time is 20 minutes, the material is judged not yet finished; if the actual temperature is 100 degrees Celsius and the actual time is 30 minutes, processing is judged complete and ends.
Further, if the actual processing environment cannot meet the requirements of the food processing parameters, the corresponding parameters can be re-obtained through the neural network model and the food processing instruction re-issued. For example, for the boiling method above, if the actual processing temperature is 90 degrees Celsius (because altitude lowers the boiling point of water), the neural network model re-obtains the corresponding parameters, namely the boiling time needed at 90 degrees Celsius (for example, 45 minutes), and re-issues the instruction with a temperature parameter of 90 degrees Celsius and a time parameter of 45 minutes.
After the food processing instruction is output, this embodiment further obtains the actual processing information of the food material and determines the processing result from the food processing information and the actual processing information, ensuring that the result meets the requirements and making the processing procedure more scientific and reasonable.
In one embodiment, take a food-processing robot for cutting vegetables as an example: the robot includes a manipulator and an end effector; the end effector grips a cutting tool, the manipulator fixes the food material to be processed, and the end effector controls the tool to cut it. When the neural network model is trained, the training data includes images/videos of the material being cut and the force information fed back by a force sensor mounted on the cutting board. From the cutting images/videos, the model can learn the operation trajectory and related information for cutting the material; from the force information fed back by the sensor, it can learn the specific force required to complete the cut.
In one embodiment, the number of neural network models used when performing the steps of the control methods provided above is not limited: the processing may be executed by a single neural network model, or by several neural network models cooperating. For example, when cracking an egg, a single model carries out the whole procedure; when frying an egg, neural network model 1 recognizes the egg, model 2 flips it, model 3 controls the frying time, and model 4 controls the frying temperature, and so on. Through the cooperation of different neural network models, food processing can be made more accurate.
It should be understood that although the steps in the flowcharts of FIGS. 1 and 8 are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1 and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; their execution order is not necessarily sequential either, and they may be performed in turn or alternately with at least a part of other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 9, a food-processing robot control device is provided, comprising an image acquisition module 100 and a control module 200.
The image acquisition module 100 is configured to acquire an image of the food material to be processed. The acquired image may be a single image, multiple images, or a video composed of multiple frames; a single image saves acquisition time and speeds up image processing, while multiple images or a multi-frame video improve the accuracy of image processing.
The control module 200 is configured to obtain, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and to output a corresponding food processing instruction; the neural network model is trained on food processing images and/or videos, and the instruction instructs the food-processing robot to process the food material according to the food processing information it contains.
In addition, the image acquisition module 100 is further configured to locate the food material while the robot processes it. For example, referring to FIG. 7, during egg frying the module also obtains the position of the egg, making it easier for the robot to flip it.
This embodiment proposes a food-processing robot control device. When a food material is processed, its type is recognized through a neural network model and the corresponding food processing information is obtained; the robot processes the material accordingly, executing different procedures for different materials, which makes food processing more flexible. In addition, the neural network model can learn the processing information of many different materials from different food processing images and/or videos, broadening the range of application and making processing more scientific and reasonable.
In one embodiment, the control module 200 is further configured to: obtain the current status information of the food material, including at least one of the following three items: current force information, current temperature information, current time information.
In one embodiment, the control module 200 is further configured to: obtain the food type information of the food material through the neural network model according to its image; and obtain the food processing information of the food material through the model according to its type information and its current status information.
In one embodiment, the control module 200 is further configured to: obtain the actual processing information of the food material, including at least one of the actual processing time, the actual processing force, and the actual processing temperature; and determine the food processing result according to the food processing information and the actual processing information.
In this embodiment, after outputting the food processing instruction, the control module 200 further obtains the actual processing information and determines the processing result from it and the food processing information, ensuring that the result meets the requirements and making the processing procedure more scientific and reasonable.
For the specific limitations of the food-processing robot control device, reference may be made to the limitations of the control method above, which are not repeated here. Each module of the device may be implemented wholly or partly by software, hardware, or a combination thereof; each module may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and perform the operations corresponding to the module.
In one embodiment, as shown in FIG. 10, a food processing system is provided, comprising a control device 300 and a food-processing robot 400.
The control device 300 is configured to acquire an image of the food material to be processed; obtain, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information; and output a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos.
The food-processing robot 400 is configured to process the food material according to the food processing information contained in the food processing instruction.
This embodiment proposes a food processing system. When a food material is processed, the control device recognizes its type through a neural network model and obtains the corresponding food processing information, and the robot processes the material accordingly, executing different procedures for different materials, which makes food processing more flexible; in addition, the neural network model can learn the processing information of many different materials from different food processing images and/or videos, broadening the range of application and making processing more scientific and reasonable.
In one embodiment, the food processing system further includes at least one of the following three items:
a timer, configured to obtain the actual processing time of the food material after the control device outputs the food processing instruction, and send it to the control device;
a force sensor, configured to obtain the current force information of the food material before the control device outputs the instruction, and the actual processing force after it is output, and send them to the control device;
a temperature sensor, configured to obtain the current temperature information of the food material before the control device outputs the instruction, and the actual processing temperature after it is output, and send them to the control device.
In this embodiment, the current status information of the food material is obtained before it is processed, which helps produce more scientific food processing information; during the actual processing, one or more of the timer, force sensor, and temperature sensor obtain the actual processing information, and the processing result is determined from the food processing information and the actual processing information, ensuring that the result meets the requirements and making the processing procedure more scientific and reasonable.
For the specific limitations of the control device, reference may be made to the limitations of the control method above, which are not repeated here. Each module of the control device may be implemented wholly or partly by software, hardware, or a combination thereof; each module may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and perform the operations corresponding to the module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program; when the processor executes the program it implements the following steps: acquiring an image of the food material to be processed; obtaining, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the model being trained on food processing images and/or videos and the instruction instructing a food-processing robot to process the material according to the food processing information it contains.
In one embodiment, when the processor executes the program it further implements the following steps: obtaining the current status information of the food material, including at least one of the following three items: current force information, current temperature information, current time information.
In one embodiment, when the processor executes the program it further implements the following steps: obtaining the food type information of the food material through the neural network model according to its image; and obtaining the food processing information of the food material through the model according to its type information and its current status information.
In one embodiment, when the processor executes the program it further implements the following steps: obtaining the actual processing information of the food material, including at least one of the actual processing time, the actual processing force, and the actual processing temperature; and determining the food processing result according to the food processing information and the actual processing information.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program implements the following steps: acquiring an image of the food material to be processed; obtaining, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the model being trained on food processing images and/or videos and the instruction instructing a food-processing robot to process the material according to the food processing information it contains.
In one embodiment, when executed by the processor, the program further implements the following steps: obtaining the current status information of the food material, including at least one of the following three items: current force information, current temperature information, current time information.
In one embodiment, when executed by the processor, the program further implements the following steps: obtaining the food type information of the food material through the neural network model according to its image; and obtaining the food processing information of the food material through the model according to its type information and its current status information.
In one embodiment, when executed by the processor, the program further implements the following steps:
obtaining the actual processing information of the food material, including at least one of the actual processing time, the actual processing force, and the actual processing temperature; and determining the food processing result according to the food processing information and the actual processing information.
Those of ordinary skill in the art will understand that all or part of the procedures of the above method embodiments can be completed by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the procedures of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments have been described; nevertheless, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.

Claims (15)

  1. A food-processing robot control method, characterized by comprising:
    acquiring an image of a food material to be processed;
    obtaining, from the image of the food material to be processed and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct the food-processing robot to process the food material to be processed according to the food processing information in the food processing instruction.
  2. The food-processing robot control method according to claim 1, characterized in that the training data of the neural network model further includes at least one of the following three items:
    force information during food processing;
    temperature information during food processing;
    time information during food processing.
  3. The food-processing robot control method according to claim 1, characterized in that after the acquiring of the image of the food material to be processed, the method further comprises:
    obtaining current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, current time information.
  4. The food-processing robot control method according to claim 3, characterized in that the obtaining, from the image of the food material to be processed and through a neural network model, of the food type information of the food material and the corresponding food processing information comprises:
    obtaining the food type information of the food material to be processed through the neural network model according to the image;
    obtaining the food processing information of the food material to be processed through the neural network model according to its food type information and its current status information.
  5. The food-processing robot control method according to claim 2, characterized in that the food processing information includes a food processing method and food processing parameters.
  6. The food-processing robot control method according to claim 5, characterized in that the food processing parameters include at least one of a time parameter, a temperature parameter, and a force parameter.
  7. The food-processing robot control method according to claim 6, characterized in that after the step of outputting the corresponding food processing instruction, the method further comprises:
    obtaining actual processing information of the food material to be processed, the actual processing information including at least one of an actual processing time, an actual processing force, and an actual processing temperature;
    determining the food processing result of the food material to be processed according to the food processing information and the actual processing information.
  8. A food-processing robot control device, characterized by comprising:
    an image acquisition module, configured to acquire an image of a food material to be processed;
    a control module, configured to obtain, from the image of the food material to be processed and through a neural network model, the food type information of the food material and the corresponding food processing information, and to output a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct the food-processing robot to process the food material to be processed according to the food processing information in the food processing instruction.
  9. The food-processing robot control device according to claim 8, characterized in that the control module is further configured to:
    obtain current status information of the food material to be processed, the current status information including at least one of the following three items: current force information, current temperature information, current time information.
  10. The food-processing robot control device according to claim 9, characterized in that the control module is further configured to:
    obtain the food type information of the food material to be processed through the neural network model according to its image;
    obtain the food processing information of the food material to be processed through the neural network model according to its food type information and its current status information.
  11. The food-processing robot control device according to claim 8, characterized in that the control module is further configured to:
    obtain actual processing information of the food material to be processed, the actual processing information including at least one of an actual processing time, an actual processing force, and an actual processing temperature;
    determine the food processing result of the food material to be processed according to the food processing information and the actual processing information.
  12. A food processing system, characterized by comprising:
    a control device, configured to acquire an image of a food material to be processed; obtain, from the image and through a neural network model, the food type information of the food material and the corresponding food processing information; and output a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos;
    a food-processing robot, configured to process the food material to be processed according to the food processing information in the food processing instruction.
  13. The food processing system according to claim 12, characterized by further comprising at least one of the following three items:
    a timer, configured to obtain the actual processing time of the food material to be processed after the control device outputs the food processing instruction, and send it to the control device;
    a force sensor, configured to obtain the current force information of the food material to be processed before the control device outputs the food processing instruction, and the actual processing force after the instruction is output, and send them to the control device;
    a temperature sensor, configured to obtain the current temperature information of the food material to be processed before the control device outputs the food processing instruction, and the actual processing temperature after the instruction is output, and send them to the control device.
  14. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that when the processor executes the computer program it implements the following steps:
    acquiring an image of a food material to be processed;
    obtaining, from the image of the food material to be processed and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct a food-processing robot to process the food material to be processed according to the food processing information in the food processing instruction.
  15. A computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor it implements the following steps:
    acquiring an image of a food material to be processed;
    obtaining, from the image of the food material to be processed and through a neural network model, the food type information of the food material and the corresponding food processing information, and outputting a corresponding food processing instruction, the neural network model being trained on food processing images and/or videos, and the food processing instruction being used to instruct a food-processing robot to process the food material to be processed according to the food processing information in the food processing instruction.
PCT/CN2019/105912 2018-09-17 2019-09-16 Food-processing robot control method, device, system, storage medium, and equipment WO2020057455A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811081931.3 2018-09-17
CN201811081931.3A CN109434844B (zh) 2018-09-17 Food-processing robot control method, device, system, storage medium, and equipment

Publications (1)

Publication Number Publication Date
WO2020057455A1 true WO2020057455A1 (zh) 2020-03-26

Family

ID=65532869

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105912 WO2020057455A1 (zh) 2019-09-16 Food-processing robot control method, device, system, storage medium, and equipment

Country Status (2)

Country Link
CN (1) CN109434844B (zh)
WO (1) WO2020057455A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
  • CN109434844B (zh) 2018-09-17 2022-06-28 鲁班嫡系机器人(深圳)有限公司 Food-processing robot control method, device, system, storage medium, and equipment
  • CN109998360B (zh) 2019-04-11 2021-03-26 上海长膳智能科技有限公司 Method and device for automatically cooking food
  • CN111151368B (zh) 2020-01-09 2021-04-02 珠海格力电器股份有限公司 Garbage disposal method and system, storage medium, and garbage disposal equipment
  • CN111814862A (zh) 2020-06-30 2020-10-23 平安国际智慧城市科技股份有限公司 Fruit and vegetable recognition method and device
  • CN111914777B (zh) 2020-08-07 2021-07-06 广东工业大学 Method and system for cross-modal recognition of robot instructions

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6468069B2 (en) * 1999-10-25 2002-10-22 Jerome H. Lemelson Automatically optimized combustion control
US6650779B2 (en) * 1999-03-26 2003-11-18 Georgia Tech Research Corp. Method and apparatus for analyzing an image to detect and identify patterns
CN102294695A (zh) * 2010-06-25 2011-12-28 鸿富锦精密工业(深圳)有限公司 机器人标定方法及标定系统
CN104914720A (zh) * 2015-04-16 2015-09-16 贵州省烟草公司遵义市公司 具有自动学习功能的电子鼻智能烘烤控制系统及控制方法
CN105512676A (zh) * 2015-11-30 2016-04-20 华南理工大学 一种智能终端上的食物识别方法
CN106878697A (zh) * 2016-06-29 2017-06-20 鲁班嫡系机器人 一种拍摄方法及其成像方法、装置和设备
CN108098773A (zh) * 2017-12-20 2018-06-01 芜湖哈特机器人产业技术研究院有限公司 一种机器人的分拣控制系统和方法
CN109434844A (zh) * 2018-09-17 2019-03-08 鲁班嫡系机器人(深圳)有限公司 Food-processing robot control method, device, system, storage medium, and equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3536551B2 (ja) * 1996-10-03 2004-06-14 松下電器産業株式会社 Cooking appliance
JP6522488B2 (ja) * 2015-07-31 2019-05-29 ファナック株式会社 Machine learning device for learning workpiece pick-up operations, robot system, and machine learning method
CN105444222B (zh) * 2015-12-11 2017-11-14 美的集团股份有限公司 Cooking control method and system for a microwave oven, cloud server, and microwave oven
CN105867257A (zh) * 2016-05-30 2016-08-17 深圳市泰瑞达科技有限公司 Smart pot and control compilation method therefor
CN106774876B (zh) * 2016-12-12 2020-07-28 快创科技(大连)有限公司 Cooking assistance system based on AR augmented-reality technology and recipe generation
CN106897661B (zh) * 2017-01-05 2020-03-27 合肥美的智能科技有限公司 Intelligent recognition method and system for food-material images, and household appliance
CN108197635B (zh) * 2017-11-29 2020-05-29 珠海格力电器股份有限公司 Method and device for displaying cooking modes, and range hood
CN108303920A (zh) * 2018-01-18 2018-07-20 周晔 Integrated intelligent-appliance management method and system based on food-material recognition
CN108527388A (zh) * 2018-04-20 2018-09-14 成都昂联科技有限公司 Behavior-analysis-based intelligent kitchen robot and operation method thereof

Also Published As

Publication number Publication date
CN109434844B (zh) 2022-06-28
CN109434844A (zh) 2019-03-08

Similar Documents

Publication Publication Date Title
WO2020057455A1 (zh) Food processing robot control method, apparatus, system, storage medium and device
Bollini et al. Interpreting and executing recipes with a cooking robot
US11117253B2 (en) Methods and systems for food preparation in a robotic cooking kitchen
RU2675713C1 (ru) Intelligent apparatus and method for preparing food
US11926036B2 (en) Information processing device and scheduling method
CN112107232A (zh) Cooking control method, apparatus and device, and readable storage medium
Derossi et al. Avenues for non-conventional robotics technology applications in the food industry
WO2021200306A1 (ja) Information processing device, information processing terminal, and information processing method
CN110794718B (zh) Multi-component food cooking method, terminal, and storage medium
AU2019230135A1 (en) System and method for grading and scoring food
JP6541981B2 (ja) Fried surimi (fish paste) food product
Kukharev et al. Research results of the device for drying agricultural crops
US20230169611A1 (en) Methods, systems, and computer readable media for unified production recipe management and nutrition content calculation
KR20240014911A (ko) Frying robot control system that actively adjusts cooking conditions
US20230128796A1 (en) Control apparatus and control method
Demeure et al. What We Learn about Process Specification Languages, from Studying Recipes
CN115291782A (zh) Method, system, computer-readable medium, and electronic device for automatically generating recipes
KR20220039707A (ko) Information processing device, information processing method, cooking robot, cooking method, and cooking utensil
Rahman et al. Food process design: Overview
Dong et al. Chef Curry: A Generative Machine Learning Model for Food Recipes
Hromkovic et al. Algorithmics, or What Have Programming and Baking in Common?

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19862612

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/08/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19862612

Country of ref document: EP

Kind code of ref document: A1