CN112784640A - Menu making method and device and cooking machine - Google Patents

Menu making method and device and cooking machine

Info

Publication number
CN112784640A
CN112784640A (application CN201911086129.8A)
Authority
CN
China
Prior art keywords
food material
cooking
image
menu
material adding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911086129.8A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201911086129.8A priority Critical patent/CN112784640A/en
Publication of CN112784640A publication Critical patent/CN112784640A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47J KITCHEN EQUIPMENT; COFFEE MILLS; SPICE MILLS; APPARATUS FOR MAKING BEVERAGES
    • A47J27/00 Cooking-vessels
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47J KITCHEN EQUIPMENT; COFFEE MILLS; SPICE MILLS; APPARATUS FOR MAKING BEVERAGES
    • A47J36/00 Parts, details or accessories of cooking-vessels
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2643 Oven, cooking

Abstract

The invention relates to the technical field of cooking and provides a menu making method, a menu making device, and a cooking machine. The method comprises the following steps: acquiring a cooking video stream and cooking temperature data of a cooking process; extracting, from the cooking video stream, a food material adding time sequence and the food material adding category corresponding to each food material adding point in the sequence; and recording the food material adding time sequence, the food material adding categories, and the cooking temperature data into a menu template with a standard format to generate the corresponding cooking menu. By acquiring the cooking video stream and cooking temperature data of the cooking process, automatically extracting the food material adding time sequence and the category at each adding point from the video stream, and automatically recording this information into the standard-format menu template, the corresponding cooking menu is generated without any manual recording of cooking process data, which reduces the complexity of the menu making process and improves menu making efficiency.

Description

Menu making method and device and cooking machine
Technical Field
The invention belongs to the technical field of cooking, and particularly relates to a menu making method and device and a cooking machine.
Background
With rising living standards and advances in science and technology, people increasingly use various machines to improve their working efficiency and quality of life. In the field of kitchen appliances, a series of automatic stir-frying machines has appeared in recent years to replace manual stir-frying and improve quality of life.
When a cooking machine operates, it cooks automatically according to a preset menu; whenever a new dish is to be made, an electronic menu corresponding to that dish usually has to be prepared in advance.
In the prior art, an electronic menu is made as follows: a cook manually prepares a dish once on the cooking machine while a recorder manually notes down the entire process data, such as the kinds and order of food materials added, durations, and heating levels; a menu for the dish is then compiled manually from the recorded data. This makes the menu making process very cumbersome, labor-intensive, and inefficient.
Disclosure of Invention
The embodiment of the invention aims to provide a menu making method and device and a cooking machine, and aims to solve the technical problem that the existing menu making process is very complex.
The embodiment of the invention is realized in such a way that a menu making method comprises the following steps:
acquiring cooking video stream and cooking temperature data of a cooking process;
extracting a food material adding time sequence and a food material adding category corresponding to each food material adding point in the food material adding time sequence from the cooking video stream;
and recording the food material adding time sequence, the food material adding category and the cooking temperature data into a menu template with a standard format to generate a corresponding cooking menu.
Still further, the method further comprises:
acquiring the weight of the food materials added corresponding to each food material adding point in the food material adding time sequence;
and recording the weight of the food material into the menu template.
Still further, after the step of generating the corresponding cooking recipe, the method further includes:
and when an editing instruction of the user for the cooking menu is acquired, correcting corresponding information recorded in the cooking menu according to correction information input by the user.
Still further, the method further comprises:
and storing the cooking video stream and the cooking temperature data.
Further, the step of extracting the food material adding time sequence from the cooking video stream comprises:
performing scene change identification based on each frame of image in the cooking video stream;
extracting the two frames of images before and after each scene change, and retaining those pairs in which the position of the stirrer is consistent between the two frames, so as to extract the two key frames corresponding to a food material addition;
and generating the food material adding time sequence according to the time of the key images when all food materials are added in the cooking video stream.
Further, the step of extracting the food material adding category from the cooking video stream comprises:
taking the image of the previous frame of the two frames of key images when the food material is added as a background and the image of the next frame as a foreground;
subtracting the background frame from the foreground frame to obtain a differential image, and preprocessing the differential image to obtain a mask of the newly added food material;
and multiplying the mask of the newly added food material with the foreground image to obtain a segmentation image of the newly added food material, and inputting the segmentation image into a neural network for identification to obtain the category of the newly added food material.
Further, the scene change recognition step includes:
performing feature matching on every two adjacent frames of images to obtain the feature point matching number of every two adjacent frames of images;
calculating, for each adjacent pair of frames, the ratio of the feature point matching number to the number of feature points in the earlier frame of the pair, so as to obtain a plurality of feature point ratios;
and when the ratio of the characteristic points is smaller than a specified threshold value, judging that the scene changes.
In addition, an embodiment of the present invention further provides a menu making apparatus, where the apparatus includes:
the data acquisition module is used for acquiring cooking video stream and cooking temperature data in the cooking process;
the data extraction module is used for extracting a food material adding time sequence and a food material adding category corresponding to each food material adding point in the food material adding time sequence from the cooking video stream;
and the recipe making module is used for recording the food material adding time sequence, the food material adding category and the cooking temperature data into a recipe template with a standard format to generate a corresponding cooking recipe.
Still further, the apparatus further comprises:
the weight acquisition module is used for acquiring the weight of the food materials added corresponding to each food material adding point in the food material adding time sequence;
the recipe making module is further used for recording the weight of the food materials into the recipe template.
Still further, the apparatus further comprises:
and the menu correction module is used for correcting corresponding information recorded in the cooking menu according to correction information input by the user when an editing instruction of the user on the cooking menu is obtained.
Still further, the apparatus further comprises:
and the data holding module is used for storing the cooking video stream and the cooking temperature data.
Further, the data extraction module comprises:
a scene change identification unit for identifying scene changes based on each frame of image in the cooking video stream;
the image extraction unit is used for extracting the two frames of images before and after each scene change and retaining those pairs in which the position of the stirrer is consistent between the two frames, so as to extract the two key frames corresponding to a food material addition;
And the time sequence generating unit is used for generating the food material adding time sequence according to the time of the key images when all food materials are added in the cooking video stream.
Further, the data extraction module further comprises:
the image setting unit is used for taking the earlier of the two key frames at a food material addition as the background and the later frame as the foreground;
the image processing unit is used for subtracting the background frame from the foreground frame to obtain a differential image, and preprocessing the differential image to obtain a mask of the newly added food material;
and the image matching unit is used for multiplying the mask of the newly added food material with the foreground image to obtain a segmentation image of the newly added food material, and inputting the segmentation image into a neural network for identification to obtain the category of the newly added food material.
Further, the scene change recognition unit includes:
the characteristic matching subunit is used for carrying out characteristic matching on each two adjacent frames of images to obtain the matching number of characteristic points of each two adjacent frames of images;
the feature point ratio subunit is used for calculating, for each adjacent pair of frames, the ratio of the feature point matching number to the number of feature points in the earlier frame of the pair, so as to obtain a plurality of feature point ratios;
and the scene change identification subunit is used for judging that the scene changes when the characteristic point ratio is smaller than a specified threshold value.
In addition, the embodiment of the invention also provides a cooking machine, which comprises a cooker, a camera for shooting the video stream of the cooking process, a temperature sensor for collecting the cooking temperature of the cooking process, and the menu making device, wherein the menu making device is respectively connected with the camera and the temperature sensor.
According to the recipe making method provided by the embodiment of the invention, the cooking video stream and the cooking temperature data in the cooking process are obtained, the food material adding time sequence and the food material adding category corresponding to each food material adding point are automatically extracted from the cooking video stream, and the information is automatically recorded into the recipe template in the standard format, so that the corresponding cooking recipe is automatically generated, the manual recording of the cooking process data is not needed, the complexity of the recipe making process is reduced, and the recipe making efficiency is improved.
Drawings
Fig. 1 is a schematic flow chart of a recipe making method according to an embodiment of the present invention;
fig. 2 is a structural diagram of a cooker according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a recipe making method according to a second embodiment of the present invention;
FIG. 4 is a block diagram of a menu making apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic block diagram of a cooker according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The invention provides a menu making method, a device and a cooking machine, aiming at the technical problems of very complex menu making process, labor consumption and low efficiency caused by the fact that the cooking process data are recorded in a purely manual mode.
Example one
Referring to fig. 1, a flowchart of a recipe making method according to a first embodiment of the present invention is schematically shown, and the recipe making method can be applied to a recipe making apparatus, for example, the recipe making apparatus can be a controller of a cooking machine, the recipe making apparatus can be specifically implemented by software and/or hardware, and the method includes steps S01 to S03.
Step S01, a cooking video stream and cooking temperature data of the cooking process are obtained.
The cooking video stream can be captured by a camera, and the cooking temperature data can be collected in real time by a temperature sensor. The cooking temperature data includes the cooking temperature at every moment of the cooking process and may consist of individual temperature values or of a temperature-time curve.
Referring to fig. 2, a structural diagram of a cooking machine according to the first embodiment of the present invention is shown, including a pot 1, a camera 2 disposed above the pot, and a temperature sensor 3 disposed at the bottom of the pot 1. The camera 2 is configured to capture images of the cooking process of dishes in the pot 1 to obtain the cooking video stream, and the temperature sensor 3 is configured to collect the cooking temperature in real time to obtain the cooking temperature data. The invention is not limited to this arrangement: the camera 2 may also be fixedly arranged at any other position from which the cooking process can be filmed, such as the mouth of the pot 1, or it may be the camera of a handheld imaging device (for example, but not limited to, a mobile phone, a video camera, or a tablet), in which case the cooking video stream is transmitted from the handheld device. Likewise, the temperature sensor 3 may be arranged at other positions where the cooking temperature can be collected, such as the mouth or wall of the pot 1; it is preferably placed close to the heater of the pot 1 so that the sensed cooking temperature is more accurate.
Step S02, extracting, from the cooking video stream, a food material adding timing sequence and a food material adding category corresponding to each food material adding point in the food material adding timing sequence.
In some optional embodiments of the present invention, the step of extracting the food material adding timing sequence from the cooking video stream may be specifically implemented as the following refining steps, and the refining step includes:
performing scene change identification based on each frame of image in the cooking video stream;
extracting the two frames of images before and after each scene change, and retaining those pairs in which the position of the stirrer is consistent between the two frames, so as to extract the two key frames corresponding to a food material addition;
And generating the food material adding time sequence according to the time of the key images when all food materials are added in the cooking video stream.
It should be noted that there is inevitably a difference between the cooking images before and after a food material is added. For example, if red pepper is added to the pot at a certain moment, the image captured before the addition contains no feature points corresponding to red pepper, while the image captured afterwards does; the system can therefore detect a scene change from the sudden change in feature points and infer that a food material adding action has occurred. However, the stirring action of the stirrer can also cause scene changes, so the stirrer must be segmented out of the two frames extracted at each scene change, and image processing is used to judge whether its position is consistent in both frames. If the stirrer positions differ, the two frames are not regarded as key frames of a food material addition; if they are consistent, the two frames are taken as the key frames before and after the addition. The time point of each addition can then be determined from the timestamps of these key frames, and the food material adding time sequence is generated accordingly.
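The two-condition key-frame test just described (a scene change that is not explained by stirrer motion) can be sketched in a few lines. This is a minimal pure-Python illustration; the per-event field names are assumptions, and the scene-change and stirrer-segmentation analysis is assumed to have been done upstream:

```python
def extract_adding_times(frame_events):
    """Return the food material adding time sequence (in seconds).

    frame_events: one dict per detected scene change, e.g.
      {"time_s": 12.4, "stirrer_consistent": True}
    where "stirrer_consistent" means the segmented stirrer occupies the
    same position in the frames before and after the change. Only scene
    changes NOT caused by stirrer motion count as food material adding points.
    """
    return sorted(e["time_s"] for e in frame_events if e["stirrer_consistent"])

# Example: three scene changes, one of which is caused by the stirrer moving.
events = [
    {"time_s": 5.0, "stirrer_consistent": True},    # oil added
    {"time_s": 9.2, "stirrer_consistent": False},   # stirring, ignored
    {"time_s": 14.8, "stirrer_consistent": True},   # red pepper added
]
print(extract_adding_times(events))  # [5.0, 14.8]
```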
Further, the scene change identification step may be implemented as the following steps:
performing feature matching on every two adjacent frames of images to obtain the feature point matching number of every two adjacent frames of images;
calculating the ratio of the matching number of each feature point to the number of the feature points of the previous frame of image of the two adjacent frames of images corresponding to the feature point so as to calculate the ratio of a plurality of feature points;
and when the characteristic point ratio is smaller than a specified threshold value, judging that the scene changes.
The threshold may be set to 0.9. In a specific implementation, the SIFT algorithm can be used to extract the number of feature points of each frame and the number of matched feature points between adjacent frames. On this basis, the food material adding time sequence is extracted as follows: feature points are extracted from consecutive frames of the cooking video stream using SIFT and counted for each frame; adjacent frames are then matched and the number of matched feature points is counted; finally, the number of matched feature points is divided by the feature point count of the previous frame, and the scene change condition is judged from this ratio. This yields the scene change key frames around each food material addition; the timestamps of those key frames give the adding times, from which the food material adding time sequence curve is generated. The detailed process is as follows:
The feature points of two adjacent frames (the current frame and the previous frame) are matched using the SIFT algorithm, and the number of matched feature points is denoted b. To reflect scene mutation through the matching rate of adjacent frames, the ratio b / a(n-1) is computed, where a(n-1) is the number of feature points in the previous frame. When b / a(n-1) < 0.9, it is determined that a scene mutation has occurred, and hence that a food material adding action is currently taking place.
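The scene-mutation criterion reduces to one comparison once the feature counts are available. A sketch, assuming the counts come from a SIFT pipeline (in practice, e.g. OpenCV's `cv2.SIFT_create().detectAndCompute` plus a descriptor matcher; that part is omitted here):

```python
def scene_changed(matched_count, prev_feature_count, threshold=0.9):
    """Scene-mutation test: b / a(n-1) < threshold.

    matched_count      -- b, feature points matched between frames n-1 and n
    prev_feature_count -- a(n-1), feature points detected in frame n-1
    """
    if prev_feature_count == 0:
        return False  # nothing to compare against; treat as no change
    return matched_count / prev_feature_count < threshold

# A stable scene keeps almost all features matched; a food material
# addition drops the match rate below the 0.9 threshold.
print(scene_changed(480, 500))  # 0.96 -> False (no change)
print(scene_changed(300, 500))  # 0.60 -> True  (scene mutation)
```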
In some optional embodiments of the present invention, the step of extracting the food material addition category from the cooking video stream may be specifically implemented as the following refining steps, and the refining step includes:
taking the image of the previous frame of the two frames of key images when the food material is added as a background and the image of the next frame as a foreground;
subtracting the background frame from the foreground frame to obtain a differential image, and preprocessing the differential image to obtain a mask of the newly added food material;
and multiplying the mask of the newly added food material with the foreground image to obtain a segmentation image of the newly added food material, and inputting the segmentation image into a neural network for identification to obtain the category of the newly added food material.
The specific process can be as follows. The earlier of the two key frames at a food material addition is taken as the background and the later frame as the foreground, and the background frame is subtracted from the foreground frame to obtain a difference image. The difference image is binarized; a morphological opening is then applied to the binary result to remove small protrusions, followed by a closing operation to fill small holes and gaps and fuse breaks on the boundary, yielding a clean mask of the newly added food material. Multiplying this mask with the foreground frame produces a segmented image of the newly added food material. The segmentation result is input into a neural network, trained on a large number of preset food material images, for target detection and classification; the classification results are counted and sorted in descending order, and the top-1 category (the one with the highest similarity) is output as the final result, giving the category of the newly added food material (its specific name, class, and so on). In this way, the category of the food material added each time is determined from the two key frames before and after each addition, and the food material adding category corresponding to each adding point in the food material adding time sequence is extracted.
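The differencing-and-masking step above can be sketched with NumPy alone. This is an illustrative reduction: the binarization threshold, array shapes, and frame values are assumptions, and the morphological opening/closing described in the text (e.g. `cv2.morphologyEx` in practice) is omitted for brevity:

```python
import numpy as np

def new_food_segmentation(background, foreground, thresh=30):
    """Difference the two key frames and segment the newly added material.

    background, foreground -- HxWx3 uint8 key frames (before / after adding).
    Returns (mask, segmented): a 0/1 mask of the new material and the
    foreground frame with all other pixels zeroed out.
    """
    # Signed difference so dark-on-bright changes are not lost to wraparound.
    diff = np.abs(foreground.astype(np.int16) - background.astype(np.int16))
    mask = (diff.max(axis=-1) > thresh).astype(np.uint8)  # binarized difference
    segmented = foreground * mask[..., None]              # keep only new pixels
    return mask, segmented

# Toy frames: a 4x4 black pot, with a 2x2 red patch "added" in the foreground.
bg = np.zeros((4, 4, 3), dtype=np.uint8)
fg = bg.copy()
fg[1:3, 1:3] = (200, 30, 30)
mask, seg = new_food_segmentation(bg, fg)
print(int(mask.sum()))  # 4 pixels of newly added material
```

The `segmented` array is what would be fed to the classification network.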
In other alternative embodiments of the present invention, an image recognition technique may be used to capture food material images from each frame of the cooking video stream and to identify the food material category of each captured image by image matching. This yields the food material categories contained in every frame; a food material adding action is then detected from changes in those categories, so that the food material adding time sequence and the corresponding adding categories can be extracted.
And step S03, recording the food material adding time sequence, the food material adding category and the cooking temperature data into a menu template with a standard format, and generating a corresponding cooking menu.
The menu template with the standard format can be prepared in advance according to the data to be recorded. When the template is made, a recording area can be set for each data item and a data association established, so that once the data are acquired they are automatically written into the corresponding positions of the template. In addition, different menu templates can be made for different models of cooking machine, and the template matching the model can be called up when a menu is made.
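How the extracted data might land in such a template can be sketched with a small data structure. The field names, units, and model string below are illustrative assumptions, not the patent's actual template format:

```python
from dataclasses import dataclass, field

@dataclass
class CookingRecipe:
    """Standard-format menu template: one slot per data item to record."""
    machine_model: str
    dish_name: str = ""
    steps: list = field(default_factory=list)              # one entry per adding point
    temperature_curve: list = field(default_factory=list)  # (time_s, deg_C) pairs

    def record_adding_point(self, time_s, category, weight_g=None):
        # weight_g stays None when no weight sensor or user input is available
        self.steps.append({"time_s": time_s, "category": category,
                           "weight_g": weight_g})

recipe = CookingRecipe(machine_model="CM-100")          # hypothetical model name
recipe.record_adding_point(5.0, "oil", 10.0)
recipe.record_adding_point(14.8, "red pepper", 2.0)
recipe.temperature_curve = [(0, 25), (5, 120), (15, 180)]
print(len(recipe.steps))  # 2
```

Keeping the template as structured data rather than free text is what makes the later editing step (correcting a category, adjusting a weight) straightforward.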
In addition, it should be noted that the finally generated cooking recipe can be set to an editable mode, so that the user can edit the cooking recipe, such as adding missing or unextractable cooking process data, correcting recipe data errors, adjusting cooking parameters to meet personal tastes, and the like, so as to further refine the cooking recipe.
To sum up, in the recipe making method in this embodiment, by acquiring the cooking video stream and the cooking temperature data of the cooking process, the food material adding timing sequence and the food material adding category corresponding to each food material adding point are automatically extracted from the cooking video stream, and these pieces of information are automatically recorded in the recipe template in the standard format, so as to automatically generate the corresponding cooking recipe, and it is not necessary to manually record the cooking process data, thereby reducing the complexity of the recipe making process and improving the recipe making efficiency.
Example two
Referring to fig. 3, a flowchart of a recipe making method according to a second embodiment of the present invention is schematically shown, and the recipe making method can be applied to a recipe making apparatus, which may be a controller of a cooker, and the recipe making apparatus can be implemented by software and/or hardware, and the method includes steps S11 to S19.
And step S11, acquiring a cooking video stream of the cooking process through a camera above the pot, and acquiring cooking temperature data of the cooking process through a temperature sensor at the bottom of the pot.
Step S12, extracting, from the cooking video stream, a food material adding timing sequence and a food material adding category corresponding to each food material adding point in the food material adding timing sequence.
The specific implementation manner of this step can refer to the corresponding contents in the first embodiment, and is not described herein again.
Step S13, acquiring the weight of the food material added corresponding to each food material adding point in the food material adding sequence.
It should be noted that food materials are generally held in food boxes, and a food box is controlled to tip or invert when its food material is to be added, pouring the contents into the pot. In a specific implementation, as one approach, a weight sensor may be arranged on the food box of each food material, so that the weight of the added material is determined from the decrease in weight of the corresponding box; for example, if pepper is currently being added and the pepper box becomes 2 g lighter, the added weight of pepper is determined to be 2 g. Since step S12 extracts the food material adding category for each adding point, the weight added at each point can be obtained from the weight sensor data of the box corresponding to that category. As another approach, a weight sensor may be arranged at the bottom of the pot, so that the added weight is determined from the increase of weight in the pot. As yet another approach, the user may weigh the food materials manually before cooking and enter their weights, so that when each category addition is detected, the weight previously entered by the user is retrieved automatically.
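The box-weight approach reduces to a before/after difference per adding point. A minimal sketch (the clamping at zero is an added assumption to guard against sensor noise, not something the text specifies):

```python
def added_weight_g(box_weight_before, box_weight_after):
    """Weight poured into the pot = decrease in the food box's weight.

    Clamped at zero so small sensor noise cannot yield a negative addition.
    """
    return max(0.0, box_weight_before - box_weight_after)

# Pepper box reads 12.0 g before the tip-over and 10.0 g after:
print(added_weight_g(12.0, 10.0))  # 2.0
```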
And step S14, recording the food material adding time sequence, the food material adding category, the cooking temperature data and the food material weight into a menu template with a standard format, and generating a corresponding cooking menu.
And step S15, when the editing instruction of the user for the cooking menu is obtained, correcting the corresponding information recorded in the cooking menu according to the correction information input by the user.
It should be noted that the finally generated cooking recipe may be set to an editable mode so that the user can edit it. When an editing instruction for the cooking recipe is received from the user, the corresponding information recorded in the recipe is corrected according to the correction information the user inputs, such as supplementing cooking process data that was missing or could not be extracted, correcting errors in the recipe data, or adjusting cooking parameters to personal taste, thereby further refining the cooking recipe. For example, when the determined food material category is wrong due to the limits of image recognition technology, the category information can be corrected to fix the recipe data error.
And step S16, storing the cooking video stream and the cooking temperature data.
EXAMPLE III
Another aspect of the present invention further provides a menu making apparatus. Referring to fig. 4, a schematic block diagram of a menu making apparatus according to a third embodiment of the present invention, the apparatus may be applied to a cooking machine, for example as the controller of the cooking machine. The menu making apparatus 10 includes:
the data acquisition module 11 is used for acquiring cooking video stream and cooking temperature data in a cooking process;
a data extraction module 12, configured to extract, from the cooking video stream, a food material addition timing sequence and a food material addition category corresponding to each food material addition point in the food material addition timing sequence;
and a recipe making module 13, configured to record the food material adding timing sequence, the food material adding category, and the cooking temperature data in a recipe template with a standard format, and generate a corresponding cooking recipe.
The cooking video stream may be shot by a camera, and the cooking temperature data may be collected in real time by a temperature sensor. The cooking temperature data includes the cooking temperature at each moment of the cooking process, and may be individual temperature values or a temperature-time curve.
In a specific implementation, a camera may be arranged above the pot and a temperature sensor at the bottom of the pot (as shown in fig. 2); the camera is configured to shoot images of the dish being cooked in the pot to obtain the cooking video stream, and the temperature sensor is configured to collect the cooking temperature in real time to obtain the cooking temperature data. The invention is not limited thereto: the camera may also be fixed at any other position from which the cooking process can be shot, such as the pot mouth, or it may be the camera of a handheld device such as, but not limited to, a mobile phone, a camera, or a tablet, in which case the cooking video stream is transmitted from the handheld device. Likewise, the temperature sensor may be arranged at any other position where the cooking temperature can be collected, such as the pot mouth or the pot wall, and is preferably placed close to the heater of the pot so that the sensed cooking temperature is more accurate.
In some optional embodiments of the present invention, the data extraction module 12 may specifically include:
a scene change identification unit for identifying scene changes based on each frame of image in the cooking video stream;
the image extraction unit is used for extracting front and rear two frames of images when each scene changes and extracting front and rear two frames of images with the positions of the stirrer consistent in front and rear in the images so as to extract two frames of key images when the food material is added;
and the time sequence generating unit is used for generating the food material adding time sequence according to the time of the key images when all food materials are added in the cooking video stream.
It should be noted that there is inevitably a difference between the cooking images before and after a food material is added. For example, when red pepper is added to the pot at a certain moment, the image shot before the addition contains no feature points corresponding to red pepper, while the image shot after the addition does, so the system can identify a scene change from the sudden change in feature points and thereby determine that a food material adding action has occurred. However, a scene change can also be caused by the stirring action of the stirrer. The stirrer is therefore segmented from the two frames before and after each detected scene change, and image processing is used to judge whether its position is consistent in the two frames. If the stirrer positions are inconsistent, the two frames are not regarded as key frames of a food material addition; if they are consistent, the two frames are taken as the key frames before and after the addition. The time point of each food material addition can then be determined from the time of its key frames, and the food material adding time sequence is generated accordingly.
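The stirrer-position consistency check above can be sketched as follows. This is a hedged illustration: it assumes binary masks of the stirrer have already been segmented from the two frames (the segmentation itself is not shown), and the pixel tolerance is an assumed value, not from the patent.

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Center of mass (row, col) of a binary stirrer mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def stirrer_consistent(mask_before: np.ndarray, mask_after: np.ndarray,
                       tol_px: float = 3.0) -> bool:
    """True when the stirrer barely moved between the two frames, so the
    detected scene change is attributed to a food material addition."""
    return np.linalg.norm(centroid(mask_before) - centroid(mask_after)) <= tol_px

# Synthetic 10x10 masks: unchanged stirrer vs. a stirrer that moved while stirring.
m1 = np.zeros((10, 10), dtype=bool); m1[2:4, 2:4] = True
m2 = np.zeros((10, 10), dtype=bool); m2[2:4, 2:4] = True   # same position
m3 = np.zeros((10, 10), dtype=bool); m3[6:8, 6:8] = True   # moved position
```

A centroid comparison is only one possible consistency criterion; template matching or IoU of the two masks would serve the same purpose.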
Further, the scene change identification unit may specifically include:
the characteristic matching subunit is used for carrying out characteristic matching on each two adjacent frames of images to obtain the matching number of characteristic points of each two adjacent frames of images;
the characteristic point ratio subunit is used for calculating the ratio of the matching number of each characteristic point to the number of the characteristic points of the previous frame image of the two adjacent frame images corresponding to the matching number of each characteristic point so as to calculate the ratio of a plurality of characteristic points;
and the scene change identification subunit is used for judging that the scene changes when the characteristic point ratio is smaller than a specified threshold value.
Here the threshold is 0.9. In a specific implementation, the SIFT algorithm may be used to extract the feature points of each frame of image and to match the feature points of adjacent frames. On this basis, the extraction of the food material adding time sequence proceeds as follows: SIFT feature points are extracted from successive frames of the cooking video stream and counted separately; adjacent frames are then matched and the number of matched feature points is counted; finally, the match count of the current frame is divided by the feature point count of the previous frame, and the scene change condition is judged from this ratio. This yields the scene-change key frames of each food material addition period; the food material adding times are extracted from the timestamps of these key frames, and the food material adding time sequence curve is generated. The detailed process is as follows:
and (3) matching the feature points of two adjacent frames in the image, namely the current frame and the previous frame, by utilizing an SIFT algorithm, and recording the number of the feature points obtained after the two frames are matched as b. In order to reflect the condition of sudden change of a video scene through the matching rate of two adjacent frames, the matching number b of the feature points is compared with the number a of the feature points of the previous framen-1And (5) making a ratio, and when the ratio is smaller than a specified threshold value of 0.9, determining that the current frame has a scene mutation, so as to determine that the food material throwing action currently occurs.
In some optional embodiments of the present invention, the data extraction module 12 further includes:
the image setting unit is used for taking the image of the previous frame of the two frames of key images as a background and the image of the next frame of key images as a foreground when the food material is added;
the image processing unit is used for subtracting the background frame from the foreground frame to obtain a differential image, and preprocessing the differential image to obtain a mask of the newly added food material;
and the image matching unit is used for multiplying the mask of the newly added food material with the foreground image to obtain a segmentation image of the newly added food material, and inputting the segmentation image into a neural network for identification to obtain the category of the newly added food material.
The specific process may be as follows: the earlier of the two key frames of a food material addition is taken as the background and the later as the foreground; the background frame is subtracted from the foreground frame to obtain a difference image; the difference image is binarized; a morphological opening is applied to the binary result to remove small protrusions, followed by a closing to fill small holes and gaps and fuse breaks on the boundary, yielding a clean mask of the newly added food material; and the mask is multiplied with the foreground frame to obtain the segmentation image of the newly added food material. The segmentation result is input into a neural network, trained on a large set of preset food material images, for target detection and classification; the classification results are counted and sorted in descending order, and the top-1 category (the one with the highest similarity) is output as the final result, giving the category of the newly added food material (its specific name, type, and so on). In this way the category added at each food material addition can be determined from its two key frames, and the food material adding category corresponding to each adding point in the food material adding time sequence is extracted.
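The differencing and mask-multiplication steps above can be sketched in NumPy. This is a minimal grayscale illustration: the morphological opening/closing (done with an image library in practice) and the neural-network classifier are omitted, and the threshold value is an assumption.

```python
import numpy as np

def new_food_mask(background: np.ndarray, foreground: np.ndarray,
                  thresh: int = 30) -> np.ndarray:
    """Binarized difference image: 1 where the newly added food appeared."""
    diff = np.abs(foreground.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

background = np.zeros((8, 8), dtype=np.uint8)   # key frame before the addition
foreground = background.copy()
foreground[3:5, 3:5] = 200                      # bright region: newly added food

mask = new_food_mask(background, foreground)
segment = mask * foreground                     # segmentation image fed to the CNN
```

The `segment` array keeps only the pixels of the new food material, which is what would be passed to the classification network for the top-1 category decision.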
In other alternative embodiments of the present invention, image recognition may instead be applied to every frame of the cooking video stream: food material images are captured from each frame, and the food material category of each image is identified by image matching, so that the categories contained in every frame are known. A food material adding action is then determined from the change of the contained categories between frames, and the food material adding time sequence and the corresponding food material adding categories are extracted accordingly.
The menu template with the standard format may be made in advance according to the data to be recorded: during its creation, a region is set for each item of data to be recorded and the data association is established, so that once the data are acquired they are automatically recorded at the corresponding positions of the template. In addition, different menu templates may be made for different models of cooking machine, and the template matching the model is called when making the menu.
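Filling the standard-format template can be sketched as below. Every field name here is illustrative, not from the patent: the point is only that each acquired item is written into its pre-associated slot and the filled template becomes the recipe.

```python
import json

# Hypothetical standard-format template: one slot per item of data to record.
RECIPE_TEMPLATE = {
    "machine_model": None,
    "food_additions": [],      # list of {time_s, category, weight_g}
    "temperature_curve": [],   # list of (time_s, celsius) samples
}

def make_recipe(model: str, additions: list, temps: list) -> str:
    """Record the acquired data into the template slots and serialize the recipe."""
    recipe = dict(RECIPE_TEMPLATE)
    recipe["machine_model"] = model
    recipe["food_additions"] = additions
    recipe["temperature_curve"] = temps
    return json.dumps(recipe)

recipe_json = make_recipe(
    "COOKER-X1",
    [{"time_s": 30, "category": "red pepper", "weight_g": 2}],
    [(0, 25), (30, 180)],
)
```

A per-model template would simply be a different `RECIPE_TEMPLATE` selected by the machine model before filling.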
In addition, it should be noted that the finally generated cooking recipe can be set to an editable mode, so that the user can edit the cooking recipe, such as adding missing or unextractable cooking process data, correcting recipe data errors, adjusting cooking parameters to meet personal tastes, and the like, so as to further refine the cooking recipe.
To sum up, the recipe making device 10 in this embodiment extracts the food material adding timing sequence and the food material adding category corresponding to each food material adding point from the cooking video stream by acquiring the cooking video stream and the cooking temperature data of the cooking process, and automatically records these information into the recipe template in the standard format to automatically generate the corresponding cooking recipe, so that the manual recording of the cooking process data is not needed, the complexity of the recipe making process is reduced, and the recipe making efficiency is improved.
In still other alternative embodiments of the present invention, the apparatus further comprises:
the weight acquisition module is used for acquiring the weight of the food materials added corresponding to each food material adding point in the food material adding time sequence;
the recipe making module is further used for recording the weight of the food materials into the recipe template.
In still other alternative embodiments of the present invention, the apparatus further comprises:
and the menu correction module is used for correcting corresponding information recorded in the cooking menu according to correction information input by the user when an editing instruction of the user on the cooking menu is obtained.
In still other alternative embodiments of the present invention, the apparatus further comprises:
and the data holding module is used for storing the cooking video stream and the cooking temperature data.
The functions or operation steps of the modules and units when executed are substantially the same as those of the method embodiments, and are not described herein again.
Example four
In another aspect, the present invention further provides a cooking machine. Referring to fig. 5, a module structure diagram of a cooking machine according to a fourth embodiment of the present invention, the machine includes a pot, a camera 20 for shooting a video stream of the cooking process, a temperature sensor 30 for collecting the cooking temperature of the cooking process, and a recipe making device 10 connected to the camera 20 and the temperature sensor 30 respectively; it further includes a memory 40 and a computer program 50 stored in the memory and operable on the recipe making device 10. The recipe making device 10 is the recipe making device 10 of any of the above embodiments, and when it executes the computer program 50, the recipe making method of any of the above embodiments is implemented.
Specifically, the temperature sensor 30 and the camera 20 may be arranged in the manner shown in fig. 2. The recipe making device 10 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor or other data Processing chip in some embodiments, and is used for executing program codes stored in the memory 40 or Processing data.
The memory 40 includes at least one type of readable storage medium, which includes flash memory, hard disk, multi-media card, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like. The memory 40 may in some embodiments be an internal storage unit of the device, for example a hard disk of the device. The memory 40 may also be an external storage device of the apparatus in other embodiments, such as a plug-in hard disk provided on the apparatus, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 40 may also include both an internal storage unit of the apparatus and an external storage device. The memory 40 may be used not only to store application software and various types of data installed in the device, but also to temporarily store data that has been output or will be output.
Optionally, the cooking machine may further include a pot heater, a food material adding mechanism, a driving device, a user interface, a network interface, a communication bus, and the like. The user interface may include a display and an input unit such as a remote controller or physical keys, and may optionally further include a standard wired interface and a wireless interface. In some embodiments the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like; the display, which may also be referred to as a display screen or display unit, is used to display information processed in the machine and to present a visual user interface. The network interface may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface) and is typically used to establish a communication link between the machine and other electronic devices. The communication bus is used to enable connection and communication between these components.
It should be noted that the configuration shown in fig. 5 does not constitute a limitation of the device, which in other embodiments may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
To sum up, the cooking machine in this embodiment extracts the food material adding time sequence and the food material adding category corresponding to each food material adding point from the cooking video stream automatically by acquiring the cooking video stream and the cooking temperature data of the cooking process, and automatically records these information into the recipe template of the standard format to automatically generate the corresponding cooking recipe, so that the cooking process data does not need to be recorded manually, the complexity of the recipe making process is reduced, and the recipe making efficiency is improved.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (15)

1. A method for making a menu, the method comprising:
acquiring cooking video stream and cooking temperature data of a cooking process;
extracting a food material adding time sequence and a food material adding category corresponding to each food material adding point in the food material adding time sequence from the cooking video stream;
and recording the food material adding time sequence, the food material adding category and the cooking temperature data into a menu template with a standard format to generate a corresponding cooking menu.
2. The recipe making method according to claim 1, further comprising:
acquiring the weight of the food materials added corresponding to each food material adding point in the food material adding time sequence;
and recording the weight of the food material into the menu template.
3. The recipe making method according to claim 1, further comprising, after the step of generating the corresponding cooking recipe:
and when an editing instruction of the user for the cooking menu is acquired, correcting corresponding information recorded in the cooking menu according to correction information input by the user.
4. The recipe making method as set forth in claim 1, further comprising:
and storing the cooking video stream and the cooking temperature data.
5. The recipe making method according to claim 1, wherein the step of extracting the food material addition timing from the cooking video stream comprises:
performing scene change identification based on each frame of image in the cooking video stream;
extracting front and rear two frames of images when each scene changes, and extracting front and rear two frames of images with the position of the stirrer in the images consistent front and rear so as to extract two frames of key images when the food material is added from the images;
and generating the food material adding time sequence according to the time of the key images when all food materials are added in the cooking video stream.
6. The recipe making method according to claim 5, wherein the step of extracting the food material addition category from the cooking video stream comprises:
taking the image of the previous frame of the two frames of key images when the food material is added as a background and the image of the next frame as a foreground;
subtracting the background frame from the foreground frame to obtain a differential image, and preprocessing the differential image to obtain a mask of the newly added food material;
and multiplying the mask of the newly added food material with the foreground image to obtain a segmentation image of the newly added food material, and inputting the segmentation image into a neural network for identification to obtain the category of the newly added food material.
7. The menu making method according to claim 5, wherein the scene change recognition step comprises:
performing feature matching on every two adjacent frames of images to obtain the feature point matching number of every two adjacent frames of images;
calculating the ratio of the matching number of each feature point to the number of the feature points of the previous frame of image of the two adjacent frames of images corresponding to the feature point so as to calculate the ratio of a plurality of feature points;
and when the characteristic point ratio is smaller than a specified threshold value, judging that the scene changes.
8. A menu making apparatus, characterized in that the apparatus comprises:
the data acquisition module is used for acquiring cooking video stream and cooking temperature data in the cooking process;
the data extraction module is used for extracting a food material adding time sequence and a food material adding category corresponding to each food material adding point in the food material adding time sequence from the cooking video stream;
and the recipe making module is used for recording the food material adding time sequence, the food material adding category and the cooking temperature data into a recipe template with a standard format to generate a corresponding cooking recipe.
9. The menu making apparatus of claim 8, characterized in that the apparatus further comprises:
the weight acquisition module is used for acquiring the weight of the food materials added corresponding to each food material adding point in the food material adding time sequence;
the recipe making module is further used for recording the weight of the food materials into the recipe template.
10. The menu making apparatus of claim 8, characterized in that the apparatus further comprises:
and the menu correction module is used for correcting corresponding information recorded in the cooking menu according to correction information input by the user when an editing instruction of the user on the cooking menu is obtained.
11. The menu making apparatus of claim 8, characterized in that the apparatus further comprises:
and the data holding module is used for storing the cooking video stream and the cooking temperature data.
12. The apparatus for making a recipe as set forth in claim 8, wherein the data extraction module comprises:
a scene change identification unit for identifying scene changes based on each frame of image in the cooking video stream;
the image extraction unit is used for extracting front and rear two frames of images when each scene changes and extracting front and rear two frames of images with the positions of the stirrer consistent in front and rear in the images so as to extract two frames of key images when the food material is added;
and the time sequence generating unit is used for generating the food material adding time sequence according to the time of the key images when all food materials are added in the cooking video stream.
13. The apparatus for making a recipe as set forth in claim 12, wherein the data extraction module further comprises:
the image setting unit is used for taking the image of the previous frame of the two frames of key images as a background and the image of the next frame of key images as a foreground when the food material is added;
the image processing unit is used for subtracting the background frame from the foreground frame to obtain a differential image, and preprocessing the differential image to obtain a mask of the newly added food material;
and the image matching unit is used for multiplying the mask of the newly added food material with the foreground image to obtain a segmentation image of the newly added food material, and inputting the segmentation image into a neural network for identification to obtain the category of the newly added food material.
14. The menu making apparatus of claim 12, wherein the scene change identifying unit comprises:
the characteristic matching subunit is used for carrying out characteristic matching on each two adjacent frames of images to obtain the matching number of characteristic points of each two adjacent frames of images;
the characteristic point ratio subunit is used for calculating the ratio of the matching number of each characteristic point to the number of the characteristic points of the previous frame image of the two adjacent frame images corresponding to the matching number of each characteristic point so as to calculate the ratio of a plurality of characteristic points;
and the scene change identification subunit is used for judging that the scene changes when the characteristic point ratio is smaller than a specified threshold value.
15. A cooker, comprising a pan, characterized by further comprising a camera for shooting a video stream of a cooking process, a temperature sensor for collecting a cooking temperature of the cooking process, and a recipe making device according to any one of claims 8 to 14, the recipe making device being connected to the camera and the temperature sensor, respectively.
CN201911086129.8A 2019-11-08 2019-11-08 Menu making method and device and cooking machine Withdrawn CN112784640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911086129.8A CN112784640A (en) 2019-11-08 2019-11-08 Menu making method and device and cooking machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911086129.8A CN112784640A (en) 2019-11-08 2019-11-08 Menu making method and device and cooking machine

Publications (1)

Publication Number Publication Date
CN112784640A true CN112784640A (en) 2021-05-11

Family

ID=75748322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911086129.8A Withdrawn CN112784640A (en) 2019-11-08 2019-11-08 Menu making method and device and cooking machine

Country Status (1)

Country Link
CN (1) CN112784640A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115245272A (en) * 2022-09-13 2022-10-28 国网江苏省电力有限公司扬州供电分公司 Self-learning cooking robot based on AI assistance and learning method thereof
CN115291782A (en) * 2022-07-01 2022-11-04 宁波拓邦智能控制有限公司 Method, system, computer readable medium and electronic device for automatically generating menu
CN117319745A (en) * 2023-09-28 2023-12-29 火星人厨具股份有限公司 Interaction method, device, equipment and storage medium based on menu


Similar Documents

Publication Publication Date Title
CN111684368B (en) Food preparation method and system based on ingredient identification
CN112784640A (en) Menu making method and device and cooking machine
CN110378420A (en) A kind of image detecting method, device and computer readable storage medium
CN110059654A (en) A kind of vegetable Automatic-settlement and healthy diet management method based on fine granularity identification
CN109237582A (en) Range hood control method based on image recognition, control system, range hood
CN103714327B (en) Method and system for correcting image direction
CN111080493B (en) Dish information identification method and device and dish self-service settlement system
KR101562364B1 (en) Automatic calorie caculation method using food image and feeding behavior managing system using thereof
CN103098078A (en) Smile detection systems and methods
CN104063686B (en) Crop leaf diseases image interactive diagnostic system and method
CN108108767A (en) A kind of cereal recognition methods, device and computer storage media
CN108961547A (en) A kind of commodity recognition method, self-service machine and computer readable storage medium
CN104361357B (en) Photo album categorizing system and sorting technique based on image content analysis
CN109615358B (en) Deep learning image recognition-based restaurant automatic settlement method and system
CN110415212A (en) Abnormal cell detection method, device and computer readable storage medium
CN110781805A (en) Target object detection method, device, computing equipment and medium
CN108090517A (en) A kind of cereal recognition methods, device and computer storage media
CN112101300A (en) Medicinal material identification method and device and electronic equipment
CN112784641A (en) Food material feeding method and device and cooking machine
CN104027074A (en) Health data collection and recognition method used for health equipment
WO2021082285A1 (en) Method and device for measuring volume of ingredient, and kitchen appliance apparatus
CN111435427A (en) Method and device for identifying rice and cooking appliance
CN115187972A (en) Dish identification method based on feature comparison
CN114612897A (en) Intelligent fruit and vegetable weighing and ticketing method and device, electronic equipment and storage medium
CN112859619A (en) Cooking control method and device and cooking machine

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20210511