CN117319745A - Interaction method, device, equipment and storage medium based on menu - Google Patents
Interaction method, device, equipment and storage medium based on menu
- Publication number
- CN117319745A (application number CN202311278876.8A)
- Authority
- CN
- China
- Prior art keywords
- cooking
- user
- menu
- action
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
The invention discloses a recipe-based interaction method, apparatus, device, and storage medium. The method acquires a recorded cooking video and user cooking information from a first user during the cooking process, and generates a first recipe from the user cooking information; the first recipe comprises first cooking actions and the time corresponding to each action. The recorded cooking video is split according to the time corresponding to each action to obtain a first set of split video segments, and each split segment is clipped to obtain a first set of clipped video segments; this set comprises a plurality of clipped segments, each shorter than its corresponding split segment. The beneficial effect achieved is that the generated recipe can meet users' personalized needs, enhancing its practicality.
Description
Technical Field
The present invention relates to the field of home appliances, and in particular to a recipe-based interaction method, apparatus, device, and storage medium.
Background
As the pace of modern life keeps accelerating, people have become extremely busy and rarely have enough time and energy to learn to cook. A recipe is an important tool for assisting a user in cooking and can bring the user great convenience.
However, existing recipes are generally composed for mass tastes and cannot meet users' personalized needs, so their practicality is limited.
Disclosure of Invention
The invention provides a recipe-based interaction method, apparatus, device, and storage medium, which solve the problem that current recipes cannot meet users' personalized needs and therefore have low practicality.
According to an aspect of the present invention, there is provided a recipe-based interaction method, including:
acquiring a recorded cooking video and user cooking information of a first user during the cooking process;
generating a first recipe according to the user cooking information, the first recipe comprising first cooking actions and a time corresponding to each action;
splitting the recorded cooking video according to the time corresponding to each action in the first cooking actions to obtain a first set of split video segments, and clipping each split video segment in that set to obtain a first set of clipped video segments; the first set of clipped video segments comprises a plurality of clipped segments, and the duration of each clipped segment is smaller than that of its corresponding split segment.
According to another aspect of the present invention, there is provided a recipe-based interaction apparatus, comprising:
an information acquisition module, configured to acquire a recorded cooking video and user cooking information of a first user during the cooking process;
a first recipe generating module, configured to generate a first recipe according to the user cooking information, the first recipe comprising first cooking actions and a time corresponding to each action;
a video processing module, configured to split the recorded cooking video according to the time corresponding to each action in the first cooking actions to obtain a first set of split video segments, and to clip each split video segment to obtain a first set of clipped video segments; the first set of clipped video segments comprises a plurality of clipped segments, and the duration of each clipped segment is smaller than that of its corresponding split segment.
According to another aspect of the present invention, there is provided a recipe-based interaction device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the recipe-based interaction method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement a recipe-based interaction method of any one of the embodiments of the present invention when executed.
According to the technical solution of this embodiment, a recorded cooking video and user cooking information of a first user are acquired during the cooking process; a first recipe is generated from the user cooking information, comprising first cooking actions and the time corresponding to each action; the recorded cooking video is split according to those times into a first set of split video segments, and each split segment is clipped to obtain a first set of clipped video segments in which each clipped segment is shorter than its corresponding split segment. This solves the problems that current recipes cannot meet users' personalized needs and have poor practicality, and achieves the beneficial effect that the generated recipe meets the user's personalized needs and its practicality is enhanced.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a recipe-based interaction method according to Embodiment 1 of the present invention;
FIG. 2 is a flowchart of a recipe-based interaction method according to Embodiment 2 of the present invention;
FIG. 3 is a schematic structural diagram of a recipe-based interaction apparatus according to Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of a recipe-based interaction device according to Embodiment 4 of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a recipe-based interaction method provided in Embodiment 1 of the present invention. The method is applicable to scenarios in which a recipe is generated, and may be performed by a recipe-based interaction apparatus, which may be implemented in hardware and/or software and configured in a recipe-based interaction device. That device may be a server, such as a cloud server, or a cooking device; the following description takes a cloud server as an example. As shown in Fig. 1, the method includes:
S110, acquiring the recorded cooking video and user cooking information of the first user during the cooking process.
In this embodiment, the recorded cooking video and the user cooking information may be acquired by a cooking device that includes, for example, a video capture device, a temperature detection device, a weighing device, a display device, and a data transmission device. The video capture device may be a camera mounted above the cooking device, used to record the user's cooking actions, the food materials in the pot, and so on, producing the recorded cooking video. The temperature detection device may be a temperature-sensing probe. The weighing device may be a high-precision electronic scale located at the bottom of the gas stove. The display device is used to display videos, images, or text; the data transmission device includes a Wi-Fi module and can transmit the captured recorded cooking video and user cooking information to the cloud server. This embodiment does not limit the specific types of the video capture, temperature detection, display, data transmission, and weighing devices.
Optionally, the cooking recorded video may also be acquired by a mobile phone, a video camera, or the like.
The first user is a user who records the process of making a dish. The user cooking information includes information generated by the user during cooking, such as cooking temperature, fire power, and food-material weight.
Illustratively, the recorded cooking video and the user cooking information are acquired by the cooking device. After the first user enables the recipe-sharing function and places the cookware, the weighing device detects the weight and the relevant devices on the cooking device begin to collect data: the camera records the cooking video, the temperature-sensing probe records the pot-bottom temperature in real time, the weighing device records weight data in real time, and the cooking device obtains range-hood (smoke machine) information in real time. The recorded video, temperature data, range-hood information, weight data, and so on are transmitted to the cloud server in real time through the data transmission device. The user cooking information is then obtained by processing the recorded cooking video together with the uploaded temperature, range-hood, and weight data.
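The real-time upload described above can be sketched as a single telemetry frame. The patent does not specify a wire format, so all field names below are illustrative assumptions; JSON over the Wi-Fi module is merely one plausible serialization.

```python
import json
import time

def build_telemetry_frame(temperature_c, weight_g, hood_level, video_chunk_id):
    """Assemble one real-time telemetry frame as the cooking device might
    upload it to the cloud server. Field names are hypothetical."""
    return {
        "timestamp": round(time.time(), 3),
        "temperature_c": temperature_c,    # pot-bottom temperature probe
        "weight_g": weight_g,              # high-precision scale under the stove
        "hood_level": hood_level,          # range-hood (smoke machine) gear
        "video_chunk_id": video_chunk_id,  # reference to the recorded-video chunk
    }

frame = build_telemetry_frame(temperature_c=112.5, weight_g=843.0,
                              hood_level=2, video_chunk_id="chunk-0042")
payload = json.dumps(frame)  # serialized for transmission to the cloud server
```

Sending one frame per sensor tick keeps video and sensor data aligned by timestamp on the server side.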
S120, generating a first recipe according to the user cooking information; the first recipe comprises first cooking actions and the time corresponding to each action.
In this embodiment, the first recipe may be understood as a recipe generated from the first user's dish-making process, and may include first cooking food-material information, first cooking operation information, and first cooking fire information. The first cooking operation information includes the first cooking actions and the time corresponding to each action. The first cooking actions include actions such as adding ingredients, covering the pot to simmer, and tossing the pan to stir-fry. The time corresponding to each action includes its start time, end time, and the like.
Specifically, ready-made recipes, for example recipes preloaded by the manufacturer before the cooking device leaves the factory or recipes found on the internet, are generally composed for mass tastes and cannot match an individual user's taste. The first user's cooking information is therefore processed in a preset way to generate a first recipe that suits that user. The preset processing may include, for example, ordering the first cooking actions in the user cooking information chronologically by the time corresponding to each action.
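A minimal sketch of the preset processing mentioned above: chronologically ordering the recognized actions into the first recipe's action list. The action dictionaries and their keys are hypothetical, since the text does not define a data structure.

```python
def generate_recipe(cooking_actions):
    """Order raw recognized cooking actions chronologically to form the
    first recipe's action list (sketch; structure is assumed)."""
    return sorted(cooking_actions, key=lambda a: a["start_s"])

actions = [
    {"name": "add soy sauce", "start_s": 95, "end_s": 98},
    {"name": "add shredded potatoes", "start_s": 30, "end_s": 34},
    {"name": "stir-fry", "start_s": 40, "end_s": 90},
]
recipe = generate_recipe(actions)
# recipe[0]["name"] == "add shredded potatoes"
```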
S130, splitting the recorded cooking video according to the time corresponding to each action in the first cooking actions to obtain a first set of split video segments, and clipping each split video segment in that set to obtain a first set of clipped video segments; the first set of clipped video segments comprises a plurality of clipped segments, and the duration of each clipped segment is smaller than that of its corresponding split segment.
In this embodiment, the first set of split video segments contains one split segment per action in the first recipe, and each clipped segment is shorter than its corresponding split segment.
Specifically, a full cooking recording is usually large, slow to transmit, and weakly targeted at any single cooking action, so a second user cooking from the first recipe would find it hard to seek to the part they want to watch. The recording is therefore split at the times corresponding to each first cooking action, yielding one segment per action (the first set of split video segments), and each segment is then clipped to form the first set of clipped video segments. For example, since some actions stay visually unchanged for a long time, the segment for such an action can be shortened and the action's actual duration annotated on the clip: a 15-minute steaming segment may be clipped to 2 minutes with the caption "steaming duration: 15 minutes" overlaid to guide the second user. This reduces the total video size and improves transmission speed, while the correspondence between the clipped segments and the recipe lets the second user quickly jump to, and replay, any step that is unclear. This embodiment does not limit the specific way the recorded video is split and clipped.
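The split-and-clip logic above can be sketched as follows. The maximum length for a visually static segment and the overlay text are assumptions; the text only requires that each clip be shorter than its source segment and annotated with the true duration.

```python
def split_and_clip(actions, max_static_s=120):
    """Derive clip metadata from per-action times. Long, visually static
    actions (e.g. steaming) are shortened and annotated with their real
    duration; the 120 s threshold is a hypothetical choice."""
    clips = []
    for a in actions:
        duration = a["end_s"] - a["start_s"]
        clipped = min(duration, max_static_s)
        clips.append({
            "action": a["name"],
            "source_span": (a["start_s"], a["end_s"]),
            "clip_s": clipped,
            "overlay": (f"actual duration: {duration} s"
                        if clipped < duration else None),
        })
    return clips

segments = split_and_clip([
    {"name": "steam", "start_s": 0, "end_s": 900},   # a 15-minute steam
    {"name": "plate", "start_s": 900, "end_s": 930},
])
# the steaming clip is cut to 120 s and annotated; plating is kept whole
```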
According to the technical solution of this embodiment, a recorded cooking video and user cooking information of the first user are acquired during the cooking process; a first recipe is generated from the user cooking information, comprising first cooking actions and the time corresponding to each action; and the recording is split at those times and clipped into a first set of clipped video segments, each shorter than its corresponding split segment. This solves the problems that current recipes cannot meet users' personalized needs and have poor practicality, and achieves the beneficial effect that the generated recipe meets the user's personalized needs and its practicality is enhanced.
In some embodiments, the user cooking information includes user food-material information, user operation information, user fire information, and user smoke-control information. The user food-material information comprises food-material names, food-material weights, food-material processing methods, seasoning names, and seasoning weights. The user operation information comprises adding actions, cooking actions, and the cooking time corresponding to each action. The user fire information comprises fire levels and the fire-adjustment time corresponding to each level. The user smoke-control information comprises the times at which the range hood (smoke machine) is turned on and off, the hood levels, and the hood-adjustment time corresponding to each level. This technical solution yields more accurate and comprehensive cooking information and lays a foundation for generating the recipe.
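The four information groups listed above can be mirrored in a simple container; the concrete types are assumptions for illustration, as the text only names the fields.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FoodMaterial:
    name: str        # e.g. "potato"
    weight_g: float
    processing: str  # e.g. "shredded"

@dataclass
class UserCookingInfo:
    """Hypothetical container for the user cooking information groups."""
    materials: List[FoodMaterial] = field(default_factory=list)
    seasonings: List[FoodMaterial] = field(default_factory=list)
    operations: List[dict] = field(default_factory=list)              # actions + cooking times
    fire_levels: List[Tuple[int, float]] = field(default_factory=list)  # (gear, adjust time s)
    hood_on_s: Optional[float] = None
    hood_off_s: Optional[float] = None
    hood_levels: List[Tuple[int, float]] = field(default_factory=list)  # (gear, adjust time s)
```

Grouping the fields this way keeps recipe generation (S120) independent of how each sensor stream was parsed.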
Specifically, the user food-material information, user operation information, user fire information, and user smoke-control information are obtained by processing the recorded cooking video together with the temperature data, range-hood information, and weight data, as follows.
First, frames are extracted from the recorded cooking video and preprocessed, for example resized and enhanced. One or more objects in each frame, such as the pot, food materials, seasoning bottles, cooking tools, and the user's limbs, are located with a selective-search algorithm, and the located regions are annotated. Features of the image within each region, such as shape, color, texture, and spatial features, are extracted by a convolutional neural network model and matched to obtain the names of the objects, for example shredded potatoes, a soy sauce bottle, a hand, a spatula, or a pot lid.
Next, the user food-material information and user operation information are determined from the image recognition results and the weight data: the food-material information comprises food-material names, weights, processing methods, seasoning names, and seasoning weights, and the operation information comprises adding actions, stir-frying actions, and the cooking time corresponding to each action. For example, if a hand and shredded potatoes are recognized, the food material is determined to be potato, processed by shredding, and the current action is adding shredded potatoes; the weight difference between the current moment and the previous moment is recorded as the weight of the shredded potatoes. If an arm and a soy sauce bottle are recognized, the current action is adding soy sauce, and the weight change between the two moments is recorded as the soy sauce weight.
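The weight-difference inference described above can be sketched as a delta on the scale readings around a recognized adding action. The nearest-reading lookup is a simplifying assumption; the text only says the difference between consecutive moments is taken.

```python
def ingredient_weight(scale_readings, add_time_s):
    """Estimate an added ingredient's weight as the scale delta around the
    recognized adding action. scale_readings: list of (time_s, weight_g),
    assumed sorted and bracketing add_time_s."""
    before = max((t, w) for t, w in scale_readings if t <= add_time_s)
    after = min((t, w) for t, w in scale_readings if t > add_time_s)
    return after[1] - before[1]

# Shredded potatoes recognized as being added around t = 32 s:
readings = [(0, 500.0), (30, 500.0), (35, 650.0), (60, 650.0)]
w = ingredient_weight(readings, add_time_s=32)
# w == 150.0 (grams of shredded potatoes)
```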
The user fire information and user smoke-control information are determined from the temperature data and the range-hood information: the fire information comprises fire levels and the fire-adjustment time corresponding to each level, and the smoke-control information comprises the hood on/off times, hood levels, and the hood-adjustment time corresponding to each level. The fire level and adjustment time needed to reach a preset temperature are calculated by fitting the temperature data uploaded by the temperature sensor, and the hood on/off times, levels, and adjustment times are parsed from the range-hood information.
Optionally, the recipe-based interaction method further comprises: delivering the generated first recipe and first set of clipped video segments to the cooking device of a second user, so that that device can guide the second user through cooking with them.
In some embodiments, the recipe-based interaction method further comprises receiving, from the second user's cooking device, a second recipe corresponding to the first recipe, the second recipe being obtained by modifying the first recipe according to recipe-editing operations uploaded by the second user. This lets recipes be updated so that the stored recipe content becomes more complete, further improving users' cooking success rate.
In this embodiment, the second menu may be understood as a menu uploaded to the cloud server after the second user updates the first menu.
Alternatively, the second user and the first user may be the same user, or may be different users. If the second user and the first user are the same user, changing the first menu according to the menu editing operation aiming at the first menu uploaded by the first user, and further obtaining the second menu. If the second user and the first user are not the same user, changing the first menu according to the menu editing operation aiming at the first menu uploaded by the second user, and further obtaining a second menu. The first user and the second user are different users.
Specifically, in order to enable the content of the menu stored by the cloud server to be more perfect, a second user is allowed to update the first menu in the using process. The second user can edit the first menu, and then upload the edited second menu to the cloud server. Optionally, after receiving the second recipe, the iterative version and the updated state of the recipe may be recorded. For example, the second user may fry the tomato eggs according to a first recipe provided on the corresponding cooking device, the first recipe shows that the added seasonings are only white sugar and salt, and the second user feels better to add a proper amount of light soy sauce, and then the seasoning name and the corresponding weight of the seasoning are newly added in the first recipe.
Optionally, when a third user uses the second menu, the cooking device corresponding to the third user may display the difference information between the second menu and the first menu to the third user, such as the newly added seasoning name and its corresponding weight in the above example, where the third user is a user different from the first user and the second user. In this way, multiple choices can be provided for the third user, further improving the user experience.
Optionally, in order to enable the menus stored by the cloud server to meet the requirements of different users, when the cooking process of the second user does not accord with the related information in the first menu, a second menu is automatically generated. Specifically, second user cooking information and second cooking recording information corresponding to the second user are acquired and stored in real time, where the cooking device corresponding to the second user is in an on state during the second user's cooking process, and the second user cooking information is generated based on the related information collected by that cooking device during the cooking process. Further, it is judged whether each item of the second user cooking information falls within the allowable range of the corresponding item in the first menu. If any item does not, a second menu and a second fire control program instruction associated with the second menu are generated according to the second user cooking information, where the second menu comprises second cooking food material information, second cooking operation information and second cooking fire information, and the second cooking operation information comprises second cooking actions and the time corresponding to each of those actions. The cooking recorded video is then divided according to the time corresponding to each of the second cooking actions to obtain a second divided video clip set, and each second divided video clip in that set is clipped to obtain a second clipped video clip set, where the second clipped video clip set comprises a plurality of second clipped video clips and the duration of each second clipped video clip is smaller than that of the corresponding second divided video clip. The second menu and the second clipped video clips are thus generated automatically. For example, if the first menu shows that the seasoning currently to be added is salt but the seasoning actually added by the second user is sugar, the second user cooking information does not accord with the seasoning information in the first menu, and the second menu is automatically generated according to the above steps.
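The range check described above can be sketched as follows. This is a minimal illustration only: the field names, the `(low, high)` range encoding, and the trigger flag are assumptions, not the patent's actual data model.

```python
# Hypothetical sketch: compare each item of the second user's cooking
# information against the allowable range recorded in the first menu.
def out_of_range(user_info: dict, allowed: dict) -> list:
    """Return the names of items whose values fall outside their allowed range."""
    violations = []
    for name, value in user_info.items():
        low, high = allowed.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            violations.append(name)
    return violations

# If any item violates its range, a second menu would be generated
# from the user's actual cooking information.
allowed = {"salt_g": (2, 5), "sugar_g": (0, 10)}
actual = {"salt_g": 8, "sugar_g": 3}
needs_second_menu = bool(out_of_range(actual, allowed))  # salt_g is out of range
```

In this sketch only `salt_g` violates its range, so the flag that triggers second-menu generation is set.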
Optionally, in order to guide the cooking operation of the second user in real time, when the first menu is not suitable for that operation, a predicted menu is generated according to the second user's current cooking operation to help the second user complete the cooking. Specifically, the cooking device corresponding to the second user is in an on state during the second user's cooking process; the related information collected by that device during the process, such as food material information, seasoning information and cooking actions, is acquired and stored in real time and converted into second user cooking information. Further, it is judged whether each item of the second user cooking information falls within the allowable range of the corresponding item in the first menu. If any item does not, continuing to follow the first menu would not achieve the expected effect, so the second user cooking information can be input into a cooking prediction model, and a predicted cooking menu is determined from the current second user cooking information to guide the user through the subsequent cooking operations. The predicted cooking menu comprises predicted cooking food material information, predicted cooking operation information and predicted cooking fire information, and the cooking prediction model may be a model trained in advance.
Example Two
Fig. 2 is a flowchart of an interaction method based on a menu according to a second embodiment of the present invention, where the present embodiment optimizes and expands on the basis of the foregoing alternative embodiments, and the present embodiment describes in detail a process of generating a first menu and obtaining a first set of divided video segments. As shown in fig. 2, the method includes:
s210, acquiring a cooking recording video and user cooking information of a first user in a cooking process, wherein the user cooking information comprises user food material information, user operation information, user firepower information and user smoke control information.
S220, generating a text menu based on the user food material information, the user operation information and the user fire information.
Specifically, a preset recipe text template is set in advance, and based on the content format of the preset recipe text template, the user food material information, the user operation information and the user firepower information are filled into the template to generate a text menu.
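Filling a preset text template can be sketched as below. The template layout, field names, and sample values are illustrative assumptions; the patent does not prescribe a concrete template format.

```python
# Hypothetical preset recipe text template; the sections and field
# names are illustrative, not the patent's actual format.
RECIPE_TEMPLATE = (
    "Dish: {dish}\n"
    "Ingredients: {ingredients}\n"
    "Steps: {steps}\n"
    "Heat: {heat}\n"
)

def make_text_menu(food_info: dict, operation_info: dict, fire_info: dict) -> str:
    """Fill the template with user food material, operation and fire information."""
    return RECIPE_TEMPLATE.format(
        dish=food_info["dish"],
        ingredients=", ".join(food_info["materials"]),
        steps="; ".join(operation_info["actions"]),
        heat=fire_info["level"],
    )

text = make_text_menu(
    {"dish": "Tomato and egg", "materials": ["tomato", "egg", "salt"]},
    {"actions": ["chop tomato", "beat eggs", "stir-fry"]},
    {"level": "medium"},
)
```

The resulting string is the text menu; in the scheme above it would then be combined with target pictures to form the first menu.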
S230, according to the food material information of the user and the operation information of the user, a corresponding target picture is obtained from a preset standard material library and/or a cooking recorded video, wherein the target picture comprises at least one of an unprocessed food material picture, a processed food material picture, a seasoning picture, a cooking action picture, a cooking process picture and a dish picture.
In this embodiment, the preset standard material library is stored in the cloud server and is used for storing cooking-related pictures, which may be acquired from a network. By constructing the preset standard material library in advance, relevant picture information can be obtained quickly, further improving the generation efficiency of the first menu. In addition, the pictures in the preset standard material library may have better image quality than those in the cooking recorded video, such as higher definition or a better shooting angle.
Specifically, according to the food material information and the user operation information of the user, a corresponding target picture is obtained from a preset standard material library and/or a cooking recording video.
Image recognition technology is used to recognize the pictures in the preset standard material library, and the recognition results are recorded as labels in the picture information; the corresponding target picture is then determined by matching the user food material information and the user operation information against the labelled picture information.
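The matching step can be sketched as a simple label-intersection lookup. The library entries, paths, and label sets are assumed example data; in practice the labels would come from an image-recognition step, which is omitted here.

```python
# Hypothetical labelled material library; labels are assumed to have been
# produced beforehand by image recognition and stored with each picture.
library = [
    {"path": "lib/tomato_raw.jpg", "labels": {"tomato", "unprocessed"}},
    {"path": "lib/stir_fry.jpg", "labels": {"stir-fry", "action"}},
]

def find_target_pictures(keywords: set, library: list) -> list:
    """Return paths of pictures whose labels intersect the user's keywords."""
    return [p["path"] for p in library if p["labels"] & keywords]

# Keywords derived from user food material / operation information.
hits = find_target_pictures({"tomato"}, library)
```

Here `hits` contains only the raw-tomato picture, which would be used as a target picture for the first menu.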
S240, generating a first menu according to the text menu and the target picture.
Specifically, the text menu and the target picture are combined according to a preset typesetting rule, so that a first menu is generated. The preset typesetting rules comprise rules designed in advance, and the preset typesetting rules are not limited in this embodiment.
S250, cutting the cooking recorded video according to the time corresponding to each action in the first cooking action of the first menu to obtain a plurality of first segmentation video segments.
In this embodiment, the time corresponding to each of the first cooking actions is stored in the first recipe, including the start time and the end time corresponding to each of the first cooking actions.
Specifically, the first cooking duration corresponding to each of the first cooking actions can be determined from that action's start time and end time, and the cooking recorded video is cut at the times corresponding to each of the first cooking actions, so that a plurality of cooking video clips are obtained.
For example, the first cooking action of the first user is a stir-frying action, the start time corresponding to the stir-frying action is 5 minutes and 10 seconds, the end time corresponding to the stir-frying action is 8 minutes and 20 seconds, the first cooking time corresponding to the stir-frying action is 3 minutes and 10 seconds, and then the video record of the time period is obtained as a stir-frying action video.
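The duration arithmetic and the cut in the example above can be sketched as follows. Computing the cut as an `ffmpeg` command is one common approach and is an illustrative assumption on my part; the patent does not name a cutting tool, and the file names are hypothetical.

```python
# Derive each action's cooking duration from its start/end times and
# build a stream-copy cut command (illustrative; no tool is specified
# in the source).
def to_seconds(mm_ss: str) -> int:
    """Convert a 'minutes:seconds' string to total seconds."""
    m, s = mm_ss.split(":")
    return int(m) * 60 + int(s)

def cut_command(src: str, start: str, end: str, dst: str) -> str:
    """Cut [start, end] out of src without re-encoding."""
    return f"ffmpeg -ss {start} -to {end} -i {src} -c copy {dst}"

# Stir-fry action from 5:10 to 8:20 -> duration 3:10 (190 seconds).
duration = to_seconds("8:20") - to_seconds("5:10")
cmd = cut_command("cook.mp4", "00:05:10", "00:08:20", "stir_fry.mp4")
```

This reproduces the arithmetic in the example: 8 min 20 s minus 5 min 10 s gives a 3 min 10 s stir-frying clip.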
S260, sorting the plurality of first divided video clips according to the starting time corresponding to each action in the first cooking action to obtain a first divided video clip set.
Specifically, considering that the sequence of each action in the first cooking action affects the mouthfeel of the food, the cut plurality of cooking video clips may be ordered according to the start time of each action in the first cooking action, to obtain the first set of divided video clips.
S270, clipping each first divided video clip in the first divided video clip set to obtain a first clipped video clip set; the first clipped video clip set comprises a plurality of first clipped video clips, and the duration of each first clipped video clip is smaller than that of the corresponding first divided video clip.
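Steps S260 and S270 together can be sketched as a sort followed by a trim. The segment data and the "keep the middle portion" trimming rule are illustrative assumptions; the patent only requires that each clipped clip be shorter than its divided segment.

```python
# Divided segments with start/end times in seconds (example data).
segments = [
    {"action": "plate", "start": 500, "end": 520},
    {"action": "add oil", "start": 10, "end": 40},
    {"action": "stir-fry", "start": 310, "end": 500},
]

# S260: sort segments by the start time of each cooking action.
ordered = sorted(segments, key=lambda s: s["start"])

def trim(seg: dict, max_len: int = 30) -> dict:
    """S270 (assumed rule): keep at most max_len seconds centred in the segment,
    so the clipped clip is never longer than the divided segment."""
    length = seg["end"] - seg["start"]
    if length <= max_len:
        return dict(seg)
    mid = (seg["start"] + seg["end"]) // 2
    return {**seg, "start": mid - max_len // 2, "end": mid + max_len // 2}

clips = [trim(s) for s in ordered]
```

With this data the 190-second stir-fry segment is trimmed to a 30-second clip around its midpoint, while the two short segments pass through unchanged.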
According to the technical scheme provided in the second embodiment of the present invention, after the cooking recorded video and the user cooking information are acquired, a text menu is generated based on the user food material information, the user operation information and the user firepower information in the user cooking information, and corresponding target pictures are added to the text menu from the preset standard material library and/or the cooking recorded video, so that a first menu combining pictures and text is generated and the readability of the menu is enhanced. Further, the cooking recorded video is cut according to the time corresponding to each of the first cooking actions, the resulting clips are sorted by the start time of each action to obtain the first divided video clip set, and each first divided video clip in that set is clipped to obtain the first clipped video clip set, effectively improving convenience for the user. Through this technical scheme, a highly readable menu and a detailed, comprehensive record of the dish-making process are provided, guidance can be given according to the user's needs, the user's experience is effectively enhanced, and the cooking success rate is further improved.
In some embodiments, the recipe-based interaction method further comprises: and adding an association identifier corresponding to the target picture in the first video clip set, wherein the association identifier is used for triggering the display of the corresponding target picture in an association display area of the association identifier in the process of playing the first video clip. Through the technical scheme, the experience and the use feeling of the user can be effectively enhanced.
In this embodiment, the association identifier includes a selectable control placed on the first clipped video clip. The association display areas correspond one-to-one with the association identifiers and can be used to display details in an enlarged view; they can also reflect the correspondence between the target picture and the first clipped video clip. For example, a plurality of association identifiers, such as for food materials and cooking actions, may exist in one video frame.
Optionally, the target picture may be obtained from the preset standard material library rather than captured from the cooking video clips, which improves the efficiency of displaying the target picture, and the additional pictures are helpful as a reference for users.
Specifically, in order to further improve experience and use feeling of the user, an association identifier corresponding to the target picture may be added to a first video clip in the first video clip set, and the second user clicks the association identifier, so that the corresponding picture is displayed in the association display area. For example, a dot control is added to the food material in the cooking video clip, and after the second user clicks the dot, a picture of the enlarged food material can be displayed in the associated display area corresponding to the dot.
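One possible in-memory shape for a clip carrying association identifiers is sketched below. The class and field names (timestamp, normalised dot position, linked picture) are assumptions for illustration; the patent does not specify a data schema.

```python
from dataclasses import dataclass, field

@dataclass
class AssociationId:
    """A selectable dot control overlaid on a clipped video clip (assumed schema)."""
    time_s: float   # when the identifier appears during playback
    x: float        # normalised horizontal position of the dot control
    y: float        # normalised vertical position of the dot control
    picture: str    # target picture shown in the association display area on click

@dataclass
class Clip:
    path: str
    identifiers: list = field(default_factory=list)

clip = Clip("stir_fry.mp4")
# A dot over the tomato; clicking it would show the enlarged food-material picture.
clip.identifiers.append(AssociationId(2.5, 0.4, 0.6, "lib/tomato_raw.jpg"))
```

A player could iterate over `clip.identifiers` during playback and, when the user clicks a dot, show `picture` in the associated display area.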
In some embodiments, the method further comprises: generating a fire control program instruction associated with the first menu according to the user fire information and the user smoke control information; the fire control program instruction is used for enabling the cooking device corresponding to the second user to automatically control the firepower and the smoke machine while the first menu guides the second user through cooking. Through this technical scheme, the user can be assisted in controlling the firepower and the smoke machine, further improving the cooking success rate.
In this embodiment, the fire control program instruction includes program instructions for controlling the firepower and the smoke machine (range hood).
Specifically, considering that a user may control the firepower and the smoke machine improperly during cooking, causing the cooking to fail, a fire control program instruction associated with the first menu can be generated to assist the user in completing the cooking. The fire control program instruction is generated based on the user firepower information and the user smoke control information; when the cooking device corresponding to the second user guides the second user through the cooking operation using the first menu, the firepower and the smoke machine can be adjusted automatically to help the second user complete the cooking.
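Turning the recorded fire and smoke-control information into a replayable instruction list can be sketched as below. The `(timestamp, device, command)` event encoding and the textual instruction format are illustrative assumptions, not the patent's instruction format.

```python
# Assumed recorded events: (seconds into the recipe, device, command).
fire_events = [(0, "fire", "high"), (190, "fire", "low")]
smoke_events = [(0, "hood", "on:2"), (500, "hood", "off")]

def build_program(*event_lists) -> list:
    """Merge the event lists into one program ordered by timestamp."""
    program = sorted((e for lst in event_lists for e in lst),
                     key=lambda e: e[0])
    return [f"{t:>4}s {dev} {cmd}" for t, dev, cmd in program]

# The cooking device could replay this while guiding the second user.
instructions = build_program(fire_events, smoke_events)
```

Because Python's sort is stable, simultaneous events keep their input order; a real device would additionally need units, gear semantics, and safety limits.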
Example Three
Fig. 3 is a schematic structural diagram of an interaction device based on a menu according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes: the information acquisition module 31, the first recipe generation module 32, the video processing module 33.
The information acquisition module 31 is configured to acquire a cooking recorded video of the first user during the cooking process and the user cooking information; the first recipe generation module 32 is configured to generate a first recipe from the user cooking information, where the first recipe comprises first cooking actions and the time corresponding to each of those actions; the video processing module 33 is configured to divide the cooking recorded video according to the time corresponding to each of the first cooking actions to obtain a first divided video clip set, and to clip each first divided video clip in that set to obtain a first clipped video clip set, where the first clipped video clip set comprises a plurality of first clipped video clips and the duration of each first clipped video clip is smaller than that of the corresponding first divided video clip.
The technical scheme provided in the third embodiment of the present invention solves the problem that existing menus cannot meet users' personalized needs and therefore have poor practicability; it has the beneficial effects of enabling the generated menu to meet the user's personalized needs and enhancing the menu's practicability.
Optionally, the user cooking information includes:
User food material information, user operation information, user fire information and user smoke control information; the user food material information comprises food material names, food material weights, food material processing modes, seasoning names and seasoning weights; the user operation information comprises adding actions, cooking actions and cooking time corresponding to the actions; the user fire information comprises a fire gear and fire adjusting time corresponding to the fire gear; the user smoke control information comprises the time for opening the smoke machine, the time for closing the smoke machine, the smoke machine gear and the smoke machine adjusting time corresponding to the smoke machine gear.
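The information fields listed above can be represented in memory as sketched below. The class structure, field names, and sample values are assumptions for illustration; only the categories of information come from the source.

```python
from dataclasses import dataclass

@dataclass
class FoodMaterial:
    """One food material or seasoning entry (assumed schema)."""
    name: str
    weight_g: float
    processing: str     # food material processing mode, e.g. "diced"

@dataclass
class UserCookingInfo:
    materials: list     # user food material information (FoodMaterial entries)
    actions: list       # user operation info: (action, start_s, end_s) tuples
    fire: list          # user fire info: (fire gear, adjust_time_s) tuples
    smoke: list         # user smoke control info: (hood gear, adjust_time_s),
                        # plus hood on/off times in a fuller model

info = UserCookingInfo(
    materials=[FoodMaterial("tomato", 200, "diced")],
    actions=[("stir-fry", 310, 500)],
    fire=[("high", 0)],
    smoke=[("2", 0)],
)
```

A structure like this is what the first recipe generation module would consume to produce the text menu, and what the video processing module would read the action times from.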
Optionally, the video processing module 33 includes a video segmentation unit and a video clipping unit;
the video segmentation unit is used for carrying out segmentation processing on the cooking recorded video according to the time corresponding to each action in the first cooking action to obtain a first segmented video fragment set; and the video clipping unit is used for clipping each first divided video clip in the first divided video clip set to obtain a first clipped video clip set.
Optionally, the video segmentation unit includes:
the video segmentation subunit is used for cutting the cooking recorded video according to the time corresponding to each action in the first cooking actions to obtain a plurality of first segmentation video fragments;
The video sorting subunit is configured to sort the plurality of first divided video clips according to the start time corresponding to each action in the first cooking actions to obtain the first divided video clip set, where the time corresponding to each action in the first cooking actions includes that action's start time.
Optionally, the first recipe generation module 32 includes:
the text menu generating unit is used for generating a text menu based on the user food material information, the user operation information and the user firepower information;
the target picture generation unit is used for acquiring corresponding target pictures from a preset standard material library and/or cooking recorded video according to the food material information and the user operation information of the user, wherein the target pictures comprise at least one of unprocessed food material pictures, processed food material pictures, seasoning pictures, cooking action pictures, cooking process pictures and dish pictures;
the first menu generating unit is used for generating a first menu according to the text menu and the target picture.
Optionally, the interaction device based on the menu further comprises an identifier association module, wherein the identifier association module is used for adding an association identifier corresponding to the target picture in a first video clip in the first video clip set, and the association identifier is used for triggering the display of the corresponding target picture in an association display area of the association identifier in the process of playing the first video clip.
Optionally, the menu-based interaction device further comprises a second menu receiving module, configured to receive a second menu corresponding to the first menu and returned by the cooking device corresponding to the second user, where the second menu is obtained by modifying the first menu according to the menu editing operation on the first menu uploaded by the second user.
Optionally, the menu-based interaction device further comprises an instruction generation module, configured to generate a fire control program instruction associated with the first menu according to the user fire information and the user smoke control information, where the fire control program instruction is used for enabling the cooking device corresponding to the second user to automatically control the firepower and the smoke machine while the first menu guides the second user through cooking.
The interaction device based on the menu provided by the embodiment of the invention can execute the interaction method based on the menu provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example Four
Fig. 4 is a schematic structural diagram of an interaction device based on a menu according to a fourth embodiment of the present invention. The menu-based interaction device may be an electronic device intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches) and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as menu-based interaction methods.
In some embodiments, the recipe-based interaction method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the recipe-based interaction method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the recipe-based interaction method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. A menu-based interaction method, comprising:
acquiring a cooking recorded video and user cooking information of a first user in a cooking process;
generating a first menu according to the user cooking information; the first menu comprises a first cooking action and time corresponding to each action in the first cooking action;
dividing the cooking recorded video according to the time corresponding to each action in the first cooking action to obtain a first divided video clip set, and clipping each first divided video clip in the first divided video clip set to obtain a first clipped video clip set; the first clipped video clip set comprises a plurality of first clipped video clips, and the duration of each first clipped video clip is smaller than the duration of the corresponding first divided video clip.
2. The method of claim 1, wherein the user cooking information comprises:
user food material information, user operation information, user fire information and user smoke control information; wherein the user food material information comprises food material names, food material weights, food material processing modes, seasoning names and seasoning weights; the user operation information comprises adding actions, cooking actions and cooking time corresponding to the actions; the user fire information comprises a fire gear and a fire adjusting time corresponding to the fire gear; the user smoke control information comprises a smoke machine opening time, a smoke machine closing time, a smoke machine gear and a smoke machine adjusting time corresponding to the smoke machine gear.
3. The method of claim 2, wherein the segmenting the cooking recorded video according to the time corresponding to each of the first cooking actions to obtain a first set of segmented video segments comprises:
cutting the cooking recorded video according to the time corresponding to each action in the first cooking action to obtain a plurality of first segmentation video fragments;
and sequencing the plurality of first divided video clips according to the starting time corresponding to each action in the first cooking action to obtain a first divided video clip set, wherein the time corresponding to each action in the first cooking action comprises the starting time corresponding to each action in the first cooking action.
4. The method of claim 2, wherein generating a first recipe from the user cooking information comprises:
generating a text menu based on the user food material information, the user operation information and the user firepower information;
obtaining a corresponding target picture from a preset standard material library and/or the cooking recorded video according to the user food material information and the user operation information, wherein the target picture comprises at least one of an unprocessed food material picture, a processed food material picture, a seasoning picture, a cooking action picture, a cooking process picture and a dish picture;
and generating a first menu according to the text menu and the target picture.
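Claim 4's two inputs to the first menu, a text menu plus matching target pictures, can be sketched as below. The rendering format and the dictionary-backed "standard material library" are assumptions for illustration only:

```python
def build_text_recipe(materials, operations, heat_changes):
    """Render a plain-text menu from food material, operation and
    fire information."""
    lines = ["Ingredients:"]
    lines += [f"- {name}: {weight} g ({prep})" for name, weight, prep in materials]
    lines.append("Steps:")
    for i, (action, t) in enumerate(operations, 1):
        lines.append(f"{i}. {action} (at {t:.0f} s)")
    lines += [f"Set heat to gear {gear} at {t:.0f} s" for gear, t in heat_changes]
    return "\n".join(lines)

def pick_pictures(operations, picture_library):
    """Map each action to a target picture from a standard material
    library, or None when the library has no match."""
    return {action: picture_library.get(action) for action, _ in operations}

recipe_text = build_text_recipe(
    [("pork", 300, "sliced")], [("add oil", 5.0), ("stir-fry", 30.0)], [(3, 0.0)]
)
pictures = pick_pictures([("add oil", 5.0)], {"add oil": "oil.jpg"})
```

In the claimed method the pictures may also be frames extracted from the cooking recorded video; the sketch covers only the library lookup branch.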
5. The method as recited in claim 4, further comprising:
and adding, in a first edited video clip in the first edited video clip set, an association identifier corresponding to the target picture, wherein the association identifier is used to trigger display of the corresponding target picture in the association display area of the identifier while the first edited video clip is being played.
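The association identifiers of claim 5 behave like timed markers on a clip: when playback passes a marker, its picture is shown. A small sketch under assumed names (`add_markers`, `picture_at`, the two-second display `window`):

```python
def add_markers(clip_len, markers):
    """Keep only markers that fall inside the clip, sorted by time.
    Each marker is (time_in_clip, picture_id)."""
    return sorted((t, pid) for t, pid in markers if 0.0 <= t <= clip_len)

def picture_at(markers, playback_t, window=2.0):
    """Return the picture whose marker fired within `window` seconds
    before the current playback position, if any."""
    hits = [pid for t, pid in markers if 0.0 <= playback_t - t <= window]
    return hits[-1] if hits else None

markers = add_markers(30.0, [(5.0, "oil.jpg"), (40.0, "late.jpg"), (12.0, "garlic.jpg")])
```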
6. The method of any one of claims 1-5, further comprising:
receiving a second menu corresponding to the first menu and returned by a cooking device of a second user, wherein the second menu is obtained by modifying the first menu according to a menu editing operation for the first menu uploaded by the second user.
7. The method as recited in claim 4, further comprising:
generating fire control program instructions associated with the first menu according to the user fire information and the user smoke control information;
the fire control program instruction is used for enabling the cooking equipment corresponding to the second user to automatically control firepower and a smoke machine in the process of guiding the second user to cook by utilizing the first menu.
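The fire control program instruction of claim 7 is essentially a time-ordered schedule merged from the fire information and the smoke control information. A minimal sketch, with an assumed `(time, device, setting)` instruction format:

```python
def build_control_program(heat_changes, hood_events):
    """Merge fire-gear changes and hood events into one time-ordered
    instruction list that a cooking device could replay while guiding
    the second user through the first menu."""
    program = [(t, "heat", gear) for gear, t in heat_changes]
    program += [(t, "hood", state) for state, t in hood_events]
    return sorted(program, key=lambda step: step[0])

program = build_control_program(
    heat_changes=[(3, 0.0), (1, 120.0)],
    hood_events=[("on", 10.0), ("off", 300.0)],
)
```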
8. A menu-based interactive system, comprising:
the information acquisition module is used for acquiring a cooking recorded video and user cooking information of a first user in a cooking process;
the first menu generation module is used for generating a first menu according to the user cooking information; the first menu comprises a first cooking action and time corresponding to each action in the first cooking action;
the video processing module is used for dividing the cooking recorded video according to the time corresponding to each action in the first cooking action to obtain a first divided video clip set, and editing each first divided video clip in the first divided video clip set to obtain a first edited video clip set; the first edited video clip set comprises a plurality of first edited video clips, and the duration of each first edited video clip is shorter than the duration of the corresponding first divided video clip.
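Claim 8's three modules compose into a simple pipeline: acquire, generate, process. The sketch below only shows that wiring; the module callables are placeholders, not the patent's implementation:

```python
class RecipeInteractionSystem:
    """Composition of the three modules named in claim 8."""

    def __init__(self, acquire_info, generate_recipe, process_video):
        self.acquire_info = acquire_info        # information acquisition module
        self.generate_recipe = generate_recipe  # first menu generation module
        self.process_video = process_video      # video processing module

    def run(self):
        video, cooking_info = self.acquire_info()
        recipe = self.generate_recipe(cooking_info)
        clips = self.process_video(video, recipe)
        return recipe, clips

system = RecipeInteractionSystem(
    acquire_info=lambda: ("video.mp4", {"materials": ["pork"]}),
    generate_recipe=lambda info: {"actions": ["add oil", "stir-fry"]},
    process_video=lambda video, recipe: [
        f"{video}#clip{i}" for i, _ in enumerate(recipe["actions"])
    ],
)
recipe, clips = system.run()
```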
9. A menu-based interaction device, the menu-based interaction device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the menu-based interaction method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions, wherein the computer instructions, when executed by a processor, cause the processor to implement the menu-based interaction method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311278876.8A CN117319745B (en) | 2023-09-28 | 2023-09-28 | Menu generation method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117319745A true CN117319745A (en) | 2023-12-29 |
CN117319745B CN117319745B (en) | 2024-05-24 |
Family
ID=89287957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311278876.8A Active CN117319745B (en) | 2023-09-28 | 2023-09-28 | Menu generation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117319745B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103884035A (en) * | 2014-02-28 | 2014-06-25 | 四川长虹电器股份有限公司 | Range hood device capable of displaying recipes |
WO2020055029A1 (en) * | 2018-09-13 | 2020-03-19 | 삼성전자주식회사 | Cooking device and control method therefor |
CN111131855A (en) * | 2019-12-30 | 2020-05-08 | 上海纯米电子科技有限公司 | Cooking process sharing method and device |
CN111754118A (en) * | 2020-06-24 | 2020-10-09 | 重庆电子工程职业学院 | Intelligent menu optimization system based on self-adaptive learning |
CN111861405A (en) * | 2020-07-24 | 2020-10-30 | 上海连尚网络科技有限公司 | Method and device for generating interactive cooking tutorial |
US20210043108A1 (en) * | 2018-03-14 | 2021-02-11 | Hestan Smart Cooking, Inc. | Recipe conversion system |
CN112784640A (en) * | 2019-11-08 | 2021-05-11 | 张玮 | Menu making method and device and cooking machine |
CN113133680A (en) * | 2020-01-17 | 2021-07-20 | 佛山市顺德区美的电热电器制造有限公司 | Cooking equipment and cooking control method and device thereof |
CN113497971A (en) * | 2020-03-20 | 2021-10-12 | 珠海格力电器股份有限公司 | Method, device, storage medium and terminal for obtaining menu |
KR20220126597A (en) * | 2021-03-09 | 2022-09-16 | 박명재 | Method and apparatus for providing home cooking contents in multiple languages |
CN115708021A (en) * | 2021-08-04 | 2023-02-21 | 佛山市顺德区美的洗涤电器制造有限公司 | Recipe generation method and cooking method |
CN116033103A (en) * | 2023-01-16 | 2023-04-28 | 杭州老板电器股份有限公司 | Multimedia menu generation application method and system |
CN116708907A (en) * | 2022-02-28 | 2023-09-05 | 青岛海尔创新科技有限公司 | Menu generation method, apparatus, device, storage medium, and program product |
Non-Patent Citations (1)
Title |
---|
HUANG, Hongyi; ZHANG, Xiaolan: "Temperature-controlled recipes based on a linked hood-stove-pan control system and their self-learning method", 日用电器 (Household Appliances), no. 12, 25 December 2019 (2019-12-25) *
Also Published As
Publication number | Publication date |
---|---|
CN117319745B (en) | 2024-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104914898A (en) | Digital menu generating method and system | |
CN106503442A (en) | Menu recommendation method and device | |
CN104146586A (en) | Intelligent cooking device and working method of intelligent cooking device | |
CN107664959A (en) | Intelligent cooking system and its menu generation, cooking methods | |
CN110009955A (en) | A kind of intelligent tutoring cooking system and its method of automatic lead learning culinary art | |
CN111125340A (en) | Menu information adjusting method and device, storage medium and terminal | |
CN110604459A (en) | Totally-enclosed oil-smoke-free intelligent cooking robot and control system thereof | |
CN114245155A (en) | Live broadcast method and device and electronic equipment | |
CN112017754A (en) | Menu recommendation method and device, range hood and storage medium | |
US10796601B2 (en) | Information processing method, information processing system, and terminal | |
CN112902406A (en) | Parameter setting method, device and computer readable storage medium | |
CN117319745B (en) | Menu generation method, device, equipment and storage medium | |
CN113662446A (en) | Internet of things-based cooking assistance method and device, intelligent terminal and storage medium | |
CN118035429A (en) | Intelligent menu adaptation method, intelligent menu adaptation device, computer equipment and storage medium | |
KR102577604B1 (en) | Japanese bar menu recommendation system based on artificial intelligence | |
CN112420162A (en) | Intelligent recipe recommendation method and device and intelligent cabinet | |
CN111046259A (en) | Menu recording method and device, storage medium and terminal | |
CN117349523A (en) | Cooking real-time guiding method, device, equipment and storage medium | |
CN113588089A (en) | Infrared identification and visible light identification cooking control system | |
CN114115646B (en) | Dish making method and device | |
CN118733812A (en) | Custom menu generation method, system, equipment and storage medium | |
CN112506082B (en) | Method, device and equipment for adjusting electronic menu and computer readable storage medium | |
CN118573828A (en) | AR (augmented reality) -based glasses cooking auxiliary method and device, electronic equipment and storage medium | |
CN114269213B (en) | Information processing device, information processing method, cooking robot, cooking method, and cooking apparatus | |
CN115423652A (en) | Intelligent refrigerator communication method, device and system for assisting cooking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||