CN113497971A - Method, device, storage medium and terminal for obtaining menu - Google Patents
- Publication number
- CN113497971A (application number CN202010204135.5A)
- Authority
- CN
- China
- Prior art keywords
- cooking
- shared
- video
- instruction information
- dish
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 64
- 238000010411 cooking Methods 0.000 claims abstract description 163
- 239000012634 fragment Substances 0.000 claims abstract description 24
- 230000011218 segmentation Effects 0.000 claims abstract description 17
- 230000009471 action Effects 0.000 claims abstract description 14
- 235000013305 food Nutrition 0.000 claims description 51
- 239000000463 material Substances 0.000 claims description 48
- 235000011194 food seasoning agent Nutrition 0.000 claims description 14
- 230000006399 behavior Effects 0.000 claims description 13
- 238000004590 computer program Methods 0.000 claims description 8
- 239000000523 sample Substances 0.000 claims description 4
- 230000008569 process Effects 0.000 description 14
- 235000008534 Capsicum annuum var annuum Nutrition 0.000 description 13
- 240000008384 Capsicum annuum var. annuum Species 0.000 description 13
- 238000004891 communication Methods 0.000 description 7
- 230000006870 function Effects 0.000 description 5
- 238000013507 mapping Methods 0.000 description 4
- 238000005303 weighing Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000005236 sound signal Effects 0.000 description 3
- 239000004278 EU approved seasoning Substances 0.000 description 2
- 244000061456 Solanum tuberosum Species 0.000 description 2
- 235000002595 Solanum tuberosum Nutrition 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 150000003839 salts Chemical class 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 240000002234 Allium sativum Species 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 239000008157 edible vegetable oil Substances 0.000 description 1
- 235000004611 garlic Nutrition 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000003921 oil Substances 0.000 description 1
- 235000019198 oils Nutrition 0.000 description 1
- 235000002639 sodium chloride Nutrition 0.000 description 1
- 238000005406 washing Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/60—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to nutrition control, e.g. diets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4398—Processing of audio elementary streams involving reformatting operations of audio signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Primary Health Care (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Epidemiology (AREA)
- Public Health (AREA)
- General Physics & Mathematics (AREA)
- Nutrition Science (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The application relates to the technical field of intelligent kitchens, and in particular to a method, a device, a storage medium and a terminal for obtaining a recipe. The method comprises the following steps: obtaining the dish name, cooking data and cooking video of a dish to be shared, wherein the cooking data includes a temperature curve of the dish to be shared; segmenting the cooking video to obtain a plurality of segments; determining instruction information corresponding to each segment and overlaying it onto that segment to obtain a video to be shared, wherein the instruction information instructs a user to perform the corresponding action; and obtaining a recipe for the dish to be shared from the dish name, the cooking data and the video to be shared. This solves the problem in the prior art that a user cannot be helped to accurately control cooking time and cooking temperature.
Description
Technical Field
The application relates to the technical field of intelligent kitchens, and in particular to a method, a device, a storage medium and a terminal for obtaining a recipe.
Background
With the continuous development of internet technology, the ways in which users share information have gradually diversified: information can be shared with other users as text, pictures, videos, and the like.
At present, recipes are still shared as text, pictures or videos, from which a user can learn information such as the cooking temperature and the ingredients needed when cooking a certain dish. However, due to the limitations of text, pictures and videos, users with whom recipes are shared cannot accurately control the cooking time and cooking temperature, so the cooking results are poor.
Therefore, how to provide a recipe that can assist a user in accurately controlling cooking data such as cooking time and cooking temperature is a problem to be solved.
Disclosure of Invention
To solve the above problem, the present application provides a method, a device, a storage medium and a terminal for obtaining a recipe, addressing the problem that a user cannot accurately control cooking time and cooking temperature.
In a first aspect, the present application provides a method of obtaining a recipe, the method comprising:
the method comprises the steps of obtaining a dish name, cooking data and a cooking video of a dish to be shared, wherein the cooking data comprises a temperature curve of the dish to be shared;
carrying out segmentation processing on the cooking video to obtain a plurality of segments;
determining instruction information corresponding to each segment, and overlaying the instruction information onto that segment to obtain a video to be shared, wherein the instruction information instructs a user to perform the corresponding action;
and obtaining a recipe for the dish to be shared from the dish name, the cooking data and the video to be shared.
According to an embodiment of the present application, optionally, in the method, segmenting the cooking video to obtain a plurality of segments includes:
detecting key image frames in the cooking video, wherein a key image frame is an image frame in which the type of food material or seasoning has changed compared with the preceding image frame;
and according to the time nodes of the detected key image frames in the cooking video, carrying out segmentation processing on the cooking video to obtain a plurality of segments.
According to an embodiment of the present application, optionally, in the method, determining instruction information corresponding to each of the segments includes:
identifying a key image frame in the cooking video and its preceding image frame, determining the newly added food material or seasoning in the key image frame, determining the instruction information of the key image frame from that newly added food material or seasoning, and using it as the instruction information of the segment containing the key image frame.
According to an embodiment of the present application, optionally, in the method, determining instruction information corresponding to each of the segments includes:
extracting voice information input for each of the segments;
and identifying the voice information of each segment to obtain text information corresponding to the voice information, and taking the text information as instruction information corresponding to the segment.
According to an embodiment of the application, optionally, in the above method, the obtaining of the temperature curve includes:
acquiring at least one cooking temperature detected by an infrared sensing probe and time corresponding to each cooking temperature;
and generating a temperature curve according to the at least one cooking temperature and the time corresponding to each cooking temperature.
According to an embodiment of the application, optionally, in the method, obtaining the recipe of the dish to be shared according to the name of the dish, the cooking data, and the video to be shared includes:
identifying each frame of the video to be shared, determining the segments in which the cooking device remains switched on, and overlaying the temperature curve onto those segments to obtain a new video to be shared;
and obtaining a recipe for the dish to be shared from the dish name, the cooking data and the new video to be shared.
According to an embodiment of the application, optionally, in the method, obtaining the recipe of the dish to be shared according to the name of the dish, the cooking data, and the video to be shared includes:
receiving a first to-be-deleted frame image and a second to-be-deleted frame image selected by the user from the video to be shared;
deleting the image frames between the first and the second to-be-deleted frame images to obtain a new video to be shared;
and obtaining a recipe for the dish to be shared from the dish name, the cooking data and the new video to be shared.
In a second aspect, the present application provides an apparatus for obtaining a recipe, the apparatus comprising:
the acquisition module is used for acquiring the name of a dish to be shared, cooking data and a cooking video, wherein the cooking data comprises a temperature curve of the dish to be shared;
the segmentation module is used for carrying out segmentation processing on the cooking video to obtain a plurality of segments;
the processing module is used for determining the instruction information corresponding to each segment and overlaying it onto that segment to obtain a video to be shared, wherein the instruction information instructs a user to perform the corresponding action;
and the determining module is used for obtaining a recipe for the dish to be shared from the dish name, the cooking data and the video to be shared.
In a third aspect, the present application provides a storage medium storing a computer program which, when executed by one or more processors, implements a method as described above.
In a fourth aspect, the present application provides a terminal comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the above method.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
the application provides a method, a device, a storage medium and a terminal for obtaining a menu, wherein the method comprises the following steps: the method comprises the steps of obtaining a dish name, cooking data and a cooking video of a dish to be shared, wherein the cooking data comprises a temperature curve of the dish to be shared; carrying out segmentation processing on the cooking video to obtain a plurality of segments; determining instruction information corresponding to each fragment, and overlaying the instruction information corresponding to each fragment to the fragment to obtain a video to be shared, wherein the instruction information is used for indicating a user to execute a behavior action corresponding to the instruction information; and obtaining a menu of the dishes to be shared according to the dish names, the cooking data and the videos to be shared, so that the problem that a user cannot accurately control the cooking time and the cooking temperature is solved.
Drawings
The present application will be described in more detail below on the basis of embodiments and with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a method for obtaining a recipe according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a temperature curve according to an embodiment of the present application.
Fig. 3 is a connection block diagram of an apparatus for obtaining recipes according to the second embodiment of the present application.
Detailed Description
The following detailed description, taken together with the accompanying drawings and embodiments, explains how the technical means of the application are applied to solve the technical problems and achieve the corresponding technical effects. The embodiments of the present application, and the features within them, can be combined with each other as long as they do not conflict, and the resulting technical solutions all fall within the scope of protection of the present application.
Example one
Referring to fig. 1, the present application provides a method for obtaining a recipe that is applicable to a terminal such as a mobile phone or a tablet computer, wherein the terminal performs steps S110 to S140.
Step S110: acquiring the dish name, cooking data and cooking video of the dish to be shared.
In this embodiment, the user may record his or her cooking process in order to share it with other users. The cooking data includes a temperature curve of the dish to be shared, i.e. the curve of temperature points the user needs to reach at each stage of cooking the dish. The temperature curve is obtained as follows: first, at least one cooking temperature detected by an infrared sensing probe is acquired, together with the time corresponding to each cooking temperature; then, the temperature curve is generated from these cooking temperatures and their corresponding times. For example, the change of cooking temperature detected by the infrared sensing probe within 5 minutes is shown in Fig. 2, where the abscissa is time in minutes and the ordinate is the cooking temperature in degrees Celsius; the temperature point the cooking device needs to reach at each time point within the five minutes can be read from Fig. 2.
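The patent gives no implementation of this step; the following Python sketch (all names illustrative) shows one way the probe readings could be collected into a time-sorted temperature curve like the one in Fig. 2:

```python
def build_temperature_curve(readings):
    """Build a temperature curve from infrared-probe readings.

    `readings` is an iterable of (time_minutes, temp_celsius) pairs as
    they might be reported by the probe, possibly out of order. The
    result is the list of points sorted by time; if the same timestamp
    occurs twice, the later reading wins.
    """
    points = sorted(readings, key=lambda p: p[0])
    curve = []
    for t, c in points:
        if curve and curve[-1][0] == t:
            curve[-1] = (t, c)  # replace duplicate timestamp
        else:
            curve.append((t, c))
    return curve

print(build_temperature_curve([(3, 180), (0, 25), (1, 120), (1, 125), (5, 160)]))
# -> [(0, 25), (1, 125), (3, 180), (5, 160)]
```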
The cooking data may further include a food material weight mapping table, which records the name of each food material needed to cook the dish to be shared and the corresponding weight. Considering that food materials are usually cut on a cutting board during preparation, the weight of each food material can be obtained by linking the terminal with a smart cutting board: the smart cutting board weighs each food material and records the order of the weighings, marking each weighing record with a number; the name of each food material on the cutting board is determined by image recognition and marked with a number in the same way; weighing records and food material names that carry the same mark are then matched, and these correspondences form the food material weight mapping table.
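A minimal sketch of building the mapping table, assuming the smart cutting board reports weighings keyed by their numeric marks and image recognition reports names keyed the same way (both data shapes are hypothetical, not specified by the patent):

```python
def build_weight_table(weighings, recognized_names):
    """Join weighing records and recognized food-material names by mark.

    `weighings` maps a numeric mark (the recording order) to grams;
    `recognized_names` maps the same marks to names obtained by image
    recognition. Records with the same mark are matched, as described
    for the smart cutting board.
    """
    table = {}
    for mark, grams in weighings.items():
        name = recognized_names.get(mark)
        if name is not None:  # skip weighings with no recognized name
            table[name] = grams
    return table

weighings = {1: 300, 2: 150, 3: 20}
names = {1: "potato", 2: "green pepper", 3: "garlic"}
print(build_weight_table(weighings, names))
# -> {'potato': 300, 'green pepper': 150, 'garlic': 20}
```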
During cooking, the user records the whole process as a video, forming the cooking video. Since neither the user nor other users can accurately control the cooking time or the moments at which food materials are added merely by watching the video, the video can be processed further, for example by adding prompt information, to help users complete the cooking better and improve the user experience.
It can be understood that the terminal may have various functions depending on the application scenario. For example, the terminal may have an editing function, so that the user inputs the dish name through the terminal and the terminal thereby obtains the name of the dish to be shared; it may have a speech recognition function, so that the user speaks the names of the food materials needed for the dish and the terminal recognizes the speech to obtain those names; and it may have a video recording function, so that the cooking video is shot with the terminal's own camera.
Step S120: segmenting the cooking video to obtain a plurality of segments.
In this embodiment, the cooking video contains a plurality of cooking steps, for example washing the food materials, adding cooking oil, adding salt, and so on. To add prompt information for each step into the cooking video, the video first needs to be segmented into a plurality of segments, where different segments represent different cooking steps.
Step S120 specifically includes steps S121 and S122.
S121: detecting key image frames in the cooking video.
S122: segmenting the cooking video into a plurality of segments according to the time nodes of the detected key image frames.
In this embodiment, a key image frame is an image frame in which the type of food material or seasoning has changed compared with the preceding image frame. When a food material or seasoning that was absent from the previous frame appears in some frame of the cooking video, that frame can be understood as the start of a new cooking step that processes the new food material or seasoning. Taking the time node of each such frame in the cooking video as a segmentation boundary, the cooking steps in the video can be separated into a plurality of segments, so that corresponding prompt information can be added to each segment. For example, suppose 7 key image frames are detected in the cooking video, in which the newly appearing ingredients are green pepper, potato, oil, garlic, green pepper, potato and salt, at time nodes 5 s, 30 s, 70 s, 75 s, 80 s, 100 s and 210 s respectively; the cooking video is then divided into 8 segments at these time nodes.
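As an illustrative sketch (not part of the patent), the time nodes of the detected key image frames can be turned into segment boundaries, reproducing the 7-key-frame, 8-segment example above:

```python
def segment_by_key_frames(video_length_s, key_frame_times_s):
    """Split a video timeline at key-frame time nodes.

    Returns (start, end) pairs in seconds. N key frames yield N + 1
    segments: the span before the first key frame, the spans between
    consecutive key frames, and the span after the last one.
    """
    boundaries = [0] + sorted(key_frame_times_s) + [video_length_s]
    return list(zip(boundaries, boundaries[1:]))

# Video length of 240 s is assumed for the example.
segments = segment_by_key_frames(240, [5, 30, 70, 75, 80, 100, 210])
print(len(segments))  # -> 8
print(segments[0])    # -> (0, 5)
```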
Step S130: determining the instruction information corresponding to each segment, and overlaying it onto that segment to obtain the video to be shared.
In this embodiment, the instruction information (i.e. the prompt information) instructs the user to perform the corresponding action. By determining the instruction information for each segment and recording it in that segment, the video to be shared is obtained, ensuring that a user watching it can determine, from the instructions appearing in the video, the cooking time of each step, the moments at which food materials or seasonings go into the pot, food-material processing information, and so on. The instruction information for a segment is superimposed on every frame of that segment, so the user can act on it at any point during playback; this avoids the case where the user misses the single frame carrying an instruction and therefore fails to add the food material or seasoning at the indicated moment, which would affect the taste of the dish.
It can be understood that, after the instruction information has been superimposed on each segment, the segments are spliced in the time order given by their time nodes to obtain the video to be shared.
The instruction information may be obtained by image recognition or by speech recognition. With image recognition, determining the instruction information for each segment comprises: identifying a key image frame in the cooking video and its preceding image frame, determining the newly added food material or seasoning in the key image frame, deriving the instruction information of the key image frame from it, and using that as the instruction information of the segment containing the key image frame. For example, if comparing a key image frame with its preceding frame shows that the newly added food material is green pepper, and the recognition result further indicates that the green pepper is added to the pan, the instruction information "add green pepper" is determined and added to the segment containing the key frame, reminding the user to perform the corresponding action, namely putting green pepper into the pan.
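A minimal sketch of this image-recognition step, assuming a hypothetical recognizer that returns the set of ingredients visible in each frame (the recognizer itself and the instruction wording are illustrative assumptions):

```python
def instruction_for_key_frame(prev_ingredients, key_ingredients):
    """Derive instruction text by diffing two frames' recognition results.

    Each argument is the set of food materials / seasonings detected in
    a frame; the ingredients present in the key frame but not in the
    preceding frame determine the prompt overlaid on the segment that
    contains the key frame.
    """
    added = key_ingredients - prev_ingredients
    if not added:
        return None  # no change: not actually a key frame
    return "Add " + ", ".join(sorted(added))

prev = {"oil"}
key = {"oil", "green pepper"}
print(instruction_for_key_frame(prev, key))  # -> Add green pepper
```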
With speech recognition, determining the instruction information for each segment comprises: extracting the voice input for each segment, recognizing it to obtain the corresponding text, and using that text as the segment's instruction information. For example, if the voice input for the third segment is "add green pepper", recognizing it yields the text "add green pepper", which is taken as the instruction information of the third segment and added to it, reminding the user to perform the corresponding action during that segment, namely putting green pepper into the pan.
Step S140: obtaining a recipe for the dish to be shared from the dish name, the cooking data and the video to be shared.
In this embodiment, to make it convenient for the user and other users to cook with the recipe, the recipe can be stored in a local database and uploaded to a cloud database. Saving it locally means that when the user cooks the same dish again, the corresponding recipe can be retrieved and the cooking completed by following it, thereby helping the user accurately control the cooking time and cooking temperature. For example, the instruction information in the recipe's video can be viewed to make clear at which moments the various seasonings and food materials are added, and the cooking device can be controlled according to the temperature curve in the cooking data so that it reaches the corresponding temperature point at each cooking stage.
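The patent does not specify how the cooking device follows the stored curve; as one hedged sketch (names illustrative), the target temperature at any moment can be interpolated linearly between the curve's recorded points:

```python
def setpoint_at(curve, minutes):
    """Interpolate the target temperature at a given time.

    `curve` is a time-sorted list of (minutes, celsius) points, e.g.
    the temperature curve stored in the recipe. Before the first point
    or after the last, the nearest recorded temperature is held; in
    between, values are interpolated linearly. A real device would
    drive its heater toward this value at each instant.
    """
    if minutes <= curve[0][0]:
        return curve[0][1]
    if minutes >= curve[-1][0]:
        return curve[-1][1]
    for (t0, c0), (t1, c1) in zip(curve, curve[1:]):
        if t0 <= minutes <= t1:
            return c0 + (c1 - c0) * (minutes - t0) / (t1 - t0)

curve = [(0, 25), (1, 120), (3, 180), (5, 160)]
print(setpoint_at(curve, 2.0))  # midway between 120 and 180 -> 150.0
```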
The recipe is uploaded to the cloud database so that other users can obtain the corresponding recipe when cooking the same dish and complete the cooking operation by following it. When other users search the cloud database, a plurality of recipes related to the main food material can first be retrieved by dish name; the recipe whose food material weights differ least from the food materials the user has prepared is then selected from them as the target recipe; finally, the cooking operation is completed according to the target recipe, thereby assisting the user in accurately controlling the cooking time and the cooking temperature.
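The cloud-side selection step can be sketched as follows. The field names (`dish_name`, `main_weight_g`) are illustrative assumptions, not a specified schema: the point is filtering by name, then minimizing the weight difference.

```python
# Sketch: pick, from cloud search results for a dish name, the recipe
# whose main-food-material weight is closest to what the user prepared.

def pick_target_recipe(recipes, dish_name, prepared_weight_g):
    candidates = [r for r in recipes if dish_name in r["dish_name"]]
    if not candidates:
        return None
    return min(candidates,
               key=lambda r: abs(r["main_weight_g"] - prepared_weight_g))

cloud = [
    {"dish_name": "stir-fried green pepper beef", "main_weight_g": 300},
    {"dish_name": "stir-fried green pepper beef", "main_weight_g": 480},
    {"dish_name": "braised pork", "main_weight_g": 500},
]
best = pick_target_recipe(cloud, "green pepper beef", 450)
print(best["main_weight_g"])  # 480 differs least from 450
```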
It can be understood that, in order to let the user and other users clearly determine the amount of each food material when watching the video to be shared, the name of the food material corresponding to a segment and the amount of that food material can be added to each segment. This ensures that the user and other users know the weight of each food material when cooking, improving the success rate of reproducing the recipe.
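Annotating segments with material names and weighed amounts can be sketched as below. The idea that the amounts come from a connected kitchen scale is an assumption for illustration; the patent only requires that name and amount be added to the segment.

```python
# Sketch: annotate each segment with the food material it introduces and
# its weighed amount, so viewers know the exact quantities.

def annotate_segments(segments, additions):
    """`additions` maps segment id -> (food material name, weight in grams),
    e.g. as reported by a kitchen scale (an assumption in this sketch)."""
    for seg in segments:
        if seg["segment_id"] in additions:
            name, grams = additions[seg["segment_id"]]
            seg["overlay"] = f"{name}: {grams} g"
    return segments

segs = [{"segment_id": i} for i in range(1, 4)]
annotate_segments(segs, {1: ("beef", 450), 3: ("green pepper", 80)})
print(segs[2]["overlay"])  # "green pepper: 80 g"
```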
In the present embodiment, the data included in the obtained recipe are the dish name, the cooking data, and the video to be shared. The dish name helps a local user or other users quickly find a plurality of recipes related to the food materials they have prepared; to then find the recipe whose weights differ least from their own food materials, the food material weight mapping table in the cooking data can be consulted, so that the user can conveniently select, as a reference recipe, the one related to their prepared food materials with the smallest weight difference, improving the success rate of reproducing the recipe. The temperature curve included in the cooking data allows the cooking temperature of the cooking device to be synchronized automatically when a local user or other users cook the dish, without manually controlling the device, ensuring that the cooking device reaches the corresponding temperature point in each cooking stage and thereby accurately controlling the cooking temperature.
It can be understood that, in order to enrich the cooking information provided by the video to be shared, after the instruction information corresponding to each segment is superimposed on the segment to obtain the video to be shared, temperature information can additionally be added to the segments in which the cooking device remains in the on state. Specifically, each frame of the video to be shared is first identified to determine the segments in which the cooking device remains on; the temperature curve is then superimposed on those segments to obtain a new video to be shared; finally, the recipe of the dish to be shared is obtained according to the dish name, the cooking data, and the new video to be shared.
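Finding the spans where the device stays on can be sketched as follows. The per-frame on/off flags are assumed here to come from the per-frame image recognition described above; how frames are classified is outside this sketch.

```python
# Sketch: collapse per-frame on/off flags into contiguous (start, end)
# frame ranges, so the temperature curve is superimposed only there.

def on_intervals(frame_is_on):
    """Return inclusive (start, end) index ranges where the device is on."""
    intervals, start = [], None
    for i, on in enumerate(frame_is_on):
        if on and start is None:
            start = i                      # a new "on" run begins
        elif not on and start is not None:
            intervals.append((start, i - 1))  # the run just ended
            start = None
    if start is not None:                  # video ends while still on
        intervals.append((start, len(frame_is_on) - 1))
    return intervals

flags = [False, True, True, True, False, False, True, True]
print(on_intervals(flags))  # [(1, 3), (6, 7)]
```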
It can be further understood that, after the video to be shared is obtained, it can be trimmed to meet the user's editing requirements. Specifically, a first frame to be deleted and a second frame to be deleted, selected by the user for the video to be shared, are first received; the image frames between the first frame to be deleted and the second frame to be deleted are then deleted to obtain a new video to be shared; finally, the recipe of the dish to be shared is obtained according to the dish name, the cooking data, and the new video to be shared. For example, suppose the user only wants to keep the dish-cooking portion of the video to be shared but not the food-preparation portion: after the video to be shared is obtained, the user selects the first frame to be deleted and the second frame to be deleted, and the image frames between them are deleted to obtain a new video to be shared. It can be understood that, during deletion, the first frame to be deleted, the second frame to be deleted, and the image frames between them may also be deleted together. This embodiment places no limitation on this.
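The trimming step can be sketched as simple sequence surgery over the frame list, with a flag covering the two variants described (keeping or also deleting the boundary frames):

```python
# Sketch: remove the user-selected span from the frame sequence. The two
# boundary frames can be kept, or deleted together with the frames
# between them, as the embodiment leaves open.

def trim_frames(frames, first_idx, second_idx, keep_boundaries=True):
    lo, hi = sorted((first_idx, second_idx))
    if keep_boundaries:
        return frames[:lo + 1] + frames[hi:]     # drop strictly-between frames
    return frames[:lo] + frames[hi + 1:]         # drop the boundaries as well

frames = list(range(10))                          # stand-in for image frames
print(trim_frames(frames, 2, 7))                  # [0, 1, 2, 7, 8, 9]
print(trim_frames(frames, 2, 7, keep_boundaries=False))  # [0, 1, 8, 9]
```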
Example two
Referring to fig. 3, the present application provides an apparatus for obtaining a recipe, the apparatus comprising:
The acquisition module 201 is configured to acquire a dish name, cooking data, and a cooking video of a dish to be shared, where the cooking data includes a temperature curve of the dish to be shared.
An implementation principle of the obtaining module 201 is similar to that of the step S110 in the first embodiment, and therefore, the implementation principle of the obtaining module 201 may specifically refer to the first embodiment, which is not described herein again.
And a segmenting module 202, configured to segment the cooking video to obtain multiple segments.
The implementation principle of the segmentation module 202 is similar to that of the step S120 in the first embodiment, and therefore, the implementation principle of the segmentation module 202 may specifically refer to the first embodiment, which is not described herein again.
The processing module 203 is configured to determine instruction information corresponding to each segment, and superimpose the instruction information corresponding to each segment onto the segment to obtain a video to be shared, where the instruction information is used to instruct a user to execute a behavior action corresponding to the instruction information.
The implementation principle of the processing module 203 is similar to that of the step S130 in the first embodiment, and therefore, the implementation principle of the processing module 203 may specifically refer to the first embodiment, which is not described herein again.
And the determining module 204 is configured to obtain a recipe of the dish to be shared according to the dish name, the cooking data, and the video to be shared.
The implementation principle of the determining module 204 is similar to that of the step S140 in the first embodiment, and therefore, the implementation principle of the determining module 204 may specifically refer to the first embodiment, which is not described herein again.
EXAMPLE III
The present embodiment further provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, or an application store, on which a computer program is stored. When the computer program is executed by a processor, all or part of the method steps in the first embodiment are implemented; for the specific implementation process, reference may be made to the first embodiment, which is not repeated here.
Example four
The embodiment of the application provides a terminal, which can be a mobile phone, a tablet computer, or the like, and which includes a memory and a processor. The memory stores a computer program that, when executed by the processor, implements the method for obtaining a recipe described in the first embodiment. It is to be understood that the terminal may also include multimedia components and communication components.
The processor is used for executing all or part of the steps in the method for obtaining a recipe of the first embodiment. The memory is used to store various types of data, which may include, for example, instructions for any application or method in the terminal, as well as application-related data.
The Processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and is configured to execute the method for obtaining a recipe in the first embodiment.
The Memory may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The multimedia component may include a screen, which may be a touch screen, and an audio component for outputting and/or inputting an audio signal. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in a memory or transmitted through a communication component.
The communication component is used for wired or wireless communication between the terminal and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of these; accordingly, the communication component may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In summary, the present application provides a method, an apparatus, a storage medium, and a terminal for obtaining a recipe. The method comprises: obtaining a dish name, cooking data, and a cooking video of a dish to be shared, wherein the cooking data comprises a temperature curve of the dish to be shared; performing segmentation processing on the cooking video to obtain a plurality of segments; determining instruction information corresponding to each segment, and superimposing the instruction information corresponding to each segment onto the segment to obtain a video to be shared, wherein the instruction information is used for instructing a user to execute a behavior action corresponding to the instruction information; and obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the video to be shared. This solves the problem in the prior art that users cannot be assisted in accurately controlling the cooking time and the cooking temperature.
It is further understood that, to facilitate cooking with the recipe by the user and other users, the recipe may be stored in a local database and uploaded to a cloud database.
Further, the name of the food material corresponding to a segment and the amount of that food material can be added to each segment, so that the user and other users know the weight of each food material when watching the video to be shared, improving the success rate of reproducing the recipe.
It can be further understood that, after the video to be shared is obtained, it can be trimmed to meet the user's editing requirements.
In the embodiments provided in the present application, it should be understood that the disclosed method can be implemented in other ways. The above-described method embodiments are merely illustrative.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Although the embodiments disclosed in the present application are described above, the descriptions are only for the convenience of understanding the present application, and are not intended to limit the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.
Claims (10)
1. A method of obtaining a recipe, the method comprising:
obtaining a dish name, cooking data, and a cooking video of a dish to be shared, wherein the cooking data comprises a temperature curve of the dish to be shared;
performing segmentation processing on the cooking video to obtain a plurality of segments;
determining instruction information corresponding to each segment, and superimposing the instruction information corresponding to each segment onto the segment to obtain a video to be shared, wherein the instruction information is used for instructing a user to execute a behavior action corresponding to the instruction information;
and obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the video to be shared.
2. The method of claim 1, wherein segmenting the cooking video into a plurality of segments comprises:
detecting a key image frame in the cooking video, wherein the key image frame is an image frame in which the type of food material or the type of seasoning has changed compared with its previous image frame;
and according to the time nodes of the detected key image frames in the cooking video, carrying out segmentation processing on the cooking video to obtain a plurality of segments.
3. The method of claim 2, wherein determining instruction information for each of the segments comprises:
respectively identifying a key image frame in the cooking video and a previous image frame of the key image frame, determining a newly added food material or seasoning in the key image frame, determining instruction information of the key image frame according to the newly added food material or seasoning, and taking the instruction information as instruction information of a segment comprising the key image frame.
4. The method of claim 2, wherein determining instruction information for each of the segments comprises:
extracting voice information input for each of the segments;
and identifying the voice information of each segment to obtain text information corresponding to the voice information, and taking the text information as instruction information corresponding to the segment.
5. The method of claim 1, wherein the temperature curve is obtained by:
acquiring at least one cooking temperature detected by an infrared sensing probe and time corresponding to each cooking temperature;
and generating a temperature curve according to the at least one cooking temperature and the time corresponding to each cooking temperature.
6. The method of claim 1, wherein obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the video to be shared comprises:
identifying each frame of the video to be shared, determining the segments of the video to be shared in which the cooking device remains in the on state, and superimposing the temperature curve on those segments to obtain a new video to be shared;
and obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the new video to be shared.
7. The method of claim 1, wherein obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the video to be shared comprises:
receiving a first frame to be deleted and a second frame to be deleted, selected by a user for the video to be shared;
deleting the image frames between the first frame to be deleted and the second frame to be deleted to obtain a new video to be shared;
and obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the new video to be shared.
8. An apparatus for obtaining a recipe, the apparatus comprising:
the acquisition module is used for acquiring a dish name, cooking data, and a cooking video of a dish to be shared, wherein the cooking data comprises a temperature curve of the dish to be shared;
the segmentation module is used for performing segmentation processing on the cooking video to obtain a plurality of segments;
the processing module is used for determining instruction information corresponding to each segment and superimposing the instruction information corresponding to each segment onto the segment to obtain a video to be shared, wherein the instruction information is used for instructing a user to execute a behavior action corresponding to the instruction information;
and the determining module is used for obtaining a recipe of the dish to be shared according to the dish name, the cooking data, and the video to be shared.
9. A storage medium, characterized in that the storage medium stores a computer program which, when executed by one or more processors, implements the method according to any one of claims 1-7.
10. A terminal, characterized in that it comprises a memory and a processor, said memory having stored thereon a computer program which, when executed by said processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010204135.5A CN113497971B (en) | 2020-03-20 | 2020-03-20 | Method, device, storage medium and terminal for obtaining menu |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113497971A true CN113497971A (en) | 2021-10-12 |
CN113497971B CN113497971B (en) | 2023-01-20 |
Family
ID=77993164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010204135.5A Active CN113497971B (en) | 2020-03-20 | 2020-03-20 | Method, device, storage medium and terminal for obtaining menu |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113497971B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117319745A (en) * | 2023-09-28 | 2023-12-29 | 火星人厨具股份有限公司 | Interaction method, device, equipment and storage medium based on menu |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104185088A (en) * | 2014-03-03 | 2014-12-03 | 无锡天脉聚源传媒科技有限公司 | Video processing method and device |
CN207230670U (en) * | 2017-09-11 | 2018-04-13 | 广东万家乐燃气具有限公司 | Range hood control device and culinary art control system |
US20190139444A1 (en) * | 2017-11-09 | 2019-05-09 | ALK Ventures LLC | Interactive Cooking Application |
CN110222720A (en) * | 2019-05-10 | 2019-09-10 | 九阳股份有限公司 | A kind of cooking equipment with short video acquisition function |
Non-Patent Citations (1)
Title |
---|
马壮实HERA: "这个【煎牛排】教程能解决你日常99%牛肉问题" ("This pan-seared steak tutorial solves 99% of your everyday beef problems"), Bilibili *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117319745A (en) * | 2023-09-28 | 2023-12-29 | 火星人厨具股份有限公司 | Interaction method, device, equipment and storage medium based on menu |
CN117319745B (en) * | 2023-09-28 | 2024-05-24 | 火星人厨具股份有限公司 | Menu generation method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113497971B (en) | 2023-01-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||