CN110547665B - Cooking state determining method and device, storage medium and server


Info

Publication number
CN110547665B
CN110547665B (application CN201810571999.3A)
Authority
CN
China
Prior art keywords
cooking
sound
state
voiceprint
voice control
Prior art date
Legal status
Active
Application number
CN201810571999.3A
Other languages
Chinese (zh)
Other versions
CN110547665A (en)
Inventor
孟德龙
陈炽锵
曾成鑫
尹二强
杨应彬
杜放
Current Assignee
Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Original Assignee
Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Priority date
Filing date
Publication date
Application filed by Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd filed Critical Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Priority to CN201810571999.3A priority Critical patent/CN110547665B/en
Publication of CN110547665A publication Critical patent/CN110547665A/en
Application granted granted Critical
Publication of CN110547665B publication Critical patent/CN110547665B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47J: KITCHEN EQUIPMENT; COFFEE MILLS; SPICE MILLS; APPARATUS FOR MAKING BEVERAGES
    • A47J27/00: Cooking-vessels
    • A47J27/002: Construction of cooking-vessels; Methods or processes of manufacturing specially adapted for cooking-vessels
    • A47J36/00: Parts, details or accessories of cooking-vessels
    • A47J36/32: Time-controlled igniting mechanisms or alarm devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Food Science & Technology (AREA)
  • Manufacturing & Machinery (AREA)
  • Electric Ovens (AREA)

Abstract

The invention discloses a method for determining a cooking state, which comprises the following steps: receiving a cooking sound sent by voice control equipment; extracting voiceprint features of the cooking sound, performing pattern recognition according to the voiceprint features, and determining a corresponding cooking state according to the pattern recognition result; and generating a prompt instruction according to the cooking state, and sending the prompt instruction to the voice control equipment and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal. The invention also discloses a cooking state determining device, a computer readable storage medium and a server.

Description

Cooking state determining method and device, storage medium and server
Technical Field
The invention relates to an intelligent household appliance technology, in particular to a method and a device for determining a cooking state, a computer readable storage medium and a server.
Background
As is well known, cooking a rich and delicious dish requires accurate control of the duration and degree of heating; the kitchen adage "three parts preparation, seven parts stove" emphasizes the importance of heat control. Properly controlling and applying the heat allows the doneness of a dish to be mastered accurately while minimizing the loss of its nutrients. In short, heat control is the most important part of cooking, and also the hardest to put into words.
Existing cooking equipment is paired with a corresponding cloud recipe, and control instructions are issued to the equipment according to the cloud recipe to adjust fire power and time so as to cook the dish. However, even when the user prepares the corresponding food materials according to the cloud recipe, a satisfactory dish often cannot be cooked, for the following reason:
existing cooking equipment acquires its temperature through a conventional temperature sensor, which cannot detect the wind around the equipment or slight changes in ambient temperature, so the accuracy of the acquired temperature suffers; and because fire power and duration are controlled according to the acquired temperature, the heat control required by the cloud recipe cannot be faithfully reproduced.
Disclosure of Invention
In view of the above, the present invention provides a cooking state determining method, apparatus, computer readable storage medium and server.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the embodiment of the invention provides a method for determining a cooking state, which comprises the following steps:
receiving a cooking sound sent by voice control equipment;
extracting voiceprint characteristics of the cooking sound, performing pattern recognition according to the voiceprint characteristics, and determining a corresponding cooking state according to the pattern recognition result;
and generating a prompt instruction according to the cooking state, and sending the prompt instruction to voice control equipment and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal.
In the above aspect, the extracting voiceprint features of the cooking sound includes:
and acquiring voiceprint information of the cooking sound, performing noise reduction processing on the voiceprint information, and extracting voiceprint characteristics from the voiceprint information subjected to the noise reduction processing.
In the foregoing solution, the performing pattern recognition according to the voiceprint feature includes:
acquiring cooking operation of cooking equipment, and determining a cooking state corresponding to the cooking operation;
acquiring an acoustic model corresponding to the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
In the foregoing solution, the determining the corresponding cooking state according to the result of the pattern recognition includes:
judging whether the matching value exceeds a preset threshold; and when the matching value is determined to exceed the preset threshold, determining that the cooking equipment has reached the cooking state corresponding to the cooking operation.
In the above scheme, the method further comprises:
and in the cooking process, acquiring the corresponding relation between at least one group of cooking sounds and the cooking state, and updating the acoustic model corresponding to the cooking state according to the corresponding relation between the at least one group of cooking sounds and the cooking state.
The embodiment of the invention also provides a device for determining the cooking state, which comprises: the device comprises a receiving module, a processing module and a sending module; wherein,
the receiving module is used for receiving the cooking sound sent by the voice control equipment;
the processing module is used for extracting the voiceprint characteristics of the cooking sound, performing pattern recognition according to the voiceprint characteristics, and determining the corresponding cooking state according to the result of the pattern recognition;
the sending module is used for generating a prompt instruction according to the cooking state and sending the prompt instruction to a voice control device and/or a mobile terminal, and the prompt instruction is received and executed by the voice control device and/or the mobile terminal.
In the above scheme, the processing module is specifically configured to acquire voiceprint information of the cooking sound, perform noise reduction processing on the voiceprint information, and extract voiceprint features from the voiceprint information after the noise reduction processing.
In the above scheme, the processing module is specifically configured to acquire a cooking operation of a cooking device, and determine a cooking state corresponding to the cooking operation;
acquiring an acoustic model corresponding to the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
In the above scheme, the processing module is specifically configured to judge whether the matching value exceeds a preset threshold, and when the matching value is determined to exceed the preset threshold, determine that the cooking device has reached the cooking state corresponding to the cooking operation.
In the above scheme, the processing module is further configured to obtain a corresponding relationship between at least one group of cooking sounds and a cooking state during a cooking process, and update the acoustic model corresponding to the cooking state according to the corresponding relationship between the at least one group of cooking sounds and the cooking state.
The embodiment of the invention also provides a device for determining the cooking state, which comprises: a processor and a memory for storing a computer program capable of running on the processor;
wherein the processor is configured to execute the steps of any one of the above-mentioned cooking state determining methods when running the computer program.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the above-mentioned methods for determining a cooking state.
The cooking state determining method and device, the computer readable storage medium and the server provided by the embodiments of the invention receive a cooking sound sent by the voice control equipment; extract voiceprint features of the cooking sound, perform pattern recognition according to the voiceprint features, and determine a corresponding cooking state according to the pattern recognition result; and generate a prompt instruction according to the cooking state and send the prompt instruction to the voice control equipment and/or a mobile terminal, where the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal. According to the scheme of the embodiments of the invention, the cooking sound emitted by the cooking equipment is acquired and recognized to determine the current cooking state, and the current cooking state is reported to the user through voice playback and/or a message prompt, so that the user can adjust the cooking operation in time, control the heat, achieve the best taste, and complete a satisfactory dish.
Drawings
Fig. 1 is a schematic flowchart of a first cooking state determining method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a cooking method according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a second cooking state determining method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a first cooking state determining device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a second cooking state determining device according to an embodiment of the present invention.
Detailed Description
In various embodiments of the present invention, a cooking sound transmitted by a voice control apparatus is received; voiceprint features of the cooking sound are extracted, pattern recognition is performed according to the voiceprint features, and a corresponding cooking state is determined according to the pattern recognition result; and a prompt instruction is generated according to the cooking state and sent to the voice control equipment and/or a mobile terminal, where the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal.
The present invention will be described in further detail with reference to examples.
Fig. 1 is a schematic flowchart of a first cooking state determining method according to an embodiment of the present invention; the method is applied to a cloud server, and as shown in fig. 1, the method includes:
step 101, receiving a cooking sound sent by voice control equipment;
here, the voice control device is a device having a voice receiving function and a voice playing function, such as a smart Speaker (Hub Speaker).
The cooking device generates cooking sound after executing cooking operation, and the voice control device collects the cooking sound and sends the cooking sound to the cloud server.
The cooking sound may include: the sound of hot oil, the sound of clear soup boiling, the sound of vegetables and fruits frying, the sound of seasonings frying, the sound of thick soup boiling, the sound of meat rendering its fat, and the like.
Specifically, before the receiving of the cooking sound transmitted by the voice control apparatus, the method may include:
the cloud server determines a cooking recipe, and sends cooking food materials included in the cooking recipe to voice control equipment, wherein the cooking food materials are played by the voice control equipment;
the cloud server sends a first control instruction to the cooking equipment according to the cooking recipe, so as to control the cooking equipment to perform cooking operation; or the cloud server sends a first voice instruction to the voice control equipment to inform a user of cooking, and the user inputs a second control instruction to the cooking equipment through an operation key of the cooking equipment to control the cooking equipment to perform cooking operation; the first control instruction or the second control instruction is received and executed by the cooking device.
Here, the cooking apparatus may further send operation information to the cloud server to inform the cloud server of its current cooking operation; the cloud server receives the operation information and determines the current cooking operation of the cooking equipment from it.
The voice control device can also be used as a functional module of the cooking device, namely the cooking device can comprise a voice control module with a voice receiving function and a voice playing function, cooking sounds generated during cooking can be acquired through the voice control module and sent to the cloud server, and the cloud server receives the cooking sounds sent by the cooking device.
step 102, extracting voiceprint characteristics of the cooking sound, performing pattern recognition according to the voiceprint characteristics, and determining a corresponding cooking state according to the pattern recognition result;
specifically, the cloud server extracts voiceprint features of the cooking sound by using a voiceprint recognition technology, and performs pattern recognition according to the voiceprint features.
Specifically, the extracting the voiceprint feature of the cooking sound includes:
the cloud server determines the voiceprint information of the cooking sound, and extracts voiceprint features from the voiceprint information of the cooking sound; the voiceprint features include, but are not limited to: Mel-Frequency Cepstral Coefficients (MFCC), Gammatone-Filter Cepstral Coefficients (GFCC), Linear Predictive Cepstral Coefficients (LPCC), and the like.
Here, before the extracting the voiceprint feature of the cooking sound, the method may further include:
and the cloud server acquires the voiceprint information of the cooking sound, and performs noise reduction processing on the voiceprint information to acquire the voiceprint information after the noise reduction processing. The cloud server may extract voiceprint features from the noise-reduced voiceprint information.
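The noise-reduction and feature-extraction step above can be sketched as follows. This is a minimal illustration in Python/NumPy, assuming spectral subtraction for noise reduction and simple log band energies as a crude stand-in for the MFCC/GFCC features the patent mentions; the function names and parameters are hypothetical.

```python
import numpy as np

def reduce_noise(frames: np.ndarray, noise_floor: np.ndarray) -> np.ndarray:
    """Simple spectral subtraction: subtract an estimated noise spectrum
    from each frame's magnitude spectrum, clamping at zero."""
    spec = np.abs(np.fft.rfft(frames, axis=1))
    return np.maximum(spec - noise_floor, 0.0)

def band_energies(spec: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Crude voiceprint feature: log energy in equal-width frequency
    bands (a stand-in for MFCC/GFCC extraction)."""
    bands = np.array_split(spec, n_bands, axis=1)
    return np.log(np.stack([b.sum(axis=1) for b in bands], axis=1) + 1e-9)

# Toy usage: 4 frames of 64 audio samples each.
rng = np.random.default_rng(0)
frames = rng.normal(size=(4, 64))
noise_floor = np.full(33, 0.1)   # rfft of 64 samples yields 33 bins
features = band_energies(reduce_noise(frames, noise_floor))
print(features.shape)            # (4, 8): one 8-dim vector per frame
```

A production system would use proper mel or gammatone filterbanks, but the shape of the pipeline (denoise, then summarize each frame as a feature vector) is the same.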
Specifically, the performing pattern recognition according to the voiceprint feature includes:
the method comprises the steps that a cloud server obtains cooking operation of cooking equipment and determines a cooking state corresponding to the cooking operation;
acquiring an acoustic model corresponding to the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
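The matching step above can be sketched with a toy diagonal-covariance GMM scorer. The mapping of the log-likelihood onto a 0..1 "matching value" (so it can be compared with a percentage threshold such as 80%) is an illustrative calibration choice, not the patent's stated method; all names are hypothetical.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Average per-frame log-likelihood of feature vectors under a
    diagonal-covariance Gaussian mixture acoustic model."""
    x = np.atleast_2d(x)
    comp = []
    for w, m, v in zip(weights, means, variances):
        d = x - m
        comp.append(np.log(w)
                    - 0.5 * np.sum(np.log(2.0 * np.pi * v))
                    - 0.5 * np.sum(d * d / v, axis=1))
    # log-sum-exp over mixture components, then mean over frames
    return float(np.mean(np.logaddexp.reduce(np.stack(comp), axis=0)))

def match_value(loglik, floor=-20.0, ceil=0.0):
    """Squash a log-likelihood into a 0..1 matching value comparable
    with a percentage threshold (calibration range is illustrative)."""
    return float(np.clip((loglik - floor) / (ceil - floor), 0.0, 1.0))

# Toy 2-component model in a 2-D feature space; frames sit on component 1.
weights = [0.5, 0.5]
means = [np.zeros(2), 3.0 * np.ones(2)]
variances = [np.ones(2), np.ones(2)]
score = match_value(gmm_loglik(np.zeros((5, 2)), weights, means, variances))
```

An HMM-based acoustic model, which the patent also allows, would replace the per-frame mixture likelihood with a likelihood over state sequences, but the thresholding logic downstream is unchanged.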
Specifically, the determining the corresponding cooking state according to the result of the pattern recognition includes:
judging whether the matching value exceeds a preset threshold value or not, and if the matching value exceeds the preset threshold value, determining that the cooking equipment reaches a cooking state corresponding to the cooking operation; i.e. it is determined that the cooking device completed the cooking operation.
The cooking state, namely the degree of heat, refers to the doneness that the food materials reach when the cooking equipment applies a certain fire power for a certain time during the cooking process.
Here, at least one cooking recipe is saved in the cloud server. The cooking recipe comprises: the cooking food materials, the cooking operation, the cooking state corresponding to the cooking operation (namely the state to be reached after the cooking equipment executes the operation), the cooking sound corresponding to the cooking state, the preset threshold corresponding to the cooking sound, and the next cooking operation corresponding to the cooking operation. Here, the cooking sound includes an acoustic model for pattern recognition; the acoustic model may be obtained based on a Gaussian Mixture Model (GMM), a Hidden Markov Model (HMM), or the like.
It should be noted that in the cooking recipe, different cooking states correspond to different cooking sounds, and different cooking sounds may also correspond to different preset thresholds. The preset threshold values corresponding to different cooking sounds can be determined by a manufacturer using the cloud server according to the requirement of accurate matching; generally, the higher the requirement for precise matching, the higher the preset threshold value; when the requirement for precise matching is relatively low, the preset threshold value can be relatively reduced.
The obtaining of the cooking operation of the cooking device comprises: the cloud server receives operation information sent by the cooking equipment, and determines the current cooking operation of the cooking equipment according to the operation information.
The acquiring of the cooking state corresponding to the cooking operation includes: and the cloud server inquires a cooking recipe and acquires a cooking state corresponding to the cooking operation.
The obtaining of the acoustic model corresponding to the cooking state includes: the cloud server inquires a cooking recipe, determines the cooking sound corresponding to the cooking state, and then obtains the acoustic model of the cooking sound corresponding to the cooking state.
Two cooking recipes are provided below, cooking recipe one, which may include:
cooking food materials, such as: oil, a certain vegetable;
cooking operations, such as: heating oil and stir-frying;
the cooking state corresponding to heating oil is the hot-oil state; the cooking sound corresponding to the hot-oil state is the hot-oil sound; the preset threshold corresponding to the hot-oil sound may be 80% (meaning that the acquired cooking sound is matched against the acoustic model of the hot-oil sound and the resulting matching value must exceed 80%); and the next cooking operation after heating oil is stir-frying;
the cooking state corresponding to stir-frying is the stir-fried state; the cooking sound corresponding to that state is the sound of vegetables and fruits passing through oil; the preset threshold corresponding to this sound may be 70% (meaning that the resulting matching value must exceed 70%); and the next cooking operation after stir-frying is empty (namely, no further cooking operation is needed and cooking is finished).
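The recipe structure described above can be represented as a simple lookup table. This is an illustrative Python sketch; the dictionary layout and key names (`target_state`, `next_operation`, etc.) are assumptions, not the patent's actual storage format.

```python
# Hypothetical in-memory form of "cooking recipe one".
RECIPE_ONE = {
    "heat_oil": {
        "target_state": "hot_oil",
        "sound": "hot_oil_sound",
        "threshold": 0.80,
        "next_operation": "stir_fry",
    },
    "stir_fry": {
        "target_state": "stir_fried",
        "sound": "vegetables_in_oil_sound",
        "threshold": 0.70,
        "next_operation": None,   # empty: cooking is finished
    },
}

def lookup(recipe, operation):
    """Return (target state, threshold, next operation) for the cooking
    operation the device reports, mirroring the recipe query the cloud
    server performs."""
    step = recipe[operation]
    return step["target_state"], step["threshold"], step["next_operation"]

state, threshold, nxt = lookup(RECIPE_ONE, "heat_oil")
```

Recipe two (rice cooking, steam sound, 90% threshold, no next operation) would be a second table with the same shape.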
Cooking recipe two, can include:
cooking food materials, such as: rice;
cooking operations, such as: cooking;
the cooking state corresponding to cooking is a cooking completion state, the cooking sound corresponding to the cooking completion state is a steam sound, the preset threshold corresponding to the steam sound can be 90% (indicating that the acquired cooking sound is matched with the acoustic model of the steam sound and the obtained matching value needs to exceed 90%), and the next cooking operation corresponding to cooking is empty (namely, the next cooking operation is not needed and the cooking is completed). Here, the collected cooking sound may be an exhaust sound or a buzzer sound emitted after the electric pressure cooker or the electric rice cooker completes cooking.
Here, different cooking operations, cooking states, cooking sounds, and preset thresholds in the cooking recipe may be preset and saved according to various recipes and cooking experiences by a manufacturer using the cloud server.
step 103, generating a prompt instruction according to the cooking state, and sending the prompt instruction to a voice control device and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control device and/or the mobile terminal.
Here, the prompt instruction may include: a prompt voice and/or a prompt message;
the prompt instruction is received and executed by the voice control device and comprises:
the voice control equipment receives the prompt instruction, obtains the prompt voice from the prompt instruction, and plays the prompt voice when the voice control equipment determines to obtain the prompt voice.
Here, the voice control apparatus may further have a display module (e.g., a display screen). When the voice control equipment is provided with a display module, the voice control equipment can acquire the prompt message from the prompt instruction and display the prompt message through the display module.
Here, the mobile terminal may include: smart phones, tablet computers, and the like;
the prompt instruction is received and executed by the mobile terminal, and comprises the following steps:
the mobile terminal receives the prompt instruction and acquires the prompt voice and/or the prompt message from the prompt instruction;
when the mobile terminal determines to acquire the prompt voice, the prompt voice is played; and/or, when the mobile terminal determines to acquire the prompt message, displaying the prompt message.
In this embodiment, in step 103, after the cloud server generates the prompt instruction according to the cooking state, the cloud server may only send the prompt instruction to the voice control device, and the voice control device receives the prompt instruction and forwards the prompt instruction to the mobile terminal; or only sending the prompt instruction to a mobile terminal, and receiving the prompt instruction by the mobile terminal and forwarding the prompt instruction to the voice control equipment.
Here, the voice control device may have a Bluetooth module, a Wireless Fidelity (Wi-Fi) module, or a cellular-based Narrowband Internet of Things (NB-IoT) module, and connects with the mobile terminal through the Bluetooth, Wi-Fi, or NB-IoT module.
Here, the step 103 may further include:
the cloud server can inquire a cooking recipe according to the current cooking operation of the cooking equipment and determine the next cooking operation; and generating a prompt instruction according to the cooking state and the determined next cooking operation, and sending the prompt instruction to voice control equipment and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal so as to inform a user of the current cooking state and the next cooking operation.
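The prompt-generation step above can be sketched as follows. The instruction's structure (a spoken prompt plus a text message) follows the description; the function name and wording are hypothetical.

```python
def build_prompt(cooking_state, next_operation):
    """Assemble the prompt instruction sent to the voice control device
    and/or mobile terminal: a voice prompt plus a text message
    (structure is illustrative)."""
    if next_operation is None:
        text = f"Reached state '{cooking_state}'. Cooking is finished."
    else:
        text = (f"Reached state '{cooking_state}'. "
                f"Next operation: {next_operation}.")
    # Device plays prompt_voice; a terminal with a screen shows prompt_message.
    return {"prompt_voice": text, "prompt_message": text}

instr = build_prompt("hot_oil", "stir_fry")
```

A device without a display would simply ignore `prompt_message`, matching the patent's "prompt voice and/or prompt message" split.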
In this embodiment, the method may further include:
step 104, in the cooking process, the cloud server acquires the correspondence between at least one group of cooking sounds and cooking states, and updates the acoustic model corresponding to the cooking state stored in the cloud server according to this correspondence.
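The self-updating step can be sketched with a running single-Gaussian model, a deliberately simplified stand-in for retraining the GMM/HMM acoustic model; the Welford-style update and all names here are illustrative assumptions.

```python
import numpy as np

def update_model(mean, var, count, new_features):
    """Fold newly collected cooking-sound feature vectors for a given
    cooking state into a running Gaussian acoustic model."""
    for x in np.atleast_2d(new_features):
        count += 1
        delta = x - mean
        mean = mean + delta / count                    # running mean
        var = var + (delta * (x - mean) - var) / count  # running variance
    return mean, var, count

# Fold one new 2-D feature vector into a model built from 10 samples.
mean, var, n = update_model(np.zeros(2), np.ones(2), 10,
                            np.array([[2.0, 2.0]]))
```

The point is the shape of the step: each confirmed (cooking sound, cooking state) pair nudges the stored model, so recognition adapts to the user's kitchen over time.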
In this embodiment, the cloud server may further include: at least one fault condition, fault sound (including acoustic models used for fault identification), and fault threshold. For example, the fault state may be a pot overflow state, the fault sound may be a pot overflow sound, and the fault threshold corresponding to the pot overflow sound may be 80%. Here, the correspondence relationship between the failure state, the failure sound, and the failure threshold value may be preset and stored by a manufacturer using the cloud server according to various recipes and cooking experiences.
Accordingly, the method may further include:
the cloud server acquires an acoustic model of a fault sound corresponding to at least one fault state, and matches the extracted voiceprint features with the acoustic model of the fault sound corresponding to the at least one fault state to obtain a matching value, wherein the matching value is used as a fault identification result;
judging whether the matching value exceeds a fault threshold value, if so, generating a fault prompt instruction according to the fault state, and sending the fault prompt instruction to voice control equipment and/or a mobile terminal, wherein the fault prompt instruction is received and executed by the voice control equipment and/or the mobile terminal. Here, the fault indication instruction may include: fault alert voice and/or fault alert message.
The fault prompting instruction is received and executed by the voice control equipment and comprises the following steps:
the voice control equipment receives the fault prompting instruction and acquires the fault prompting voice and/or the fault prompting message from the fault prompting instruction;
when the voice control equipment determines to acquire the fault prompting voice, playing the fault prompting voice; and/or when the voice control equipment determines to acquire the fault prompt message, displaying the fault prompt message so as to remind a user of cooking faults, such as pot overflow and the like.
The fault prompting instruction is received and executed by the mobile terminal, and comprises the following steps:
the mobile terminal receives the fault prompting instruction and acquires the fault prompting voice and/or the fault prompting message from the fault prompting instruction;
when the mobile terminal determines to acquire the fault prompting voice, the fault prompting voice is played; and/or when the mobile terminal determines to acquire the fault prompt message, displaying the fault prompt message.
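The fault-identification logic above reduces to comparing each fault sound's matching value with its fault threshold. A minimal sketch, assuming a hypothetical fault table keyed by fault state:

```python
# Hypothetical fault table: fault state -> (fault-sound id, fault threshold).
FAULTS = {"pot_overflow": ("pot_overflow_sound", 0.80)}

def check_faults(match_scores, faults=FAULTS):
    """Compare the matching value obtained for each fault sound with its
    fault threshold; return the fault states whose threshold is exceeded
    so a fault prompt instruction can be generated."""
    triggered = []
    for state, (sound_id, threshold) in faults.items():
        if match_scores.get(sound_id, 0.0) > threshold:
            triggered.append(state)
    return triggered

alerts = check_faults({"pot_overflow_sound": 0.85})
```

Each returned state would then drive a fault prompt voice and/or fault prompt message exactly like the normal cooking-state prompts.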
FIG. 2 is a schematic flow chart of a cooking method according to an embodiment of the present invention; the above-described cooking state determination method may be applied to the cooking method, as shown in fig. 2, which includes:
step 201, the cloud server informs a user of a cooking recipe, and the user prepares a corresponding food material according to the cooking recipe;
specifically, the step 201 includes: the cloud server determines a cooking recipe, and sends cooking food materials in the cooking recipe to the voice control device, wherein the cooking food materials are played by the voice control device to inform a user of preparing corresponding food materials.
Step 202, cooking is carried out by cooking equipment, and a cooking sound is emitted in the cooking process;
specifically, the cooking apparatus performs cooking, and may include:
the cloud server sends a first control instruction to the cooking equipment according to the cooking recipe, so as to control the cooking equipment to perform cooking operation; or the cloud server sends a first voice instruction to the voice control device to inform a user of cooking, and the user inputs a second control instruction to the cooking device through an operation key of the cooking device to control the cooking device to execute cooking operation; the first control instruction or the second control instruction is received by the cooking equipment and cooking operation is carried out.
Here, after the user places the cooking food material in the cooking device, the cooking device may make a cooking sound when cooking, such as: the sound of hot oil, the sound of clear soup boiling, the sound of vegetables and fruits frying, the sound of seasonings frying, the sound of thick soup boiling, the sound of meat rendering its fat, and the like.
After the step 202, the method may further include:
the cooking equipment sends operation information to the cloud server to inform it of the currently executed cooking operation; and the cloud server receives the operation information and determines the cooking operation currently executed by the cooking equipment from it.
Step 203, collecting the cooking sound by a voice control device, and sending the cooking sound to a cloud server;
step 204, the cloud server performs noise reduction processing, feature extraction and pattern recognition on the voiceprint information of the cooking sound;
specifically, step 204 includes: the cloud server acquires voiceprint information of the cooking sound, performs noise reduction processing on the voiceprint information, and extracts voiceprint characteristics from the voiceprint information after the noise reduction processing;
the cloud server determines the cooking operation of the cooking equipment and acquires the cooking state corresponding to the cooking operation; acquiring a stored acoustic model of the cooking sound corresponding to the cooking state, and matching the voiceprint characteristics with the stored acoustic model to obtain a matching value; the matching value is the result of pattern recognition.
Step 205, the cloud server determines a cooking state according to a pattern recognition result;
specifically, the cloud server judges whether the matching value exceeds the preset threshold, and if so, determines that the cooking equipment has reached the cooking state corresponding to the cooking operation, namely, the required degree of heat;
here, if it is determined that the matching value does not exceed the preset threshold, the cooking sound continues to be received and a corresponding matching value is obtained until it is determined that the matching value exceeds the preset threshold.
Step 206: the cloud server determines the next cooking operation, generates a prompt instruction according to the cooking state and the next cooking operation, and sends the prompt instruction to the voice control device and/or the mobile terminal, which receives and executes it;
specifically, step 206 may include:
the cloud server queries the cooking recipe according to the current cooking operation of the cooking device and determines the next cooking operation; it then generates a prompt instruction according to the cooking state and the determined next cooking operation and sends the prompt instruction to the voice control device and/or the mobile terminal, which receives and executes it to inform the user of the current cooking state and the next cooking operation.
Step 207: the cloud server updates the stored acoustic model according to the collected cooking sound;
specifically, the cloud server includes a self-learning module. The self-learning module acquires at least one correspondence between cooking sounds and cooking states and extracts the voiceprint features of the cooking sounds; it then applies a deep learning algorithm, such as a Convolutional Neural Network (CNN) or a Deep Belief Network (DBN), to the voiceprint features and the stored acoustic model corresponding to the cooking state, optimizing the acoustic model and acquiring and storing a new acoustic model.
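The self-learning update of step 207 might be sketched as below, with a running-average template update standing in for the CNN/DBN optimization so the example stays self-contained; the per-state model representation is an assumption.

```python
from typing import Dict, List

def update_acoustic_model(models: Dict[str, List[float]], state: str,
                          new_feats: List[float], rate: float = 0.1) -> None:
    """Blend newly observed voiceprint features into the stored model for
    the given cooking state (exponential moving average stands in for the
    deep-learning optimization described in the text)."""
    if state not in models:
        models[state] = list(new_feats)  # first observation seeds the model
    else:
        old = models[state]
        models[state] = [(1 - rate) * o + rate * n
                         for o, n in zip(old, new_feats)]
```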
The following specific application example illustrates the cooking method. Suppose a portion of hotpot condiment needs to be cooked; the cloud server determines a cooking recipe, including:
1. Cooking food materials: oil, ginger, garlic, pepper, fermented soya beans, shallot;
2. Each cooking operation, the cooking state corresponding to the cooking operation, the cooking sound corresponding to the cooking state, the preset threshold corresponding to the cooking sound, and the next cooking operation corresponding to the cooking operation, specifically including:
heating oil (the first cooking operation), the hot-oil state, the hot-oil sound, the hot-oil threshold; the operation following heating oil is stir-frying (the second cooking operation);
stir-frying, the stir-frying state, the seasoning-frying sound, the seasoning-frying threshold; the operation following stir-frying is empty (indicating that there is no next cooking operation; once stir-frying is complete, cooking of the hotpot condiment is finished).
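The recipe above might be represented on the cloud server as a simple data structure such as the following sketch; the field names and threshold values are illustrative assumptions.

```python
# Hypothetical representation of the hotpot-condiment recipe described above.
HOTPOT_RECIPE = {
    "ingredients": ["oil", "ginger", "garlic", "pepper",
                    "fermented soya beans", "shallot"],
    "operations": [
        {"operation": "heat oil", "state": "hot-oil state",
         "sound": "hot-oil sound", "threshold": 0.85, "next": "stir-fry"},
        {"operation": "stir-fry", "state": "stir-fry state",
         "sound": "seasoning-frying sound", "threshold": 0.85, "next": None},
    ],
}

def next_operation(recipe: dict, current: str):
    """Recipe lookup as in step 206: find what follows `current`;
    None means cooking is finished."""
    for op in recipe["operations"]:
        if op["operation"] == current:
            return op["next"]
    return None
```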
The user prepares the cooking food materials according to the cooking recipe: oil, ginger, garlic, pepper, fermented soya beans, shallot, and places them into the cooking device. The cloud server generates a hot-oil control instruction according to the first cooking operation of the recipe (i.e., heating oil) and sends it to the cooking device, which heats the oil. When the oil is heated to a certain temperature, it produces a corresponding cooking sound; the voice control device collects this sound and sends it to the cloud server, which identifies it using voiceprint recognition technology. When the matching value between the collected cooking sound and the stored cooking sound corresponding to the hot-oil operation reaches the preset threshold, the corresponding hot-oil state is considered reached and the oil heating is complete. The cloud server then generates a prompt instruction according to the hot-oil state and sends it to the voice control device and/or the mobile terminal, which receives and executes the instruction to inform the user that the cooking device has finished heating the oil.
Further, once the cloud server determines that the first cooking operation (heating oil) is complete, it proceeds to the next cooking operation (i.e., stir-frying): the cloud server generates a stir-frying control instruction according to the second cooking operation and sends it to the cooking device, which stir-fries the seasonings. The voice control device continues to collect cooking sounds and send them to the cloud server; the cloud server determines the cooking state from the collected sounds and continues to check whether the corresponding cooking operation is finished. These steps repeat until cooking is complete.
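The control loop described above, executing each cooking operation in order and advancing once its cooking state is confirmed by voiceprint matching, might be sketched as follows; all names are illustrative.

```python
from typing import Callable, List

def run_recipe(operations: List[str],
               reached_state: Callable[[str], None]) -> List[str]:
    """Drive the cooking flow: for each operation, (conceptually) send its
    control instruction to the cooking device, then block in
    `reached_state` until the matching value for that operation's cooking
    sound exceeds its threshold, then move on. Returns completed operations."""
    done = []
    for op in operations:
        # ...here the cloud server would send the control instruction for `op`...
        reached_state(op)   # blocks until the cooking state is confirmed
        done.append(op)     # then prompt the user and continue
    return done
```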
Fig. 3 is a flowchart illustrating a second cooking state determining method according to an embodiment of the present invention; as shown in fig. 3, the method includes:
during the cooking process, the cooking device produces cooking sounds, such as: the sound of heating oil, clear soup boiling, vegetables and fruits being fried in oil, seasonings being stir-fried, thick soup boiling, and fat rendering from meat; the voice control device collects the cooking sound and sends it to the cloud server;
when the cloud server receives a cooking sound for the first time, it records it as the first cooking sound, collects sample data of the first cooking sound, performs self-learning and training on the collected sample data to obtain the voiceprint features corresponding to different cooking states, and determines and stores the corresponding acoustic models from those voiceprint features;
in subsequent cooking processes, if the cloud server receives a cooking sound corresponding to the same cooking state again, it records it as the second cooking sound, extracts the voiceprint features of the second cooking sound, matches them against the stored acoustic model to obtain a matching value, and determines the cooking state according to the matching value.
Here, the cloud server may also continuously optimize the acoustic model according to the extracted voiceprint features of the second cooking sound.
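The two-phase flow of fig. 3, training an acoustic model from samples of the first cooking sound and matching a later second cooking sound against it, might be sketched as follows; averaged-template training and a distance-based matching value are illustrative stand-ins for the self-learning and voiceprint matching described.

```python
from typing import List

def train_model(samples: List[List[float]]) -> List[float]:
    """Average several first-cooking-sound feature vectors into a template
    (stand-in for the self-learning/training step)."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def match(feats: List[float], model: List[float]) -> float:
    """Matching value for the second cooking sound: Euclidean distance
    mapped into (0, 1], where 1.0 is a perfect match."""
    d = sum((a - b) ** 2 for a, b in zip(feats, model)) ** 0.5
    return 1.0 / (1.0 + d)
```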
Fig. 4 is a schematic structural diagram of a first cooking state determining device according to an embodiment of the present invention; as shown in fig. 4, the apparatus includes: the device comprises a receiving module, a processing module and a sending module; wherein,
the receiving module is used for receiving the cooking sound sent by the voice control equipment;
the processing module is used for extracting the voiceprint features of the cooking sound, performing pattern recognition according to the voiceprint features, and determining the corresponding cooking state according to the pattern recognition result;
the sending module is used for generating a prompt instruction according to the cooking state and sending the prompt instruction to a voice control device and/or a mobile terminal, and the prompt instruction is received and executed by the voice control device and/or the mobile terminal.
Specifically, the processing module is specifically configured to acquire voiceprint information of the cooking sound, perform noise reduction processing on the voiceprint information, and extract a voiceprint feature from the voiceprint information after the noise reduction processing.
Specifically, the processing module is specifically configured to acquire a cooking operation of a cooking device, and determine a cooking state corresponding to the cooking operation;
acquiring an acoustic model corresponding to the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
Specifically, the processing module is specifically configured to judge whether the matching value exceeds a preset threshold and, upon determining that it does, determine that the cooking device has reached the cooking state corresponding to the cooking operation.
Specifically, the processing module is further configured to, during a cooking process, obtain a corresponding relationship between at least one group of cooking sounds and a cooking state, and update the acoustic model corresponding to the cooking state according to the corresponding relationship between the at least one group of cooking sounds and the cooking state.
Fig. 5 is a schematic structural diagram of a second cooking state determining device according to an embodiment of the present invention; as shown in fig. 5, the apparatus 50 may be disposed on a server, and the apparatus 50 includes:
a processor 501 and a memory 502 for storing computer programs executable on the processor; wherein,
the processor 501 is configured to, when running the computer program, perform:
receiving a cooking sound sent by voice control equipment;
extracting voiceprint features of the cooking sound, performing pattern recognition according to the voiceprint features, and determining the corresponding cooking state according to the pattern recognition result;
and generating a prompt instruction according to the cooking state, and sending the prompt instruction to voice control equipment and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal.
The processor 501 is further configured to, when running the computer program, perform:
and acquiring voiceprint information of the cooking sound, performing noise reduction processing on the voiceprint information, and extracting voiceprint characteristics from the voiceprint information subjected to the noise reduction processing.
The processor 501 is further configured to, when running the computer program, perform:
acquiring cooking operation of cooking equipment, and determining a cooking state corresponding to the cooking operation;
acquiring an acoustic model of the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
The processor 501 is further configured to, when running the computer program, perform:
judging whether the matching value exceeds a preset threshold and, upon determining that it does, determining that the cooking device has reached the cooking state corresponding to the cooking operation.
The processor 501 is further configured to, when running the computer program, perform:
in the cooking process, acquiring at least one group of corresponding relations between cooking sounds and cooking states, and updating the acoustic model of the cooking states according to the at least one group of corresponding relations between the cooking sounds and the cooking states.
It should be noted that: the cooking state determining device and the cooking state determining method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
In practical applications, the apparatus 50 may further include: at least one network interface 503. The various components in the apparatus 50 are coupled together by a bus system 504. It is understood that the bus system 504 is used to enable communications among these components. The bus system 504 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are all labeled as bus system 504 in fig. 5. The number of processors 501 may be at least one. The network interface 503 is used for wired or wireless communication between the apparatus 50 and other devices. The memory 502 in embodiments of the present invention is used to store various types of data to support the operation of the apparatus 50.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 501. The processor 501 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor 501 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed by the embodiments of the invention may be directly implemented by a hardware decoding processor, or by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 502; the processor 501 reads the information in the memory 502 and performs the steps of the aforementioned methods in conjunction with its hardware.
In an exemplary embodiment, the apparatus 50 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components for performing the foregoing methods.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs:
receiving a cooking sound sent by voice control equipment;
extracting voiceprint features of the cooking sound, performing pattern recognition according to the voiceprint features, and determining the corresponding cooking state according to the pattern recognition result;
and generating a prompt instruction according to the cooking state, and sending the prompt instruction to voice control equipment and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal.
The computer program, when executed by a processor, performs:
and acquiring voiceprint information of the cooking sound, performing noise reduction processing on the voiceprint information, and extracting voiceprint characteristics from the voiceprint information subjected to the noise reduction processing.
The computer program, when executed by a processor, performs:
acquiring cooking operation of cooking equipment, and determining a cooking state corresponding to the cooking operation;
acquiring an acoustic model of the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
The computer program, when executed by a processor, performs:
judging whether the matching value exceeds a preset threshold and, upon determining that it does, determining that the cooking device has reached the cooking state corresponding to the cooking operation.
The computer program, when executed by a processor, performs:
in the cooking process, acquiring at least one group of corresponding relations between cooking sounds and cooking states, and updating the acoustic model of the cooking states according to the at least one group of corresponding relations between the cooking sounds and the cooking states.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.

Claims (8)

1. A method of determining a cooking state, the method comprising:
receiving a cooking sound sent by voice control equipment;
extracting voiceprint features of the cooking sound, performing pattern recognition according to the voiceprint features, and determining whether a corresponding cooking state is reached according to a pattern recognition result;
generating a prompt instruction according to the cooking state, and sending the prompt instruction to voice control equipment and/or a mobile terminal, wherein the prompt instruction is received and executed by the voice control equipment and/or the mobile terminal;
in the cooking process, acquiring at least one group of corresponding relations between cooking sounds and cooking states, and updating an acoustic model corresponding to the cooking states according to the at least one group of corresponding relations between the cooking sounds and the cooking states;
the performing pattern recognition according to the voiceprint feature includes:
acquiring cooking operation of cooking equipment, and determining a cooking state corresponding to the cooking operation;
acquiring an acoustic model corresponding to the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
2. The method of claim 1, wherein said extracting the voiceprint feature of the cooking sound comprises:
and acquiring voiceprint information of the cooking sound, performing noise reduction processing on the voiceprint information, and extracting voiceprint characteristics from the voiceprint information subjected to the noise reduction processing.
3. The method of claim 1, wherein the determining whether the corresponding cooking state is reached according to the result of the pattern recognition comprises:
judging whether the matching value exceeds a preset threshold and, upon determining that the matching value exceeds the preset threshold, determining that the cooking device reaches a cooking state corresponding to the cooking operation.
4. An apparatus for determining a cooking state, the apparatus comprising: the device comprises a receiving module, a processing module and a sending module; wherein,
the receiving module is used for receiving the cooking sound sent by the voice control equipment;
the processing module is used for extracting the voiceprint features of the cooking sound, performing pattern recognition according to the voiceprint features, and determining whether the corresponding cooking state is reached according to the pattern recognition result;
the sending module is used for generating a prompt instruction according to the cooking state and sending the prompt instruction to a voice control device and/or a mobile terminal, and the prompt instruction is received and executed by the voice control device and/or the mobile terminal;
the processing module is further used for acquiring the corresponding relation between at least one group of cooking sounds and the cooking states in the cooking process, and updating the acoustic model corresponding to the cooking states according to the corresponding relation between the at least one group of cooking sounds and the cooking states;
the processing module is further used for acquiring the cooking operation of the cooking equipment and determining the cooking state corresponding to the cooking operation; acquiring an acoustic model corresponding to the cooking state, and matching the extracted voiceprint features with the acoustic model to obtain a matching value; the matching value is used as a result of pattern recognition.
5. The apparatus according to claim 4, wherein the processing module is specifically configured to obtain voiceprint information of the cooking sound, perform noise reduction processing on the voiceprint information, and extract a voiceprint feature from the noise-reduced voiceprint information.
6. The apparatus according to claim 4, wherein the processing module is specifically configured to judge whether the matching value exceeds a preset threshold and, upon determining that the matching value exceeds the preset threshold, determine that the cooking device reaches a cooking state corresponding to the cooking operation.
7. An apparatus for determining a cooking state, the apparatus comprising: a processor and a memory for storing a computer program capable of running on the processor;
wherein the processor is adapted to perform the steps of the method of any one of claims 1 to 3 when running the computer program.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 3.
CN201810571999.3A 2018-06-04 2018-06-04 Cooking state determining method and device, storage medium and server Active CN110547665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810571999.3A CN110547665B (en) 2018-06-04 2018-06-04 Cooking state determining method and device, storage medium and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810571999.3A CN110547665B (en) 2018-06-04 2018-06-04 Cooking state determining method and device, storage medium and server

Publications (2)

Publication Number Publication Date
CN110547665A CN110547665A (en) 2019-12-10
CN110547665B true CN110547665B (en) 2022-04-01

Family

ID=68736038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810571999.3A Active CN110547665B (en) 2018-06-04 2018-06-04 Cooking state determining method and device, storage medium and server

Country Status (1)

Country Link
CN (1) CN110547665B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110974038B (en) * 2019-12-26 2021-07-23 卓尔智联(武汉)研究院有限公司 Food material cooking degree determining method and device, cooking control equipment and readable storage medium
CN114305132A (en) * 2021-11-18 2022-04-12 珠海格力电器股份有限公司 Control method and device of cooking equipment, cooking equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938811A (en) * 2012-10-15 2013-02-20 华南理工大学 Household mobile phone communication system based on voice recognition
CN103902613A (en) * 2012-12-30 2014-07-02 青岛海尔软件有限公司 Speech recognition and cloud search engine technology based man-machine interactive system and method
CN107978315A (en) * 2017-11-20 2018-05-01 徐榭 Dialog mode radiotherapy treatment planning system and formulating method based on speech recognition

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011069695A1 (en) * 2009-12-07 2011-06-16 Msx Technology Ag Method for controlling a cooking process
CN102961026A (en) * 2011-08-31 2013-03-13 阮刚 Cooking device with communication interface
CN103006041A (en) * 2013-01-09 2013-04-03 胡达广 Voiceprint recognition controlled energy-saving electric cooker
CN103673008B (en) * 2013-09-16 2017-01-04 宁波方太厨具有限公司 A kind of intelligent range hood and the control method of this range hood
CN103743065B (en) * 2014-01-20 2019-03-08 美的集团股份有限公司 Control method, control system, air conditioner and the terminal of air conditioner
KR20170058530A (en) * 2015-11-19 2017-05-29 주식회사 대유위니아 Apparatus for setting a display characters of an electric rice cooker
CN105810196B (en) * 2016-06-02 2020-01-31 佛山市顺德区美的电热电器制造有限公司 Voice control method and voice control device of cooking appliance and cooking appliance
CN207249417U (en) * 2016-07-27 2018-04-17 邓东东 State mapping tool and cooking pot and cooking furnace in a kind of cooking pot
CN105973998A (en) * 2016-07-27 2016-09-28 邓东东 Cooking tool and method for blindly determining food cooking degree
US20190172323A1 (en) * 2016-07-27 2019-06-06 Dongdong Deng Culinary mapping tool for detecting cooking status in pot and culinary mapping and evaluation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938811A (en) * 2012-10-15 2013-02-20 华南理工大学 Household mobile phone communication system based on voice recognition
CN103902613A (en) * 2012-12-30 2014-07-02 青岛海尔软件有限公司 Speech recognition and cloud search engine technology based man-machine interactive system and method
CN107978315A (en) * 2017-11-20 2018-05-01 徐榭 Dialog mode radiotherapy treatment planning system and formulating method based on speech recognition

Also Published As

Publication number Publication date
CN110547665A (en) 2019-12-10

Similar Documents

Publication Publication Date Title
US9316400B2 (en) Appliance control method, speech-based appliance control system, and cooking appliance
CN110953609B (en) Cooking control method, storage medium, cooking control device and cooking system
CN108681283B (en) Intelligent cooking method and system
CN110045638B (en) Cooking information recommendation method and device and storage medium
CN110488696B (en) Intelligent dry burning prevention method and system
CN205881452U (en) Electric appliance for cooking
CN111035261B (en) Cooking control method, device and equipment
CN110547665B (en) Cooking state determining method and device, storage medium and server
CN109683516A (en) Auxiliary cooking method, household appliance and computer storage medium
CN208957616U (en) Cooking utensil and control system and server thereof
US20120072842A1 (en) Apparatus for cooking and method of helping a user to cook
CN108415308B (en) Cooking appointment control method and device and cooker
CN116030812B (en) Intelligent interconnection voice control method, device, equipment and medium for gas stove
CN108415301A (en) Cook parameter modification method and device
CN110866844A (en) Menu execution method and device, storage medium and computer equipment
CN109358538B (en) Monitoring method, device, equipment and system for cooking appliance
KR102448745B1 (en) Cooking utensil control method and apparatus, storage medium and cooking utensil
CN110974038B (en) Food material cooking degree determining method and device, cooking control equipment and readable storage medium
CN109903757A (en) Method of speech processing, device, computer readable storage medium and server
CN110989409A (en) Dish cooking method and device and storage medium
CN109953634B (en) Cooking method, cooking apparatus, cooking appliance, and computer-readable storage medium
JP6800537B2 (en) Cooking aids and programs for cooking aids
CN111616577A (en) Intelligent auxiliary cooking system and cooking process interactive control method
Xiaoguang et al. Design and implementation of smart cooking based on amazon echo
CN113647830B (en) Cooking equipment and heating control method and equipment after power failure recovery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant