CN109671426B - Voice control method and device, storage medium and air conditioner

Voice control method and device, storage medium and air conditioner

Info

Publication number: CN109671426B
Application number: CN201811489155.0A
Authority: CN (China)
Prior art keywords: voice, controlled, voice recognition, recognition state, state
Other languages: Chinese (zh)
Other versions: CN109671426A
Inventors: 王慧君, 张新, 毛跃辉, 刘健军
Applicant and assignee: Gree Electric Appliances Inc of Zhuhai
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24 HEATING; RANGES; VENTILATING
    • F24F AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F11/00 Control or safety arrangements
    • F24F11/50 Control or safety arrangements characterised by user interfaces or communication
    • F24F11/52 Indication arrangements, e.g. displays
    • F24F11/526 Indication arrangements, e.g. displays giving audible indications
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24 HEATING; RANGES; VENTILATING
    • F24F AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F11/00 Control or safety arrangements
    • F24F11/89 Arrangement or mounting of control or safety devices
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Abstract

The invention discloses a voice control method, a voice control device, a storage medium and an air conditioner, wherein the method comprises the following steps: acquiring sound information in the environment to which the equipment to be controlled belongs in a voice recognition state, the sound information including voice commands and/or ambient noise; determining whether the noise volume of the environmental noise in the sound information exceeds a set noise threshold value; if the noise volume does not exceed the noise threshold, controlling the equipment to be controlled to be kept in the voice recognition state according to a voice instruction in the sound information; or, if the noise volume exceeds the noise threshold, controlling the equipment to be controlled to exit the voice recognition state according to the voice command in the sound information and the environmental noise. The scheme of the invention solves the problem that repeatedly requiring wake-up word control to prevent misoperation degrades the user experience, and thereby improves the user experience.

Description

Voice control method and device, storage medium and air conditioner
Technical Field
The invention belongs to the technical field of voice control, and in particular relates to a voice control method and device, a storage medium and an air conditioner, and more particularly to a voice recognition state control method and device for a voice air conditioner, a storage medium and an AI air conditioner.
Background
Prior patent application No. 201710543572.8 discloses a voice wake-up method in which, after the smart device enters the voice recognition phase, if the first request received is a wake-up word, the received wake-up word is determined to be used to wake up the smart device again. Repeatedly requiring wake-up word control to prevent misoperation degrades the user experience, increases the complexity of the voice control process, and makes the interaction less natural and more rigid.
Disclosure of Invention
The present invention aims to overcome the above drawbacks by providing a voice control method, apparatus, storage medium and air conditioner, so as to solve the problem in the prior art that, after an intelligent device enters the voice recognition stage, a first received request that is a wake-up word is determined to be used to wake up the intelligent device again, so that wake-up word control is repeatedly performed to prevent misoperation and the user experience suffers, and to achieve the effect of improving the user experience.
The invention provides a voice control method, which comprises the following steps: acquiring sound information in the environment to which the equipment to be controlled belongs in a voice recognition state, the sound information including voice commands and/or ambient noise; determining whether the noise volume of the environmental noise in the sound information exceeds a set noise threshold value; if the noise volume does not exceed the noise threshold, controlling the equipment to be controlled to be kept in the voice recognition state according to a voice instruction in the sound information; or, if the noise volume exceeds the noise threshold, controlling the equipment to be controlled to exit the voice recognition state according to the voice command in the sound information and the environmental noise.
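Purely as an illustration, the branching described above can be sketched in Python as follows; the function name, the NOISE_THRESHOLD_DB value and the two callbacks are assumptions introduced here and are not part of the disclosure.

```python
from typing import Callable, Optional

NOISE_THRESHOLD_DB = 25.0  # set noise threshold (the later embodiment uses 25 dB)

def handle_sound(noise_db: float,
                 command: Optional[str],
                 execute: Callable[[str], None],
                 start_exit_timer: Callable[[], None]) -> str:
    """Keep or leave the voice recognition state for one chunk of captured sound.

    noise_db         -- measured volume of the environmental noise
    command          -- recognized voice instruction, or None if none was found
    execute          -- carries out a recognized user control instruction
    start_exit_timer -- arms the listening-time countdown used before exiting
    """
    if noise_db <= NOISE_THRESHOLD_DB:
        # Quiet environment: act on any instruction and stay in the recognition state.
        if command is not None:
            execute(command)
        return "keep_recognition_state"
    # Noisy environment: still act on a clear instruction, then prepare to exit.
    if command is not None:
        execute(command)
    start_exit_timer()
    return "wait_to_exit"

# Usage sketch:
# handle_sound(18.0, "lower the temperature", device.execute, device.start_exit_timer)
```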
Optionally, controlling the device to be controlled to remain in the voice recognition state according to the voice instruction in the sound information includes: performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction; if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction; or if the voice instruction in the sound information is not the user control instruction, continuing to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state.
Optionally, controlling the device to be controlled to exit the speech recognition state according to the speech instruction in the sound information and the environmental noise includes: performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction; if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state; or if the voice instruction in the sound information is not the user control instruction, determining that the sound information is ambient noise, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state.
Optionally, controlling the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state includes: determining whether a new voice command is input within the set listening time in the voice recognition state; if a new voice command is input within the listening time, continuously acquiring the sound information of the environment to which the equipment to be controlled belongs in the voice recognition state; or if no new voice command is input in the listening time, exiting the voice recognition state.
Optionally, the method further comprises: acquiring a voice awakening word of a voice service for awakening the equipment to be controlled in the environment to which the equipment to be controlled belongs; and controlling the equipment to be controlled to enter a voice recognition state according to the voice awakening word so as to start the voice service of the equipment to be controlled.
Optionally, the method further comprises: indicating a first state in which the equipment to be controlled is in the voice recognition state and/or a second state in which the equipment to be controlled has exited the voice recognition state; and/or prompting the state change condition if the equipment to be controlled undergoes a state change between the first state in the voice recognition state and the second state of having exited the voice recognition state.
In accordance with the above method, another aspect of the present invention provides a voice control apparatus, including: an acquisition unit and a control unit, wherein the acquisition unit is used for acquiring sound information in the environment of the device to be controlled in the voice recognition state of the device to be controlled; the sound information includes: voice commands and/or ambient noise; the control unit is used for determining whether the noise volume of the environmental noise in the sound information exceeds a set noise threshold value; the control unit is further configured to control the device to be controlled to be kept in the voice recognition state according to a voice instruction in the sound information if the noise volume does not exceed the noise threshold; or, the control unit is further configured to control the device to be controlled to exit the voice recognition state according to the voice instruction in the sound information and the environmental noise if the noise volume exceeds the noise threshold.
Optionally, the controlling unit controls the device to be controlled to be kept in the voice recognition state according to the voice instruction in the sound information, which includes: performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction; if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction; or if the voice instruction in the sound information is not the user control instruction, continuing to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state.
Optionally, the controlling unit controls the device to be controlled to exit the speech recognition state according to the speech instruction in the sound information and the environmental noise, which includes: performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction; if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state; or if the voice instruction in the sound information is not the user control instruction, determining that the sound information is ambient noise, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state.
Optionally, the controlling unit controls the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state, including: determining whether a new voice command is input within the set listening time in the voice recognition state; if a new voice command is input within the listening time, continuously acquiring the sound information of the environment to which the equipment to be controlled belongs in the voice recognition state; or if no new voice command is input in the listening time, exiting the voice recognition state.
Optionally, the apparatus further comprises: the acquiring unit is further configured to acquire a voice wakeup word of a voice service for waking up the device to be controlled in an environment to which the device to be controlled belongs; the control unit is further configured to control the device to be controlled to enter a voice recognition state according to the voice wake-up word, so as to start a voice service of the device to be controlled.
Optionally, the apparatus further comprises: the control unit is further used for indicating a first state in which the equipment to be controlled is in the voice recognition state and/or a second state in which the equipment to be controlled has exited the voice recognition state; and/or the control unit is further configured to prompt the state change condition if the device to be controlled undergoes a state change between the first state in the voice recognition state and the second state of having exited the voice recognition state.
In accordance with another aspect of the present invention, there is provided an air conditioner including: the voice control device described above.
In accordance with the above method, a further aspect of the present invention provides a storage medium comprising: the storage medium has stored therein a plurality of instructions; the instructions are used for loading and executing the voice control method by the processor.
In accordance with the above method, another aspect of the present invention provides an air conditioner, comprising: a processor for executing a plurality of instructions; a memory to store a plurality of instructions; wherein the instructions are stored in the memory, and loaded by the processor and used for executing the voice control method.
According to the scheme of the invention, the intelligent control is directly carried out by omitting the awakening words in a certain scene, so that the user experience is optimized, the interaction is more friendly and intelligent, and the control flow is simplified.
Furthermore, the scheme of the invention simplifies the user operation under the condition of not influencing the accuracy rate of voice recognition, intelligently omits the input of user awakening words, and improves the user experience and the humanization degree of interaction.
Furthermore, according to the scheme of the invention, the listening time is adaptively adjusted according to the ambient noise condition, and is prolonged in a quiet environment, so that the annoyance caused by frequently inputting the awakening words by the user is avoided, the user experience can be optimized, and the control flow is simplified.
Furthermore, according to the scheme of the invention, the error recognition rate of voice recognition is judged based on the detected ambient noise, the voice recognition listening exit time is intelligently adjusted, the user operation is simplified under the condition of not influencing the voice recognition accuracy rate, the user awakening word input is intelligently omitted, and the user experience is improved.
Furthermore, according to the scheme of the invention, after the intelligent device is woken up, if it remains in a quiet environment, false recognition is unlikely: the intelligent device can accurately recognize the user's voice control operations without misrecognition, so it stays in the voice recognition state and the voice wake-up word does not need to be entered again for the next voice control, which improves the user experience.
Therefore, according to the scheme of the invention, the misrecognition rate of voice recognition is estimated from the detected ambient noise, and the time at which voice recognition stops listening and exits is adjusted intelligently. This solves the problem in the prior art that, after an intelligent device enters the voice recognition stage, a received wake-up word is determined to be used to wake up the intelligent device again, so that wake-up word control is repeatedly performed to prevent misoperation and the user experience suffers. The defects of poor user experience, a complex control process and inflexible interaction in the prior art are thereby overcome, and the beneficial effects of good user experience, a simple control process and flexible interaction are achieved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart illustrating a voice control method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating an embodiment of controlling a device to be controlled to remain in the voice recognition state according to the voice command in the sound information in the method of the present invention;
FIG. 3 is a flowchart illustrating an embodiment of controlling the device to be controlled to exit the speech recognition state according to the speech command in the sound information and the ambient noise in the method of the present invention;
FIG. 4 is a flowchart illustrating an embodiment of controlling a device to be controlled to exit from the speech recognition state according to the listening time in the speech recognition state in the method of the present invention;
FIG. 5 is a flowchart illustrating an embodiment of controlling a device to be controlled to enter a speech recognition state according to a speech wake-up word in the method of the present invention;
FIG. 6 is a schematic structural diagram of a voice control apparatus according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating a wake-up control process of an embodiment of an air conditioner according to the present invention.
The reference numbers in the embodiments of the present invention are as follows, in combination with the accompanying drawings:
102-an obtaining unit; 104-control unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
According to an embodiment of the present invention, a voice control method is provided, as shown in FIG. 1, which is a flow chart of an embodiment of the method of the present invention. The voice control method may include steps S110 to S130.
At step S110, sound information in the environment to which the device to be controlled belongs in the voice recognition state thereof is acquired. The sound information may include: voice commands and/or ambient noise. For example: after the intelligent device is awakened, the intelligent device is in a voice recognition stage.
At step S120, it is determined whether the noise volume of the ambient noise in the sound information exceeds a set noise threshold. For example, the environmental noise may be judged as follows: if a voice control command can be recognized in the collected sound, the waveform segments other than the voice control command are treated as environmental noise; if no voice control command can be recognized in the collected sound, all of it is regarded as environmental noise.
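A minimal sketch of this judgment, assuming the recognizer can supply a boolean mask over audio frames marking which frames carry the recognized voice control command; the frame layout and the RMS-based level measure are likewise assumptions for illustration.

```python
import numpy as np
from typing import Optional

def ambient_noise_frames(frames: np.ndarray,
                         command_mask: Optional[np.ndarray]) -> np.ndarray:
    """Select the frames treated as environmental noise.

    frames       -- 2-D array with one audio frame per row
    command_mask -- boolean mask over rows marking frames of a recognized voice
                    control command, or None if no command was recognized
    """
    if command_mask is None:
        return frames                 # nothing recognized: the whole input is noise
    return frames[~command_mask]      # command recognized: the remainder is noise

def noise_level_db(noise_frames: np.ndarray, eps: float = 1e-12) -> float:
    """Rough RMS level of the noise frames in dB (relative, for comparison only)."""
    if noise_frames.size == 0:
        return float("-inf")          # no noise frames left to measure
    rms = float(np.sqrt(np.mean(np.square(noise_frames.astype(np.float64)))))
    return 20.0 * np.log10(rms + eps)
```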
In step S130, if the noise volume does not exceed the noise threshold, the device to be controlled is controlled to remain in the voice recognition state according to the voice instruction in the sound information, so that voice control can be performed directly in the voice recognition state without waking up the device to be controlled again for the next voice control. For example: after the intelligent device is woken up, if it remains in a quiet environment, false recognition is unlikely, and the intelligent device can accurately recognize the user's voice control operations without misrecognition; the intelligent device therefore stays in the voice recognition state, and the voice wake-up word does not need to be entered again for the next voice control.
Optionally, with reference to a schematic flow chart of an embodiment of the method shown in fig. 2, in which the device to be controlled is controlled to be kept in the voice recognition state according to the voice instruction in the sound information, further describing a specific process of controlling the device to be controlled to be kept in the voice recognition state according to the voice instruction in the sound information in step S130, the method may include: step S210 to step S230.
Step S210, performing semantic analysis on the voice command in the sound information to determine whether the voice command in the sound information is a set user control command.
Step S220, if the voice command in the sound information is the user control command, controlling the device to be controlled to execute the user control command.
Or, in step S230, if the voice instruction in the sound information is not the user control instruction, continuing to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state.
For example: the voice collection is carried out continuously in the continuous recognition mode, and a user does not need to send a voice awakening instruction before sending a control voice instruction every time. And switching to the waiting exit mode until the user exits the voice recognition state by using the voice control instruction or the intelligent equipment recognizes that the environmental noise exceeds a preset threshold value.
Therefore, the device to be controlled is kept in the voice recognition state when the environmental noise is low. In the voice recognition state, if semantic analysis of a voice command succeeds, the device to be controlled is directly controlled to execute it; if semantic analysis fails, sound information in the environment continues to be acquired so that semantic analysis and control can continue in the voice recognition state. The user can thus issue a voice command to control the device at any time after waking up its voice service, which makes control more efficient and use more convenient.
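To make the semantic-analysis branch of steps S210 to S230 concrete, a toy dispatcher is sketched below; the instruction table and the substring matching rule are invented for illustration and are not taken from the disclosure.

```python
from typing import Callable, Optional

# Hypothetical set of user control instructions; the patent does not enumerate them.
USER_CONTROL_INSTRUCTIONS = {
    "turn on": "POWER_ON",
    "turn off": "POWER_OFF",
    "raise the temperature": "TEMP_UP",
    "lower the temperature": "TEMP_DOWN",
}

def semantic_parse(utterance: str) -> Optional[str]:
    """Tiny stand-in for semantic analysis: map an utterance to a device action."""
    text = utterance.strip().lower()
    for phrase, action in USER_CONTROL_INSTRUCTIONS.items():
        if phrase in text:
            return action
    return None                      # not a set user control instruction

def handle_instruction(utterance: str, execute: Callable[[str], None]) -> str:
    action = semantic_parse(utterance)
    if action is not None:
        execute(action)              # S220: execute the user control instruction
        return "executed"
    return "keep_listening"          # S230: keep acquiring sound in the recognition state
```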
Or, in step S140, if the noise volume exceeds the noise threshold, the device to be controlled is controlled to exit the voice recognition state according to the voice command in the sound information and the environmental noise. For example: in the voice recognition stage, once high environmental noise is detected, the voice recognition state is exited after a certain waiting time, and the intelligent device must be woken up again with the voice wake-up word if further control is needed.
For example: in order to solve the problem that awakening word control influences user experience, the error recognition rate of voice recognition is judged based on the detected ambient noise, the voice recognition listening exit time is intelligently adjusted, user operation is simplified under the condition that the voice recognition accuracy rate is not influenced, and user awakening word input is intelligently omitted. Therefore, by adopting the awakening control method, the awakening words can be omitted in a certain scene, intelligent control can be directly carried out, user experience is optimized, and interaction is more friendly and intelligent.
For example: and voice is collected, the environmental noise is detected, and if the environmental noise exceeds a set threshold decibel, the voice is collected. The speech recognition initiates a wait for exit mode. If the environmental noise is lower than the set threshold decibel. The speech recognition turns on the continuous recognition mode.
Therefore, when the device to be controlled is in the voice recognition state, it is controlled to remain in or exit that state according to the noise volume in the sound information acquired from the environment. When the noise volume is low, the device does not exit the voice recognition state after the set listening time, so the user can conveniently perform the next voice control directly in the voice recognition state; repeated waking is avoided, the user experience is improved, and the control flow is simplified.
Optionally, referring to a flowchart of an embodiment of the method shown in fig. 3, which controls the device to be controlled to exit the speech recognition state according to the speech instruction in the sound information and the ambient noise, further describing a specific process of controlling the device to be controlled to exit the speech recognition state according to the speech instruction in the sound information and the ambient noise in step S140, the specific process may include: step S310 to step S330.
Step S310, performing semantic analysis on the voice command in the sound information to determine whether the voice command in the sound information is a set user control command.
Step S320, if the voice command in the sound information is the user control command, controlling the device to be controlled to execute the user control command, and controlling the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state.
Or, in step S330, if the voice command in the sound information is not the user control command, determining that the device to be controlled cannot recognize the voice command in the sound information, that is, determining that the sound information is ambient noise, and controlling the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state.
For example: waiting for exiting mode and exiting speech recognition mode if no valid speech input is waiting to be collected within a certain time t. Re-entering the speech recognition state requires that a wake-up instruction is first received.
Therefore, when the environmental noise is high, if semantic analysis of the voice command succeeds, the device to be controlled is directly controlled to execute it and then to exit the voice recognition state according to the set listening time; if semantic analysis fails, the sound information is determined to be environmental noise and the device is likewise controlled to exit the voice recognition state according to the set listening time. The device to be controlled thus exits the voice recognition state in a noisy environment, which saves resources and is more user-friendly.
More optionally, referring to a flowchart of an embodiment of the method shown in fig. 4, which controls the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state, further describing a specific process of controlling the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state in step S320 or step S330, the specific process may include: step S410 to step S430.
Step S410, determining whether a new voice command is input within the listening time set in the voice recognition state.
Step S420, if a new voice command is input within the listening time, continuously acquiring the sound information in the environment to which the device to be controlled belongs in the voice recognition state.
Or, in step S430, if no new voice command is input within the listening time, exiting the voice recognition state to continuously obtain a voice wakeup word of the voice service that can be used for waking up the device to be controlled in the environment to which the device to be controlled belongs.
Therefore, the voice recognition state is exited when no new voice command is input within the set listening time. The current voice recognition state is thus left in a noisy environment, and a new voice recognition state is only entered after the device is woken up again; this improves the reliability of control and saves the resources that continuous voice recognition would consume in a noisy environment.
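The listening-time behaviour of steps S410 to S430 can be sketched as a simple polling loop; the ten-second listening time, the polling interval and the get_command callback are assumptions for illustration.

```python
import time
from typing import Callable, Optional

def listen_or_exit(get_command: Callable[[], Optional[str]],
                   listening_time_s: float = 10.0,
                   poll_s: float = 0.2) -> str:
    """Wait for a new voice command during the set listening time.

    get_command -- returns a newly recognized voice command, or None if none arrived
    """
    deadline = time.monotonic() + listening_time_s
    while time.monotonic() < deadline:
        if get_command() is not None:
            return "continue_recognition"    # S420: new input arrived, stay in the state
        time.sleep(poll_s)
    return "exit_recognition_state"          # S430: nothing heard, exit and await the wake word
```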
In an alternative embodiment, the method may further include: and controlling the equipment to be controlled to enter a voice recognition state according to the voice awakening word.
The following further describes a specific process of controlling the device to be controlled to enter the voice recognition state according to the voice wakeup word, with reference to a flowchart of an embodiment of controlling the device to be controlled to enter the voice recognition state according to the voice wakeup word in the method of the present invention shown in fig. 5, and the specific process may include: step S510 and step S520.
Step S510, before acquiring the sound information in the environment to which the device to be controlled belongs in the voice recognition state of the device to be controlled, acquiring a voice wakeup word that can be used to wake up the voice service of the device to be controlled in the environment to which the device to be controlled belongs.
Step S520, controlling the equipment to be controlled to enter a voice recognition state according to the voice awakening word so as to start the voice service of the equipment to be controlled.
For example: and the intelligent voice equipment receives the voice awakening words and is awakened. And the intelligent voice equipment enters a voice recognition stage, receives voice and recognizes the voice.
Therefore, the device to be controlled is controlled to enter the voice recognition state under the condition that the voice wake-up word in the environment to which the device to be controlled belongs is received, so that the voice service of the device to be controlled is started, and the voice service is convenient and safe to use.
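Steps S510 and S520 amount to gating entry into the voice recognition state on a wake-up word; a minimal sketch with an assumed wake phrase follows.

```python
from typing import Iterable

def heard_wake_word(transcripts: Iterable[str],
                    wake_word: str = "hi air conditioner") -> bool:
    """Return True once the voice wake-up word appears in the recognized utterances.

    The wake phrase here is an assumption; the patent only requires that some voice
    wake-up word starts the voice service of the device to be controlled (step S520).
    """
    return any(wake_word in text.lower() for text in transcripts)

# Usage sketch:
# if heard_wake_word(microphone_transcripts):
#     device.enter_recognition_state()   # start the voice service, then proceed to step S110
```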
In an alternative embodiment, at least one of the following control modes may be further included.
The first control mode is as follows: and indicating the equipment to be controlled in a first state of the voice recognition state and/or a second state of the equipment to be controlled exiting the voice recognition state. For example: and if the equipment to be controlled is in the voice recognition state, carrying out indicator lamp starting indication or display on the first state of the equipment to be controlled in the voice recognition state. And if the equipment to be controlled exits from the voice recognition state, the indicator lamp is turned off or at least displayed in a second state that the equipment to be controlled exits from the voice recognition state.
The second control mode is as follows: and if the equipment to be controlled sends state change between the first state in the voice recognition state and the second state in the voice recognition state, prompting the state change condition.
For example: and when the intelligent equipment is in a voice recognition state, the indicating lamp is turned on for indication. The exit voice recognition status indicator light is turned off. The voice device sends out a buzzing prompt tone when the state changes.
Therefore, the current state of the equipment to be controlled can be conveniently known by a user through indicating the online state or the offline state of the voice recognition state and prompting the state change between the online state and the offline state of the voice recognition state, and the method is strong in intuition and good in humanization.
A large number of tests show that, with the technical scheme of this embodiment, the wake-up word can be omitted in certain scenarios and intelligent control carried out directly, which optimizes the user experience, makes the interaction friendlier and more intelligent, and simplifies the control flow.
According to the embodiment of the invention, a voice control device corresponding to the voice control method is also provided. Referring to fig. 6, a schematic diagram of an embodiment of the apparatus of the present invention is shown. The voice control apparatus may include: an acquisition unit 102 and a control unit 104.
In an alternative example, the obtaining unit 102 may be configured to obtain sound information of an environment to which the device to be controlled belongs in a voice recognition state of the device to be controlled. The sound information may include: voice commands and/or ambient noise. The specific functions and processes of the acquiring unit 102 are referred to in step S110. For example: after the intelligent device is awakened, the intelligent device is in a voice recognition stage.
In an alternative example, the control unit 104 may be configured to determine whether a noise volume of the ambient noise in the sound information exceeds a set noise threshold. The specific function and processing of the control unit 104 are referred to in step S120. For example: and (3) judging the environmental noise: if the collected voice can identify the voice control command, other waveform segment inputs of the voice control command are filtered to be environmental noise. If the collected voice can not recognize the voice control command, the voice is all regarded as the environmental noise.
In an optional example, the control unit 104 may be further configured to control the device to be controlled to remain in the voice recognition state according to a voice instruction in the sound information if the noise volume does not exceed the noise threshold, so as to perform voice control in the voice recognition state directly without waking up the device to be controlled again in the next voice control. The specific function and processing of the control unit 104 are also referred to in step S130. For example: after the intelligent device is awakened, if the intelligent device is always in a quiet environment, the false recognition is low, the intelligent device can accurately recognize the voice control operation of the user, and the false recognition cannot be generated, the intelligent device keeps a voice recognition state; the voice control does not need to input the voice awakening word again next time.
Optionally, the controlling unit 104 controls the device to be controlled to be kept in the voice recognition state according to the voice instruction in the sound information, and may include:
the control unit 104 may be further configured to perform semantic analysis on the voice command in the sound information to determine whether the voice command in the sound information is a set user control command. The specific functions and processes of the control unit 104 are also referred to in step S210.
The control unit 104 may be further specifically configured to control the device to be controlled to execute the user control instruction if the voice instruction in the sound information is the user control instruction. The specific functions and processes of the control unit 104 are also referred to in step S220.
Or, the control unit 104 may be further specifically configured to, if the voice instruction in the sound information is not the user control instruction, continue to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state. The specific function and processing of the control unit 104 are also referred to in step S230.
For example: the voice collection is carried out continuously in the continuous recognition mode, and a user does not need to send a voice awakening instruction before sending a control voice instruction every time. And switching to the waiting exit mode until the user exits the voice recognition state by using the voice control instruction or the intelligent equipment recognizes that the environmental noise exceeds a preset threshold value.
Therefore, the equipment to be controlled is controlled to be kept in the voice recognition state under the condition of low environmental noise, the equipment to be controlled is directly controlled to execute the voice command if the semantic analysis of the voice command is successful under the voice recognition state, and the voice information in the environment is continuously acquired to continuously carry out the voice analysis and control of the voice command under the voice recognition state if the semantic analysis of the voice command is unsuccessful, so that the user can send the voice command to control the equipment to be controlled at any time after awakening the voice service of the equipment to be controlled, the control efficiency is better, and the use convenience is better.
Or, in an optional example, the control unit 104 may be further configured to control the device to be controlled to exit the voice recognition state according to a voice instruction in the sound information and the ambient noise if the noise volume exceeds the noise threshold. The specific function and processing of the control unit 104 are also referred to in step S140. For example: in the voice recognition stage, once high environmental noise is detected, the voice recognition state is exited after a certain waiting time, and the intelligent device must be woken up again with the voice wake-up word if further control is needed.
For example: in order to solve the problem that awakening word control influences user experience, the error recognition rate of voice recognition is judged based on the detected ambient noise, the voice recognition listening exit time is intelligently adjusted, user operation is simplified under the condition that the voice recognition accuracy rate is not influenced, and user awakening word input is intelligently omitted. Therefore, by adopting the awakening control method, the awakening words can be omitted in a certain scene, intelligent control can be directly carried out, user experience is optimized, and interaction is more friendly and intelligent.
For example: and voice is collected, the environmental noise is detected, and if the environmental noise exceeds a set threshold decibel, the voice is collected. The speech recognition initiates a wait for exit mode. If the environmental noise is lower than the set threshold decibel. The speech recognition turns on the continuous recognition mode.
Therefore, when the equipment to be controlled is in the voice recognition state, the equipment to be controlled is controlled to be kept in the voice recognition state or quit the voice recognition state according to the noise volume in the acquired sound information in the environment, the voice recognition state can be quitted without the set listening time when the noise volume is low, a user can conveniently and directly perform voice control next time in the voice recognition state, repeated awakening is avoided, user experience is improved, and the control flow is simplified.
Optionally, the controlling unit 104 controls the device to be controlled to exit the speech recognition state according to the speech instruction in the sound information and the environmental noise, and may include:
the control unit 104 may be further configured to perform semantic analysis on the voice command in the sound information to determine whether the voice command in the sound information is a set user control command. The specific functions and processes of the control unit 104 are also referred to in step S310.
The control unit 104 may be further configured to control the device to be controlled to execute the user control instruction if the voice instruction in the sound information is the user control instruction, and control the device to be controlled to exit the voice recognition state according to the listening time in the voice recognition state. The specific functions and processes of the control unit 104 are also referred to in step S320.
Or, the control unit 104 may be further specifically configured to determine that the device to be controlled cannot recognize the voice instruction in the sound information, that is, determine that the sound information is ambient noise, if the voice instruction in the sound information is not the user control instruction, and control the device to be controlled to exit the voice recognition state according to the listening time in the voice recognition state. The specific functions and processes of the control unit 104 are also referred to in step S330.
For example: waiting for exiting mode and exiting speech recognition mode if no valid speech input is waiting to be collected within a certain time t. Re-entering the speech recognition state requires that a wake-up instruction is first received.
Therefore, under the condition of high environmental noise, if the semantic analysis of the voice command is successful, the device to be controlled is directly controlled to execute the voice command and then is controlled to exit from the voice recognition state according to the set listening time, and if the semantic analysis of the voice command is unsuccessful, the device to be controlled is determined to be the environmental noise and is controlled to exit from the voice recognition state according to the set listening time, so that the device to be controlled exits from the voice recognition state under the condition of high environmental noise, resources are saved, and the device to be controlled is humanized.
More optionally, the controlling unit 104 controls the device to be controlled to exit from the voice recognition state according to the listening time in the voice recognition state, and may include:
the control unit 104 may be further configured to determine whether a new voice command is input within the listening time set in the voice recognition state. The specific functions and processes of the control unit 104 are also referred to in step S410.
The control unit 104 may be further configured to, if a new voice instruction is input within the listening time, continue to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state. The specific function and processing of the control unit 104 are also referred to in step S420.
Or, the control unit 104 may be further specifically configured to exit the voice recognition state if no new voice instruction is input within the listening time, so as to continuously acquire a voice wakeup word of a voice service that can be used for waking up the device to be controlled in the environment to which the device to be controlled belongs. The specific functions and processes of the control unit 104 are also referred to in step S430.
Therefore, the voice recognition state is exited under the condition that no new voice command is input within the set listening time, so that the current voice recognition state is exited under the condition of high environmental noise to be awakened again and reentered into the new voice recognition state, the reliability of control is favorably improved, and resources consumed by continuous voice recognition under the condition of high environmental noise are also favorably saved.
In an alternative embodiment, the method may further include: the process of controlling the device to be controlled to enter the voice recognition state according to the voice wake-up word specifically comprises the following steps:
the obtaining unit 102 may be further configured to obtain a voice wakeup word of a voice service that may be used to wake up the device to be controlled in the environment to which the device to be controlled belongs, before obtaining the sound information in the environment to which the device to be controlled belongs in the voice recognition state of the device to be controlled. The specific functions and processes of the acquisition unit 102 are also referred to in step S510.
The control unit 104 may be further configured to control the device to be controlled to enter a voice recognition state according to the voice wakeup word, so as to start a voice service of the device to be controlled. The specific functions and processes of the control unit 104 are also referred to in step S520.
For example: and the intelligent voice equipment receives the voice awakening words and is awakened. And the intelligent voice equipment enters a voice recognition stage, receives voice and recognizes the voice.
Therefore, the device to be controlled is controlled to enter the voice recognition state under the condition that the voice wake-up word in the environment to which the device to be controlled belongs is received, so that the voice service of the device to be controlled is started, and the voice service is convenient and safe to use.
In an alternative embodiment, at least one of the following control modes may be further included.
The first control mode is as follows: the control unit 104 may be further configured to indicate that the device to be controlled is in a first state of the voice recognition state, and/or indicate that the device to be controlled exits from a second state of the voice recognition state. For example: and if the equipment to be controlled is in the voice recognition state, carrying out indicator lamp starting indication or display on the first state of the equipment to be controlled in the voice recognition state. And if the equipment to be controlled exits from the voice recognition state, the indicator lamp is turned off or at least displayed in a second state that the equipment to be controlled exits from the voice recognition state.
The second control mode is as follows: the control unit 104 may be further configured to prompt a state change condition if the device to be controlled sends a state change between a first state in the voice recognition state and a second state in which the device exits the voice recognition state.
For example: and when the intelligent equipment is in a voice recognition state, the indicating lamp is turned on for indication. The exit voice recognition status indicator light is turned off. The voice device sends out a buzzing prompt tone when the state changes.
Therefore, the current state of the equipment to be controlled can be conveniently known by a user through indicating the online state or the offline state of the voice recognition state and prompting the state change between the online state and the offline state of the voice recognition state, and the method is strong in intuition and good in humanization.
Since the processes and functions implemented by the apparatus of this embodiment substantially correspond to the embodiments, principles and examples of the method shown in fig. 1 to 5, the description of this embodiment is not detailed, and reference may be made to the related descriptions in the foregoing embodiments, which are not repeated herein.
Through a large number of tests and verifications, the technical scheme of the invention simplifies the user operation under the condition of not influencing the accuracy rate of voice recognition, intelligently omits the input of user awakening words, and improves the user experience and the humanization degree of interaction.
According to the embodiment of the invention, an air conditioner corresponding to the voice control device is also provided. The air conditioner may include: the voice control device described above.
In an optional embodiment, in order to solve the problem that the control of the wake-up word affects the user experience, the invention provides a voice recognition state control method for a voice air conditioner.
In an optional example, according to the scheme of the invention, the error recognition rate of the voice recognition is judged based on the detected ambient noise, the listening exit time of the voice recognition is intelligently adjusted, the user operation is simplified under the condition of not influencing the accuracy rate of the voice recognition, and the user awakening word input is intelligently omitted. Therefore, by adopting the awakening control method, the awakening words can be omitted in a certain scene, intelligent control can be directly carried out, user experience is optimized, and interaction is more friendly and intelligent.
In an alternative specific example, a specific implementation process of the scheme of the present invention may be exemplarily described with reference to an example shown in fig. 7.
If the intelligent device stays in the voice recognition state in a noisy environment, false recognition and false triggering are likely, whereas in a quiet environment the probability of false voice triggering is low. Based on this, the intelligent device is in the voice recognition stage after being woken up. Once high environmental noise is detected, the voice recognition state is exited after a certain waiting time, and the intelligent device must be woken up again with the voice wake-up word if further control is needed. After the intelligent device is woken up, if it remains in a quiet environment, false recognition is unlikely, and the intelligent device can accurately recognize the user's voice control operations without misrecognition; the intelligent device therefore stays in the voice recognition state, and the voice wake-up word does not need to be entered again for the next voice control.
In an alternative specific example, referring to the example shown in fig. 7, the voice recognition state control method for a voice air conditioner according to the present invention may include:
step 1, the intelligent voice equipment receives the voice awakening words and is awakened.
And 2, the intelligent voice equipment enters a voice recognition stage, receives and recognizes the voice.
For example: in far-field recognition (remote voice control), to avoid false wake-ups, remote voice recognition requires the device to be woken up first and the control instruction to be spoken afterwards, for example "Tianmaoling" followed by "playing music". After the device is woken up there is a period of listening time; the existing strategy is that if no voice input arrives for a period of time, the device is considered to have exited the voice recognition mode. But using the same listening time in a quiet environment is not reasonable. The scheme of the invention therefore adaptively (intelligently) adjusts the listening time according to the ambient noise conditions, extending it in a quiet environment and sparing the user the annoyance of frequently entering the wake-up word.
For example: and in the process of receiving the voice, detecting the ambient noise decibel (when no speaking sound exists), and if the ambient noise is high, ending the listening time according to the original time length. If the ambient noise is low, the listening time is extended and the recognition is continued until the ambient noise reaches a threshold value. Such as: we set 25 db as the noise threshold of the present invention, the human speech is typically greater than 40 db; we consider 25-40 db to be approximately above the threshold. And the quiet environment is 10-25 decibels.
Step 3, voice is collected and the environmental noise is detected. If the environmental noise exceeds the set threshold in decibels, speech recognition enters the wait-to-exit mode; if the environmental noise is below the set threshold, speech recognition enters the continuous recognition mode.
For example: the collection of the environmental noise can be carried out at any time without judging whether the voice (human voice) is collected or not. Because in the case of far-field identification, the device is always picking up sound.
For example: voice collection: under far-field recognition, voice is collected in real time, and based on an end point detection voice recognition technology of short-time energy and zero crossing rate, the voice energy steep increase and steep decrease is calculated to be speaking voice.
For example: detecting environmental noise: under far-field recognition, voice is collected in real time, and the voice recognition technology based on short-time energy and zero crossing rate calculates that the voice energy is regarded as environmental noise within a certain environmental noise energy range within continuous time.
Step 4, in the wait-to-exit mode, if no valid voice input is collected within a specific time t, the voice recognition mode is exited. Re-entering the voice recognition state then requires a wake-up instruction to be received first.
Step 5, in the continuous recognition mode, voice is collected continuously and the user does not need to issue a voice wake-up instruction before each control voice instruction. This continues until the user exits the voice recognition state with a voice control instruction, or the intelligent device detects that the environmental noise exceeds the preset threshold, at which point the device switches to the wait-to-exit mode.
Step 6: judging environmental noise: if a voice control command can be recognized from the collected audio, the waveform segments outside the command are filtered out as environmental noise; if no voice control command can be recognized from the collected audio, all of the audio is regarded as environmental noise.
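One way to express the step-6 rule in code is sketched below; representing the recognized command as a (start, end) sample span is an assumption made only for this illustration.

def noise_segments(total_samples: int, command_span):
    # command_span is (start, end) of the recognized voice control command,
    # or None if no command could be recognized from the collected audio.
    if command_span is None:
        return [(0, total_samples)]            # everything is environmental noise
    start, end = command_span
    segments = []
    if start > 0:
        segments.append((0, start))            # audio before the command
    if end < total_samples:
        segments.append((end, total_samples))  # audio after the command
    return segments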
Step 7: an indicator light is turned on when the intelligent device is in the voice recognition state, and is turned off when the device exits the voice recognition state. The voice device emits a buzzing prompt tone whenever the state changes.
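Step 7 amounts to driving an indicator and a buzzer from state transitions; the sketch below assumes placeholder hardware callbacks (set_led, beep) rather than any specific driver API.

class StateIndicator:
    def __init__(self, set_led, beep) -> None:
        self._set_led = set_led          # callable taking a bool (light on/off)
        self._beep = beep                # callable emitting the prompt tone
        self._in_recognition = False     # device assumed to start outside recognition

    def update(self, in_recognition: bool) -> None:
        if in_recognition != self._in_recognition:
            self._beep()                     # buzzing prompt tone on any change
            self._set_led(in_recognition)    # light on while in recognition state
            self._in_recognition = in_recognition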
Since the processing and functions of this embodiment substantially correspond to the embodiments, principles and examples of the apparatus shown in fig. 6, details not covered in this description may be found in the related descriptions of the foregoing embodiments and are not repeated here.
A large number of tests verify that, by adopting the technical solution of the present invention, the listening time is adaptively adjusted according to the ambient noise condition, so that the listening time is prolonged in a quiet environment, the annoyance of the user having to repeatedly input the wake-up word is avoided, user experience is optimized, and the control flow is simplified.
According to an embodiment of the present invention, a storage medium corresponding to the voice control method is also provided. The storage medium has a plurality of instructions stored therein; the instructions are configured to be loaded by a processor to execute the voice control method.
Since the processing and functions implemented by the storage medium of this embodiment substantially correspond to the embodiments, principles and examples of the methods shown in fig. 1 to fig. 5, details not covered in this description may be found in the related descriptions of the foregoing embodiments and are not repeated here.
A large number of tests verify that, by adopting the technical solution of the present invention, the likelihood of voice misrecognition is assessed based on detection of the surrounding ambient noise, the listening exit time of voice recognition is intelligently adjusted, user operation is simplified without affecting voice recognition accuracy, the wake-up word input is intelligently omitted where possible, and user experience is improved.
According to an embodiment of the present invention, an air conditioner corresponding to the voice control method is also provided. The air conditioner may include: a processor configured to execute a plurality of instructions; and a memory configured to store the plurality of instructions; wherein the instructions are stored in the memory, and are loaded by the processor to execute the voice control method.
Since the processing and functions of the air conditioner of this embodiment substantially correspond to the embodiments, principles and examples of the methods shown in fig. 1 to 5, details not covered in this description may be found in the related descriptions of the foregoing embodiments and are not repeated here.
A large number of tests verify that, after the intelligent device is awakened, if it remains in a quiet environment, misrecognition is rare: the intelligent device can accurately recognize the user's voice control operations without generating misrecognitions, remain in the voice recognition state, and not require the voice wake-up word to be input again for the next voice control, thereby improving user experience.
In summary, it is readily understood by those skilled in the art that the advantageous modes described above can be freely combined and superimposed without conflict.
The above description is only an example of the present invention, and is not intended to limit the present invention, and it is obvious to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (17)

1. A voice control method, comprising:
acquiring sound information in an environment to which the equipment to be controlled belongs in a voice recognition state; the sound information includes: voice commands and/or ambient noise;
determining whether the noise volume of the environmental noise in the sound information exceeds a set noise threshold value;
if the noise volume does not exceed the noise threshold, controlling the equipment to be controlled to be kept in the voice recognition state according to a voice instruction in the sound information; if the noise volume exceeds the noise threshold, controlling the equipment to be controlled to exit the voice recognition state according to a voice instruction in the sound information and the environmental noise;
collecting voice and detecting environmental noise, wherein if the environmental noise exceeds a set threshold in decibels, voice recognition starts a wait-to-exit mode, and if the environmental noise is below the set threshold in decibels, voice recognition starts a continuous recognition mode; and, in the wait-to-exit mode, exiting the voice recognition mode if no valid voice input is acquired within a specific time t, wherein entering the voice recognition state again requires that a wake-up command be received first.
2. The method according to claim 1, wherein controlling the device to be controlled to be kept in the voice recognition state according to the voice instruction in the sound information comprises:
performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction;
if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction;
or, if the voice instruction in the sound information is not the user control instruction, continuing to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state.
3. The method of claim 1, wherein controlling the device to be controlled to exit the voice recognition state according to the voice instruction in the sound information and the environmental noise comprises:
performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction;
if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state;
or, if the voice instruction in the sound information is not the user control instruction, determining that the sound information is environmental noise, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state.
4. The method of claim 3, wherein controlling the device to be controlled to exit the speech recognition state according to the listening time in the speech recognition state comprises:
determining whether a new voice command is input within the set listening time in the voice recognition state;
if a new voice command is input within the listening time, continuously acquiring the sound information of the environment to which the equipment to be controlled belongs in the voice recognition state;
or if no new voice command is input in the listening time, exiting the voice recognition state.
5. The method of any one of claims 1-4, further comprising:
acquiring a voice awakening word of a voice service for awakening the equipment to be controlled in the environment to which the equipment to be controlled belongs;
and controlling the equipment to be controlled to enter a voice recognition state according to the voice awakening word so as to start the voice service of the equipment to be controlled.
6. The method of any one of claims 1-4, further comprising:
indicating a first state in which the equipment to be controlled is in the voice recognition state and/or a second state in which the equipment to be controlled has exited the voice recognition state;
and/or,
if the equipment to be controlled undergoes a state change between the first state, in which it is in the voice recognition state, and the second state, in which it has exited the voice recognition state, prompting the state change.
7. The method of claim 5, further comprising:
indicating a first state in which the equipment to be controlled is in the voice recognition state and/or a second state in which the equipment to be controlled has exited the voice recognition state;
and/or,
if the equipment to be controlled undergoes a state change between the first state, in which it is in the voice recognition state, and the second state, in which it has exited the voice recognition state, prompting the state change.
8. A voice control apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring sound information in the environment of the device to be controlled in the voice recognition state of the device to be controlled; the sound information includes: voice commands and/or ambient noise;
the control unit is used for determining whether the noise volume of the environmental noise in the sound information exceeds a set noise threshold value;
the control unit is further configured to control the device to be controlled to be kept in the voice recognition state according to a voice instruction in the sound information if the noise volume does not exceed the noise threshold; the control unit is further used for controlling the equipment to be controlled to exit the voice recognition state according to the voice command in the sound information and the environmental noise if the noise volume exceeds the noise threshold;
collecting voice and detecting environmental noise, wherein if the environmental noise exceeds a set threshold in decibels, voice recognition starts a wait-to-exit mode, and if the environmental noise is below the set threshold in decibels, voice recognition starts a continuous recognition mode; and, in the wait-to-exit mode, exiting the voice recognition mode if no valid voice input is acquired within a specific time t, wherein entering the voice recognition state again requires that a wake-up command be received first.
9. The apparatus according to claim 8, wherein the control unit controls the device to be controlled to remain in the voice recognition state according to the voice instruction in the sound information, and includes:
performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction;
if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction;
or, if the voice instruction in the sound information is not the user control instruction, continuing to acquire the sound information in the environment to which the device to be controlled belongs in the voice recognition state.
10. The apparatus according to claim 8, wherein the control unit controls the device to be controlled to exit the voice recognition state according to the voice instruction in the sound information and the environmental noise, which comprises:
performing semantic analysis on the voice instruction in the sound information to determine whether the voice instruction in the sound information is a set user control instruction;
if the voice instruction in the sound information is the user control instruction, controlling the equipment to be controlled to execute the user control instruction, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state;
or, if the voice instruction in the sound information is not the user control instruction, determining that the sound information is environmental noise, and controlling the equipment to be controlled to exit the voice recognition state according to the listening time in the voice recognition state.
11. The apparatus according to claim 10, wherein the control unit controls the device to be controlled to exit the voice recognition state according to the listening time in the voice recognition state, comprising:
determining whether a new voice command is input within the set listening time in the voice recognition state;
if a new voice command is input within the listening time, continuously acquiring the sound information of the environment to which the equipment to be controlled belongs in the voice recognition state;
or if no new voice command is input in the listening time, exiting the voice recognition state.
12. The apparatus of any one of claims 8-11, further comprising:
the acquiring unit is further configured to acquire a voice wakeup word of a voice service for waking up the device to be controlled in an environment to which the device to be controlled belongs;
the control unit is further configured to control the device to be controlled to enter a voice recognition state according to the voice wake-up word, so as to start a voice service of the device to be controlled.
13. The apparatus of any one of claims 8-11, further comprising:
the control unit is further configured to indicate a first state in which the equipment to be controlled is in the voice recognition state and/or a second state in which the equipment to be controlled has exited the voice recognition state;
and/or,
the control unit is further configured to prompt the state change if the device to be controlled undergoes a state change between the first state, in which it is in the voice recognition state, and the second state, in which it has exited the voice recognition state.
14. The apparatus of claim 12, further comprising:
the control unit is further configured to indicate a first state in which the equipment to be controlled is in the voice recognition state and/or a second state in which the equipment to be controlled has exited the voice recognition state;
and/or,
the control unit is further configured to prompt the state change if the device to be controlled undergoes a state change between the first state, in which it is in the voice recognition state, and the second state, in which it has exited the voice recognition state.
15. An air conditioner, comprising: a speech-controlled apparatus according to any one of claims 8 to 14.
16. A storage medium having a plurality of instructions stored therein; the plurality of instructions for being loaded by a processor and for performing the voice control method of any of claims 1-7.
17. An air conditioner, comprising:
a processor for executing a plurality of instructions;
a memory to store a plurality of instructions;
wherein the instructions are stored in the memory, and are loaded by the processor to execute the voice control method of any of claims 1-7.
CN201811489155.0A 2018-12-06 2018-12-06 Voice control method and device, storage medium and air conditioner Active CN109671426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811489155.0A CN109671426B (en) 2018-12-06 2018-12-06 Voice control method and device, storage medium and air conditioner

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811489155.0A CN109671426B (en) 2018-12-06 2018-12-06 Voice control method and device, storage medium and air conditioner

Publications (2)

Publication Number Publication Date
CN109671426A CN109671426A (en) 2019-04-23
CN109671426B true CN109671426B (en) 2021-01-29

Family

ID=66143640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811489155.0A Active CN109671426B (en) 2018-12-06 2018-12-06 Voice control method and device, storage medium and air conditioner

Country Status (1)

Country Link
CN (1) CN109671426B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112530419B (en) * 2019-09-19 2024-05-24 百度在线网络技术(北京)有限公司 Speech recognition control method, device, electronic equipment and readable storage medium
CN112581945A (en) * 2019-09-29 2021-03-30 百度在线网络技术(北京)有限公司 Voice control method and device, electronic equipment and readable storage medium
CN110808030B (en) * 2019-11-22 2021-01-22 珠海格力电器股份有限公司 Voice awakening method, system, storage medium and electronic equipment
CN114999487A (en) * 2019-11-29 2022-09-02 添可智能科技有限公司 Voice interaction method of cleaning equipment and cleaning equipment
CN111192597A (en) * 2019-12-27 2020-05-22 浪潮金融信息技术有限公司 Processing method of continuous voice conversation in noisy environment
CN111243577B (en) * 2020-03-27 2022-04-19 四川虹美智能科技有限公司 Voice interaction method and device
CN111651135B (en) * 2020-04-27 2021-05-25 珠海格力电器股份有限公司 Sound awakening method and device, storage medium and electrical equipment
CN111681655A (en) * 2020-05-21 2020-09-18 北京声智科技有限公司 Voice control method and device, electronic equipment and storage medium
CN112233673A (en) * 2020-10-10 2021-01-15 广东美的厨房电器制造有限公司 Control method of kitchen system, and computer-readable storage medium
CN112233676B (en) * 2020-11-20 2024-07-23 深圳市欧瑞博科技股份有限公司 Intelligent device awakening method and device, electronic device and storage medium
CN114578955A (en) * 2020-11-30 2022-06-03 阿里巴巴(中国)有限公司 Gesture control method and device, electronic equipment and computer storage medium
CN112652304B (en) * 2020-12-02 2022-02-01 北京百度网讯科技有限公司 Voice interaction method and device of intelligent equipment and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140358552A1 (en) * 2013-05-31 2014-12-04 Cirrus Logic, Inc. Low-power voice gate for device wake-up
WO2015149216A1 (en) * 2014-03-31 2015-10-08 Intel Corporation Location aware power management scheme for always-on- always-listen voice recognition system
CN103986839A (en) * 2014-05-30 2014-08-13 深圳市中兴移动通信有限公司 Method for automatically setting contextual model and mobile terminal
CN104820556A (en) * 2015-05-06 2015-08-05 广州视源电子科技股份有限公司 Method and device for waking up voice assistant
CN105261368B (en) * 2015-08-31 2019-05-21 华为技术有限公司 A kind of voice awakening method and device
CN105451137A (en) * 2015-12-25 2016-03-30 广东欧珀移动通信有限公司 User equipment wake-up method and device
CN107145329A (en) * 2017-04-10 2017-09-08 北京猎户星空科技有限公司 Apparatus control method, device and smart machine

Also Published As

Publication number Publication date
CN109671426A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN109671426B (en) Voice control method and device, storage medium and air conditioner
CN110060685B (en) Voice wake-up method and device
CN110428810B (en) Voice wake-up recognition method and device and electronic equipment
US8972252B2 (en) Signal processing apparatus having voice activity detection unit and related signal processing methods
US20180082684A1 (en) Voice Control User Interface with Progressive Command Engagement
US9026444B2 (en) System and method for personalization of acoustic models for automatic speech recognition
CN104580699B (en) Acoustic control intelligent terminal method and device when a kind of standby
CN110349579B (en) Voice wake-up processing method and device, electronic equipment and storage medium
CN109697981B (en) Voice interaction method, device, equipment and storage medium
CN109637531B (en) Voice control method and device, storage medium and air conditioner
CN105009204A (en) Speech recognition power management
CN110751948A (en) Voice recognition method, device, storage medium and voice equipment
CN111768783A (en) Voice interaction control method, device, electronic equipment, storage medium and system
CN109686368B (en) Voice wake-up response processing method and device, electronic equipment and storage medium
CA3164079A1 (en) Smart-device-orientated feedback awaking method and smart device thereof
CN111862965A (en) Awakening processing method and device, intelligent sound box and electronic equipment
CN109859752A (en) Voice control method, device, storage medium and voice joint control system
WO2019007247A1 (en) Human-machine conversation processing method and apparatus, and electronic device
US12062361B2 (en) Wake word method to prolong the conversational state between human and a machine in edge devices
CN111261143B (en) Voice wakeup method and device and computer readable storage medium
CN109686372B (en) Resource playing control method and device
CN112002315A (en) Voice control method and device, electrical equipment, storage medium and processor
CN111768604B (en) Remote controller control method, remote controller and electrical equipment
CN112712799A (en) Method, device, equipment and storage medium for acquiring false trigger voice information
CN113096651A (en) Voice signal processing method and device, readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant