CN111048079A - Man-machine conversation method, system, electronic device and storage medium - Google Patents
Man-machine conversation method, system, electronic device and storage medium
- Publication number
- CN111048079A CN111048079A CN201910957296.9A CN201910957296A CN111048079A CN 111048079 A CN111048079 A CN 111048079A CN 201910957296 A CN201910957296 A CN 201910957296A CN 111048079 A CN111048079 A CN 111048079A
- Authority
- CN
- China
- Prior art keywords
- unit
- control
- equipment
- information
- mapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The invention discloses a man-machine conversation method applied to a man-machine conversation system. The system comprises a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, with the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit each interacting with the control unit. The method comprises the following steps: the voice recognition unit receives voice information and converts it into text information for recognition; the analysis unit processes the text information and judges whether it belongs to the home-control category, returning to the previous step if it does not; the mapping unit maps the text information to a corresponding control action; and the equipment extraction unit extracts the corresponding equipment and control instruction according to the mapping unit's control action, after which the control unit controls the equipment to execute the action and feeds back the control result.
Description
Technical Field
The invention relates to the field of smart home, in particular to a man-machine conversation method, a man-machine conversation system, electronic equipment and a storage medium.
Background
At present, the home industry is rapidly moving in an intelligent direction. Users are no longer limited to controlling home equipment with a handheld remote control or a switch; the current trend is to control equipment in natural language through man-machine conversation, which greatly improves the user experience.
Patent literature (application No. CN201810883624.0, publication No. CN109166576A) discloses a voice-controlled smart home control system comprising a cloud server, a terminal mobile phone, a Bluetooth module, a controller and an execution module. The terminal mobile phone is provided with a voice acquisition module connected to a voice recognition module; the controller is connected to the Bluetooth module and the execution module, the execution module comprising household appliances and sensors; and the terminal mobile phone is wirelessly connected to the cloud server and the Bluetooth module. That invention simplifies manual operation by using voice control over household appliances and sensors: the user speaks the corresponding command and achieves unified intelligent control, which is convenient and quick and improves user experience and practicality. However, this approach exhibits the following shortcomings of the prior art:
1. Man-machine conversations that omit the device name cannot be handled.
2. A single sentence that controls multiple devices simultaneously cannot be processed accurately.
3. Some everyday wordings cannot be handled, such as: "Su Wei Shi".
4. The user's control intention is not truly understood, which easily causes erroneous control, for example: "How do I turn the air conditioner's temperature down?"
Disclosure of Invention
To overcome the shortcomings of the prior art, an object of the present invention is to provide a man-machine conversation method, system, electronic device and storage medium that can solve the home-control problem.
One of the objects of the invention is achieved by the following technical scheme:
A man-machine conversation method is applied to a man-machine conversation system. The system comprises a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, with the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit each interacting with the control unit. The method comprises the following steps:
a voice recognition step: the voice recognition unit receives voice information and converts it into text information for recognition;
a voice analysis step: the analysis unit processes the text information and judges whether it belongs to the home-control category; if not, the method returns to the previous step;
a mapping step: the mapping unit maps the text information to the corresponding control action;
an equipment extraction step: the equipment extraction unit extracts the corresponding equipment and control instruction according to the mapping unit's control action, and the control unit controls the equipment to execute the action and feeds back the control result.
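The four steps above can be sketched as a simple dispatch pipeline. This is a minimal illustration only, not the patented implementation: the helper names (`recognize`, `is_home_control`, `map_to_action`, `extract_and_execute`) and the keyword rules inside them are invented for the sketch.

```python
# Minimal sketch of the four-step pipeline: recognition -> analysis ->
# mapping -> equipment extraction. All rules here are illustrative stubs.

def recognize(audio: str) -> str:
    """Speech recognition step (stubbed: a real system would call an ASR engine)."""
    return audio

def is_home_control(text: str) -> bool:
    """Analysis step: decide whether the text belongs to the home-control category."""
    return any(word in text for word in ("air conditioner", "light", "temperature", "hot"))

def map_to_action(text: str) -> dict:
    """Mapping step: map the text to a control action (action/attribute/value)."""
    if "too hot" in text:
        return {"action": "set", "attribute": "temperature", "value": 25}
    return {"action": "unknown", "attribute": None, "value": None}

def extract_and_execute(action: dict) -> str:
    """Equipment extraction step: pick the device and 'execute' the action."""
    device = "air conditioner" if action["attribute"] == "temperature" else "unknown device"
    return f"{device}: {action['action']} {action['attribute']} -> {action['value']}"

def dialog_turn(audio: str) -> str:
    text = recognize(audio)
    if not is_home_control(text):
        return "not a home-control utterance"  # return to the recognition step
    return extract_and_execute(map_to_action(text))
```

For example, `dialog_turn("the weather is too hot")` walks the full pipeline, while a non-home-control sentence short-circuits back to recognition.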
Further, in the equipment extraction step, if the control action involves multiple devices, the control unit interacts with the user through the voice recognition unit to determine which device should perform the action.
Further, the man-machine conversation method further comprises a caching step: the control unit stores the action name, the attribute name and the attribute value of the corresponding action.
Further, the man-machine conversation method further comprises a control feedback step: after the action is completed, the control unit feeds the control result back to the user.
Further, in the equipment extraction step, the control unit performs context filling on the text information to improve its completeness.
Further, during context filling, the attribute name is matched to a device and filled in.
Further, in the voice analysis step, the analysis unit divides the text information into a time sentence, a place sentence, a reason sentence, and a numerical sentence.
A man-machine conversation system comprises a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, with the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit each interacting with the control unit. The voice recognition unit receives voice information and converts it into text information for recognition; the analysis unit processes the text information and judges whether it belongs to the home-control category; the mapping unit maps the text information to the corresponding control action; and the equipment extraction unit extracts the corresponding equipment and control instruction according to the mapping unit's control action, and the control unit controls the equipment to execute the action and feeds back the control result.
An electronic device, comprising: a processor;
a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the man-machine conversation method.
A computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to perform the man-machine conversation method.
Compared with the prior art, the invention has the beneficial effects that:
1. The voice recognition unit receives voice information and converts it into text information for recognition; the analysis unit processes the text information and judges whether it belongs to the home-control category, returning to the previous step if not; the mapping unit maps the text information to the corresponding control action; and the equipment extraction unit extracts the corresponding equipment and control instruction, after which the control unit controls the equipment to execute the action and feeds back the control result. By using syntactic analysis combined with a dialogue manager and sentence mapping, home-control text is accurately recognized and the human-computer interaction experience is improved.
2. The interactive capability of the man-machine conversation is improved through the mapping unit and syntactic analysis.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, to enable implementation according to the description, and to make the above and other objects, features and advantages more readily understood, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of a man-machine interaction method according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present. When a component is referred to as being "disposed on" another component, it can be directly on the other component or intervening components may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, a man-machine conversation method is applied to a man-machine conversation system. The system comprises a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, with the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit each interacting with the control unit. The method comprises the following steps:
a voice recognition step: the voice recognition unit receives voice information and converts it into text information for recognition;
a voice analysis step: the analysis unit processes the text information and judges whether it belongs to the home-control category; if not, the method returns to the previous step. Specifically, the analysis unit divides the text information into time sentences, place sentences, reason sentences and numerical sentences. In practice, the sentence type can be obtained through a syntactic analysis module; the types are summarized into nine categories, including why, what, how, whether and home-control sentences. For sentence types other than home control, the system gives dialogue feedback directly and exits the flow.
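A toy illustration of this sentence typing follows. The cue words are invented for the example; the patent does not disclose its syntactic parser, so this is a keyword stand-in only:

```python
# Toy sentence-type classifier: divides text into time / place / reason /
# numerical sentences as in the voice analysis step, falling through to the
# home-control flow otherwise. The cue words are illustrative guesses.

def sentence_type(text: str) -> str:
    cues = {
        "time": ("when", "tomorrow", "o'clock"),
        "place": ("where", "living room", "bedroom"),
        "reason": ("why", "because"),
        "numerical": ("degrees", "percent", "how many"),
    }
    lowered = text.lower()
    for kind, words in cues.items():
        if any(w in lowered for w in words):
            return kind
    return "home_control"  # hand the sentence to the control flow
```

A real system would replace the cue lists with the output of a syntactic analysis module.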
a mapping step: the mapping unit maps the text information to the corresponding control action. The mapping unit handles sentences that do not explicitly specify how to control the household equipment, such as "the weather is too hot". Through the mapping unit, such a sentence can be mapped to a specific control action; for example, "the weather is too hot" may be mapped to "set the air conditioner temperature to 25 degrees". Specifically, in this step the control action is converted into an action name, an attribute name and an attribute value.
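The mapping unit can be pictured as a lookup from whole sentences to (action name, attribute name, attribute value) triples. A sketch with one entry taken from the text and one invented:

```python
# Sketch of the mapping unit: sentences that do not explicitly name a device
# or command are mapped to (action name, attribute name, attribute value).
SENTENCE_TO_ACTION = {
    "the weather is too hot": ("set", "temperature", 25),   # example from the text
    "it is too dark in here": ("turn_on", "power", "on"),   # invented entry
}

def map_sentence(text: str):
    """Return the mapped control action triple, or None if no mapping applies."""
    return SENTENCE_TO_ACTION.get(text.strip().lower())
```

A production mapping unit would generalize beyond exact-match lookup (e.g. via parsing or similarity), but the output shape stays the same triple.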
an equipment extraction step: the equipment extraction unit extracts the corresponding equipment and control instruction according to the mapping unit's control action, and the control unit controls the equipment to execute the action and feeds back the control result. In this step, the control unit performs context filling on the text information to improve its completeness; during context filling, the attribute name is matched to a device and filled in. The extracted data is checked for integrity and completed in a context-filling module. For example, if the device name is missing ("temperature adjusted to 28 degrees"), the module fills it in: it first checks whether any device controlled in the last N rounds of conversation has the temperature attribute and, if so, selects the one operated most recently as the device to control. If not, it searches the database for a device with the temperature attribute and, if one exists, selects it as the device to control. By using syntactic analysis combined with a dialogue manager and sentence mapping, the method accurately recognizes home-control text, provides a better human-computer interaction experience, and improves the interactive capability of the man-machine conversation.
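The context-filling procedure just described (search the last N dialogue rounds for a device with the required attribute, otherwise fall back to the device database) might look like the following. The data shapes are assumptions for illustration:

```python
# Sketch of context filling in the equipment extraction step: when the
# utterance omits the device name, look back over recent dialogue rounds for
# a device that has the required attribute; fall back to the device database.

def fill_device(attribute: str, recent_devices: list, database: list):
    """recent_devices is ordered most-recent-first (the last N rounds)."""
    for dev in recent_devices:            # devices controlled in recent rounds
        if attribute in dev["attributes"]:
            return dev["name"]            # most recently operated match wins
    for dev in database:                  # fall back to the device database
        if attribute in dev["attributes"]:
            return dev["name"]
    return None                           # nothing to fill; ask the user

recent = [{"name": "fan", "attributes": {"speed"}},
          {"name": "air conditioner", "attributes": {"temperature", "mode"}}]
db = [{"name": "floor heater", "attributes": {"temperature"}}]
```

With these sample tables, "temperature adjusted to 28 degrees" resolves to the air conditioner from the dialogue history, and only falls back to the database when the history has no match.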
a session confirmation step, located between the mapping step and the equipment extraction step: when the mapping unit's control action involves multiple devices, the control unit interacts with the user through the voice recognition unit to determine which device should perform the action. This covers the case where no device is explicitly specified but multiple devices in the database support the action required by the current sentence; the user is asked back which device is to be controlled.
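The session confirmation step can be sketched as an ask-back when several candidate devices support the action. The prompt wording and function signature are invented for the sketch:

```python
# Sketch of session confirmation: if more than one device supports the
# requested action, ask the user back before executing.

def confirm_device(candidates: list, ask) -> str:
    """candidates: device names that all support the action.
    ask: callable that poses a question to the user and returns the answer."""
    if len(candidates) == 1:
        return candidates[0]              # no ambiguity, no question needed
    answer = ask(f"Which device do you mean: {', '.join(candidates)}?")
    return answer if answer in candidates else candidates[0]
```

In the real system the `ask` callable would route through the voice recognition unit; here it can be any function, which also makes the step easy to test.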
A caching step: the control unit stores the action name, the attribute name and the attribute value of the corresponding action.
a control feedback step: after the action is completed, the control unit feeds the control result back to the user.
A man-machine conversation system comprises a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, with the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit each interacting with the control unit. The voice recognition unit receives voice information and converts it into text information for recognition; the analysis unit processes the text information and judges whether it belongs to the home-control category; the mapping unit maps the text information to the corresponding control action; and the equipment extraction unit extracts the corresponding equipment and control instruction according to the mapping unit's control action, and the control unit controls the equipment to execute the action and feeds back the control result.
An electronic device, comprising: a processor;
a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the man-machine conversation method.
A computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to perform the man-machine conversation method. The approach is novel, broadly applicable, and easy to popularize.
The above embodiments are only preferred embodiments of the present invention and do not limit its protection scope; any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention fall within its protection scope.
Claims (10)
1. A man-machine conversation method applied to a man-machine conversation system, the man-machine conversation system comprising a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit each interacting with the control unit; the method being characterized by comprising the following steps:
a voice recognition step: the voice recognition unit receives voice information and converts it into text information for recognition;
a voice analysis step: the analysis unit processes the text information and judges whether it belongs to the home-control category; if not, the method returns to the previous step;
a mapping step: the mapping unit maps the text information to the corresponding control action;
an equipment extraction step: the equipment extraction unit extracts the corresponding equipment and control instruction according to the mapping unit's control action, and the control unit controls the equipment to execute the action and feeds back the control result.
2. The man-machine conversation method of claim 1, wherein, in the equipment extraction step, if the control action involves a plurality of devices, the control unit interacts with the user through the voice recognition unit and determines the device that performs the action.
3. The man-machine conversation method of claim 1, further comprising a caching step: the control unit stores the action name, the attribute name and the attribute value of the corresponding action.
4. The man-machine conversation method of claim 1, further comprising a control feedback step: after the action is completed, the control unit feeds the control result back to the user.
5. The man-machine conversation method of claim 1, wherein, in the equipment extraction step, the control unit performs context filling on the text information to improve its completeness.
6. The man-machine conversation method of claim 5, wherein, during context filling, the attribute name is matched to a device and filled in.
7. The man-machine conversation method of claim 1, wherein, in the voice analysis step, the analysis unit divides the text information into time sentences, place sentences, reason sentences and numerical sentences.
8. A man-machine conversation system is characterized by comprising a voice recognition unit, an analysis unit, a mapping unit, an equipment extraction unit and a control unit, wherein the voice recognition unit, the analysis unit, the mapping unit and the equipment extraction unit are respectively interacted with the control unit; the voice recognition unit receives voice information and converts the voice information into character information for recognition; the analysis unit processes the text information and judges whether the text information belongs to the household control category; the mapping unit maps the character information to the corresponding control action; and the equipment extraction unit extracts corresponding equipment and control instructions according to the corresponding control actions of the mapping unit, and the control unit controls the corresponding equipment to execute the control actions and feed back a control result.
9. An electronic device, characterized by comprising: a processor;
a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, the program comprising instructions for carrying out the method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program is executed by a processor for performing the method according to any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910957296.9A CN111048079A (en) | 2019-10-09 | 2019-10-09 | Man-machine conversation method, system, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111048079A true CN111048079A (en) | 2020-04-21 |
Family
ID=70232233
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910957296.9A Pending CN111048079A (en) | 2019-10-09 | 2019-10-09 | Man-machine conversation method, system, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111048079A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112151037A (en) * | 2020-09-23 | 2020-12-29 | 江苏小梦科技有限公司 | Man-machine conversation system based on embedded software |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105388772A (en) * | 2015-12-04 | 2016-03-09 | 重庆财信合同能源管理有限公司 | Indoor intelligent control system and method based on voice recognition |
CN108337139A (en) * | 2018-01-29 | 2018-07-27 | 广州索答信息科技有限公司 | Home appliance voice control method, electronic equipment, storage medium and system |
CN108694942A (en) * | 2018-04-02 | 2018-10-23 | 浙江大学 | A kind of smart home interaction question answering system based on home furnishings intelligent service robot |
CN110246496A (en) * | 2019-07-01 | 2019-09-17 | 珠海格力电器股份有限公司 | Audio recognition method, system, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200421 |