CN110136707B - Man-machine interaction system for multi-equipment autonomous decision making - Google Patents

Man-machine interaction system for multi-equipment autonomous decision making

Info

Publication number
CN110136707B
CN110136707B (application CN201910323610.8A)
Authority
CN
China
Prior art keywords
module
equipment
information
submodule
terminal equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910323610.8A
Other languages
Chinese (zh)
Other versions
CN110136707A (en)
Inventor
Li Xiaohan (李霄寒)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd filed Critical Unisound Intelligent Technology Co Ltd
Priority to CN201910323610.8A priority Critical patent/CN110136707B/en
Publication of CN110136707A publication Critical patent/CN110136707A/en
Application granted granted Critical
Publication of CN110136707B publication Critical patent/CN110136707B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command
    • G10L2015/225 - Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention provides a man-machine interaction system for multi-device autonomous decision making. A plurality of equipment intermediary modules acquire confidence data of their corresponding terminal devices with respect to a voice signal from the outside; confidence comprehensive information is generated from the confidence data and first necessary information about the terminal devices stored in a knowledge base module; a decision center module then generates a control feedback signal for controlling the terminal devices from the confidence comprehensive information and second necessary information about the terminal devices in the knowledge base module, and sends the control feedback signal to one of the terminal devices according to a preset rule, so as to control that terminal device to make an adaptive response operation.

Description

Man-machine interaction system for multi-equipment autonomous decision making
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a human-computer interaction system for performing multi-device autonomous decision making.
Background
With the development of artificial intelligence technology, artificial intelligence has gradually entered people's lives and work. By means of artificial intelligence technology, a user can implement different control operations on different terminal devices, and these operations differ markedly from traditional control means. Traditional control is realized through direct contact, such as key presses performed by the user on the terminal device. At present, when control is implemented by means of artificial intelligence technology, the user sends a voice control command to the terminal device to drive it to make an adaptive working-state switch. The terminal device switches its working state according to the voice control command from the user, mainly by performing voice paraphrasing on the command through artificial intelligence technology so as to obtain the voice intention of the user actually contained in the command. This voice interaction mode, in which voice paraphrasing is performed on the corresponding voice control command, is the main interaction mode adopted by existing man-machine interaction systems, and it can maximize the convenience and accuracy with which users control different terminal devices.
Because the voice interaction mode is widely applied to the control of different terminal devices, when a user interacts by voice with a single terminal device in a given space, the user needs to wake up the terminal device by calling its name and then carry out a series of subsequent interactions with it, thereby realizing voice control of that device. In an actual application scenario, however, there may be more than one terminal device with a voice control function in the same space. When multiple terminal devices with voice interaction control exist in the same space, the user has to set a different name for each of them; otherwise, when the user calls out a name, multiple different terminal devices may respond at the same time. To overcome this problem the user must assign different names to different terminal devices, but this requires the user to memorize the mapping between each terminal device and its assigned name, which is very inconvenient; in addition, when terminal devices are located in different spaces, the existing voice interaction mode does not allow a user in one space to perform voice control on a terminal device in another space.
Disclosure of Invention
The invention provides a human-computer interaction system for multi-device autonomous decision making. A plurality of device intermediary modules acquire confidence data of their corresponding terminal devices with respect to a voice signal from the outside; confidence comprehensive information is generated from the confidence data and first necessary information about the terminal devices in a knowledge base module; a decision center module then generates a control feedback signal for controlling the terminal devices from the confidence comprehensive information and second necessary information about the terminal devices in the knowledge base module, and sends the control feedback signal to one terminal device according to a preset rule, so as to control that terminal device to make an adaptive response operation. In this way, the system comprehensively analyzes and judges, through the confidence analysis module, the information provided by the different terminal devices, so that when different terminal devices are awakened by the voice signal, the most appropriate terminal device to be awakened is determined and is given the corresponding feedback response role, which greatly improves the user's voice control experience. In addition, through the decision center module, the system also determines the corresponding working mode of the terminal device by combining the user's voice intention in the voice signal with the operation function information of each terminal device in the knowledge base module, thereby completing voice control functions that the existing voice interaction mode cannot realize.
The invention provides a human-computer interaction system for multi-equipment autonomous decision making, which is characterized in that:
the human-computer interaction system for carrying out the multi-device autonomous decision comprises a plurality of device intermediary modules, a confidence coefficient analysis module, a decision center module and a knowledge base module; wherein,
the equipment intermediary modules are used for being in one-to-one corresponding connection with the terminal equipment, and each equipment intermediary module is used for acquiring a voice signal from the outside so as to calculate confidence data of the terminal equipment correspondingly connected with the equipment intermediary module about the voice signal;
the confidence coefficient analysis module is used for generating comprehensive confidence coefficient information about all the confidence coefficient data according to the first necessary information stored by the knowledge base module;
the decision center module is used for sending a control feedback signal to one of the equipment intermediary modules according to a preset rule, based on second necessary information stored in the knowledge base module and the confidence comprehensive information, so as to control the terminal equipment corresponding to that equipment intermediary module to make an adaptive response operation;
further, each of the plurality of equipment intermediary modules comprises a voice signal receiving sub-module, a voice signal awakening sub-module and a voice signal analyzing sub-module; wherein
The voice signal receiving sub-module comprises a microphone array, and the microphone array is used for receiving voice signals from the outside;
the voice signal awakening submodule is used for awakening the corresponding terminal equipment according to the voice signal from the outside;
the voice signal analysis submodule is used for analyzing the voice signal from the outside so as to acquire voice paraphrase information about the voice signal from the outside;
further, the wake-up operation performed by the voice signal awakening sub-module on the corresponding terminal device specifically includes the following:
the voice signal awakening sub-module extracts a characteristic keyword from the voice signal from the outside and matches the characteristic keyword against the awakening word to which the corresponding terminal equipment belongs; wherein,
if the characteristic keyword is matched with the awakening word, the voice signal awakening sub-module executes awakening operation on corresponding terminal equipment;
if the characteristic keyword is not matched with the awakening word, the voice signal awakening sub-module does not execute awakening operation on the corresponding terminal equipment;
further, each of the plurality of equipment intermediary modules comprises a wakeup word detection engine submodule and a confidence coefficient calculation submodule;
the awakening word detection engine submodule is used for generating awakening word detection information about the terminal equipment in the voice signal from the outside;
the confidence coefficient calculation submodule is used for generating confidence coefficient data corresponding to each of all the terminal devices according to the voice paraphrasing information, the device function information corresponding to each of the plurality of terminal devices and the awakening word detection information;
further, each of the plurality of equipment mediation modules comprises an equipment function information acquisition submodule; wherein,
the device function information acquisition submodule is used for acquiring the device function information corresponding to each terminal device;
the equipment function information acquisition submodule at least comprises an audio playing function determination unit, a video playing function determination unit, an illumination function determination unit, a temperature control function determination unit, a motion function determination unit or a cleaning function determination unit;
the audio playing function determining unit is used for determining whether the terminal equipment has an audio playing function or not so as to form part of the equipment function information;
the video playing function determining unit is used for determining whether the terminal equipment has a video playing function or not so as to form part of the equipment function information;
the lighting function determining unit is used for determining whether the terminal equipment has a lighting function or not so as to form part of the equipment function information;
the temperature control function determining unit is used for determining whether the terminal equipment has a temperature control function or not so as to form part of the equipment function information;
the motion function determining unit is used for determining whether the terminal equipment has a motion displacement function or not so as to form part of the equipment function information;
the cleaning function determining unit is used for determining whether the terminal equipment has a cleaning capability or not so as to form part of the equipment function information;
further, the confidence coefficient analysis module comprises a first clock signal generation submodule, a confidence coefficient data receiving submodule, a first necessary information acquisition submodule and a confidence coefficient comprehensive calculation submodule; wherein,
the first clock signal generation submodule is used for generating a first clock signal;
the confidence data receiving submodule is used for receiving the confidence data from all the equipment intermediary modules according to the first clock signal;
the first necessary information acquisition submodule is used for acquiring the first necessary information corresponding to all the terminal equipment from the knowledge base module, wherein the first necessary information at least comprises information about the self operation function and the working state of the terminal equipment;
the confidence comprehensive calculation submodule is used for calculating to obtain the confidence comprehensive information according to the first necessary information and the confidence data;
further, the decision center module comprises a second necessary information acquisition submodule and a feedback signal generation submodule; wherein
The second necessary information acquisition submodule is used for acquiring second necessary information corresponding to all the terminal devices from the knowledge base module, wherein the second necessary information at least comprises content information of the voice signal, external user intention information contained in the voice signal, type information of the terminal devices, name information of the terminal devices or operation function information of the terminal devices;
the feedback signal generation submodule is used for generating the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
further, the decision center module comprises a second necessary information acquisition sub-module, a feedback signal generation sub-module and a terminal equipment designation sub-module; wherein,
the second necessary information acquisition sub-module is used for acquiring distance information between each terminal device and an external user from the knowledge base module to serve as the second necessary information;
the feedback signal generation submodule is used for generating the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
the terminal equipment appointing submodule is used for sending the control feedback signal to an equipment mediation module corresponding to the terminal equipment with the minimum distance to the outside user so that the equipment mediation module can implement adaptive feedback operation according to the control feedback signal;
further, the decision center module comprises a second necessary information acquisition sub-module, a feedback signal generation sub-module and a terminal equipment designation sub-module; wherein,
the second necessary information acquisition submodule is used for acquiring the second necessary information from the knowledge base module;
the feedback signal generation submodule is used for generating the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
the terminal equipment appointing submodule is used for determining one of a plurality of terminal equipment as target terminal equipment according to a preset selection rule and sending the control feedback signal to an equipment mediation module corresponding to the target terminal equipment so that the equipment mediation module can implement adaptive feedback operation according to the control feedback signal;
further, the device mediation module performing an adaptive feedback operation based on the control feedback signal may specifically include:
the equipment mediation module can, according to the control feedback signal, instruct the corresponding terminal equipment to execute a voice interaction feedback operation matched with the voice signal from the outside; or,
the equipment mediation module can instruct the terminal equipment indicated by the control feedback signal to execute a working-state switching operation matched with the voice signal from the outside.
Compared with the prior art, in the man-machine interaction system for multi-device autonomous decision making, the plurality of device intermediary modules acquire confidence data of their corresponding terminal devices with respect to a voice signal from the outside; confidence comprehensive information is generated from the confidence data and first necessary information about the terminal devices in the knowledge base module; the decision center module then generates a control feedback signal for controlling the terminal devices from the confidence comprehensive information and second necessary information about the terminal devices in the knowledge base module, and sends the control feedback signal to one of the terminal devices according to a preset rule, so as to control that terminal device to make an adaptive response operation. In this way, the system comprehensively analyzes and judges, through the confidence analysis module, the information provided by the different terminal devices, so that when different terminal devices are awakened by the voice signal, the most appropriate terminal device to be awakened is determined and is given the corresponding feedback response role, which greatly improves the user's voice control experience. In addition, through the decision center module, the system also determines the corresponding working mode of the terminal device by combining the user's voice intention in the voice signal with the operation function information of each terminal device in the knowledge base module, thereby completing voice control functions that the existing voice interaction mode cannot realize.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a human-computer interaction system for performing multi-device autonomous decision making according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of a human-computer interaction system for performing a multi-device autonomous decision according to an embodiment of the present invention. The human-computer interaction system for multi-device autonomous decision making may include, but is not limited to, a number of device mediation modules, a confidence analysis module, a decision center module, and a knowledge base module.
Preferably, the device intermediary modules are used for performing one-to-one corresponding connection with the terminal devices, and each device intermediary module is used for acquiring a voice signal from the outside, so as to calculate confidence data of the terminal device correspondingly connected with the device intermediary module about the voice signal;
preferably, the confidence coefficient analysis module is used for generating the confidence coefficient comprehensive information about all the confidence coefficient data according to the first necessary information stored by the knowledge base module;
preferably, the decision center module is configured to send a control feedback signal to one of the multiple device mediation modules according to a preset rule, based on the second necessary information stored in the knowledge base module and the comprehensive confidence information, so as to control the terminal device corresponding to that device mediation module to perform an adaptive response operation;
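As an illustrative (non-normative) sketch of this module decomposition, the four modules can be pictured as cooperating objects: each device mediation module scores the external voice signal for its own terminal device, the confidence analysis module aggregates those scores with the first necessary information from the knowledge base, and the decision center module picks one device and sends it a control feedback signal. All class and field names below are assumptions made for clarity, not terms fixed by the patent; Python is used only as a convenient notation.

    # Hypothetical sketch of the module decomposition described above.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ConfidenceData:
        device_id: str
        score: float  # how strongly this terminal device believes it was addressed

    @dataclass
    class KnowledgeBase:
        # "first necessary information": per-device operation functions and working states
        first_info: Dict[str, dict] = field(default_factory=dict)
        # "second necessary information": intent, device type/name, distances, ...
        second_info: Dict[str, dict] = field(default_factory=dict)

    class DeviceMediationModule:
        def __init__(self, device_id: str):
            self.device_id = device_id

        def compute_confidence(self, speech_text: str) -> ConfidenceData:
            # Placeholder scoring; the real module combines wake-word detection,
            # voice paraphrase (intent) analysis and device function information.
            return ConfidenceData(self.device_id, score=0.0)

    class ConfidenceAnalysisModule:
        def aggregate(self, data: List[ConfidenceData], kb: KnowledgeBase) -> Dict[str, float]:
            # Combine raw scores with the first necessary information (stubbed here).
            return {d.device_id: d.score for d in data}

    class DecisionCenterModule:
        def decide(self, aggregated: Dict[str, float], kb: KnowledgeBase) -> str:
            # Assumed preset rule: pick the device with the highest combined score.
            return max(aggregated, key=aggregated.get)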
preferably, each of the number of device mediation modules may include, but is not limited to, a voice signal receiving sub-module, a voice signal wake-up sub-module, and a voice signal analysis sub-module.
Preferably, the voice signal receiving sub-module comprises a microphone array for receiving a voice signal from the outside;
preferably, the voice signal wake-up sub-module is configured to wake up a corresponding terminal device according to the voice signal from the outside;
preferably, the speech signal analysis sub-module is configured to analyze the speech signal from the outside to obtain speech paraphrase information about the speech signal from the outside.
Preferably, the waking up of the terminal device by the voice signal wake-up sub-module specifically includes the following:
the voice signal wake-up sub-module extracts a characteristic keyword from the voice signal from the outside and matches the characteristic keyword against the awakening word to which the corresponding terminal device belongs; wherein,
if the feature keyword is matched with the awakening word, the voice signal awakening sub-module executes awakening operation on the corresponding terminal equipment;
and if the characteristic keyword is not matched with the awakening word, the voice signal awakening sub-module does not execute awakening operation on the corresponding terminal equipment.
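A minimal sketch of this wake-up decision, under two assumptions that the patent leaves open: the characteristic keyword is taken to be the leading words of the utterance, and matching is a normalized string comparison.

    # Hypothetical wake-word matching; keyword extraction and comparison rules are assumptions.
    import string

    def extract_feature_keyword(speech_text: str, num_words: int) -> str:
        # Take the leading words of the utterance (lower-cased, punctuation stripped).
        words = speech_text.lower().translate(str.maketrans("", "", string.punctuation)).split()
        return " ".join(words[:num_words])

    def should_wake(speech_text: str, wake_word: str) -> bool:
        # Wake the corresponding terminal device only if the characteristic keyword
        # matches the device's own awakening word.
        keyword = extract_feature_keyword(speech_text, len(wake_word.split()))
        return keyword == wake_word.lower()

    # Example: a device whose awakening word is "hello speaker"
    assert should_wake("Hello speaker, play some jazz", "hello speaker")
    assert not should_wake("Hello lamp, turn on", "hello speaker")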
Preferably, each of the number of device mediation modules may include, but is not limited to, a wake word detection engine sub-module and a confidence calculation sub-module;
preferably, the awakening word detection engine submodule is configured to generate awakening word detection information about the terminal device in the voice signal from the outside;
preferably, the confidence coefficient calculation sub-module is configured to generate the confidence coefficient data corresponding to each of all the terminal devices according to the speech paraphrasing information, the device function information corresponding to each of the plurality of terminal devices, and the wakeup word detection information.
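As a sketch of how such confidence data might be assembled from the wake-word detection result, the voice paraphrase (intent) information and the device function information: the intent-to-function mapping and the 0.6/0.4 weighting below are assumptions made purely for illustration, since the patent does not specify a formula.

    # Hypothetical confidence calculation; weights and the 0..1 scale are assumptions.
    def compute_confidence(wake_word_score: float, intent: str, device_functions: set) -> float:
        # Blend acoustic wake-word evidence with intent/function compatibility.
        intent_to_function = {
            "play_music": "audio_playback",
            "play_video": "video_playback",
            "turn_on_light": "lighting",
            "set_temperature": "temperature_control",
            "clean_floor": "cleaning",
        }
        required = intent_to_function.get(intent)
        function_match = 1.0 if required in device_functions else 0.0
        return 0.6 * wake_word_score + 0.4 * function_match

    # A speaker (audio playback) scores higher than a lamp for a "play_music" intent.
    speaker_score = compute_confidence(0.8, "play_music", {"audio_playback"})
    lamp_score = compute_confidence(0.8, "play_music", {"lighting"})
    assert speaker_score > lamp_score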
Preferably, each of the several device mediation modules may include, but is not limited to, a device function information acquisition sub-module;
preferably, the device function information obtaining sub-module is configured to obtain the device function information corresponding to each terminal device;
preferably, the device function information obtaining sub-module at least comprises an audio playing function determining unit, a video playing function determining unit, an illumination function determining unit, a temperature control function determining unit, a motion function determining unit or a cleaning function determining unit;
preferably, the audio playing function determining unit is configured to determine whether the terminal device has an audio playing function, so as to form a part of the device function information;
preferably, the video playing function determining unit is configured to determine whether the terminal device has a video playing function, so as to form a part of the device function information;
preferably, the illumination function determination unit is configured to determine whether the terminal device has an illumination function, thereby forming a part of the device function information;
preferably, the temperature control function determining unit is configured to determine whether the terminal device has a temperature control function, so as to form a part of the device function information;
preferably, the motion function determining unit is configured to determine whether the terminal device has a motion displacement function, thereby forming a part of the device function information;
preferably, the cleaning function determining unit is configured to determine whether the terminal device has cleaning capability, thereby forming part of the device function information.
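A compact way to picture the device function information produced by these determination units is a set of boolean capability flags per terminal device; the field names below are illustrative assumptions.

    # Hypothetical device-function record mirroring the determination units above.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeviceFunctionInfo:
        audio_playback: bool = False
        video_playback: bool = False
        lighting: bool = False
        temperature_control: bool = False
        motion: bool = False        # motion/displacement capability
        cleaning: bool = False

        def as_set(self) -> set:
            # Names of the functions this terminal device supports.
            return {name for name, enabled in vars(self).items() if enabled}

    robot_vacuum = DeviceFunctionInfo(motion=True, cleaning=True)
    smart_tv = DeviceFunctionInfo(audio_playback=True, video_playback=True)
    print(robot_vacuum.as_set())  # e.g. {'cleaning', 'motion'}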
Preferably, the confidence level analyzing module may include, but is not limited to, a first clock signal generating sub-module, a confidence level data receiving sub-module, a first necessary information obtaining sub-module, and a confidence level comprehensive calculating sub-module;
preferably, the first clock signal generation submodule is configured to generate a first clock signal;
preferably, the confidence data receiving submodule is configured to receive the confidence data from all the device mediation modules according to the first clock signal;
preferably, the first necessary information obtaining sub-module is configured to obtain the first necessary information corresponding to all the terminal devices from the knowledge base module, where the first necessary information at least includes information about the operating functions and operating states of the terminal devices themselves;
preferably, the confidence degree comprehensive calculation sub-module is configured to calculate the confidence degree comprehensive information according to the first necessary information and the confidence degree data.
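One way to read this pipeline is a periodic collection step driven by the first clock signal, followed by a combination of the raw confidence data with the first necessary information. The tick interval and the down-weighting of devices whose working state is busy are assumptions used only to make the sketch concrete.

    # Hypothetical clock-driven confidence aggregation; the weighting is an assumption.
    import time
    from typing import Callable, Dict

    def collect_on_tick(sources: Dict[str, Callable[[], float]], interval_s: float = 0.1) -> Dict[str, float]:
        # Wait one clock interval, then pull confidence data from every mediation module.
        time.sleep(interval_s)  # stands in for the first clock signal
        return {device_id: read() for device_id, read in sources.items()}

    def combine(confidences: Dict[str, float], first_info: Dict[str, dict]) -> Dict[str, float]:
        # Scale each raw confidence by the device's working state from the knowledge base.
        combined = {}
        for device_id, score in confidences.items():
            state = first_info.get(device_id, {}).get("state", "idle")
            combined[device_id] = score * (0.5 if state == "busy" else 1.0)
        return combined

    raw = collect_on_tick({"speaker": lambda: 0.9, "tv": lambda: 0.7})
    print(combine(raw, {"speaker": {"state": "idle"}, "tv": {"state": "busy"}}))
    # {'speaker': 0.9, 'tv': 0.35}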
Preferably, the decision center module includes, but is not limited to, a second necessary information acquisition sub-module and a feedback signal generation sub-module;
preferably, the second necessary information obtaining sub-module is configured to obtain the second necessary information corresponding to all the terminal devices from the knowledge base module, where the second necessary information at least includes content information of the voice signal, external user intention information included in the voice signal, type information of the terminal device, name information of the terminal device, or operation function information of the terminal device;
preferably, the feedback signal generation submodule is configured to generate the control feedback signal according to the second necessary information and the confidence level integrated information.
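A control feedback signal at this point can be thought of as "which terminal device should act, and what it should do". The signal fields and the intent-based rule below are assumptions introduced only to make the data flow concrete.

    # Hypothetical control-feedback-signal generation; all field names are assumptions.
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class ControlFeedbackSignal:
        target_device: str
        action: str        # e.g. "voice_reply" or "switch_state"
        payload: str

    def generate_feedback_signal(intent: str, confidence: Dict[str, float]) -> ControlFeedbackSignal:
        target = max(confidence, key=confidence.get)   # assumed preset rule
        action = "switch_state" if intent.startswith("turn_") else "voice_reply"
        return ControlFeedbackSignal(target, action, payload=intent)

    signal = generate_feedback_signal("turn_on_light", {"lamp": 0.9, "speaker": 0.2})
    print(signal.target_device, signal.action)  # lamp switch_state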
Preferably, the decision center module may include, but is not limited to, a second necessary information obtaining sub-module, a feedback signal generating sub-module, and a terminal device specifying sub-module;
preferably, the second necessary information obtaining sub-module is configured to obtain, from the knowledge base module, information about distances between each of all terminal devices and external users as the second necessary information;
preferably, the feedback signal generation submodule is configured to generate the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
preferably, the terminal device designating sub-module is configured to send the control feedback signal to the device mediation module corresponding to the terminal device with the smallest distance to the external user, so that the device mediation module can perform an adaptive feedback operation according to the control feedback signal.
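Under this variant the preset rule reduces to an argmin over the stored user-to-device distances; a short sketch (device names and distance values are illustrative):

    # Hypothetical nearest-device routing; distances are illustrative values in meters.
    from typing import Dict

    def pick_nearest_device(distances: Dict[str, float]) -> str:
        # Return the terminal device with the smallest distance to the external user.
        return min(distances, key=distances.get)

    distances = {"kitchen_speaker": 4.2, "living_room_tv": 1.5, "hall_lamp": 7.0}
    assert pick_nearest_device(distances) == "living_room_tv"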
Preferably, the decision center module may include, but is not limited to, a second necessary information obtaining sub-module, a feedback signal generating sub-module, and a terminal device specifying sub-module;
preferably, the second necessary information obtaining sub-module is used for obtaining the second necessary information from the knowledge base module;
preferably, the feedback signal generation submodule is configured to generate the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
preferably, the terminal device designating sub-module is configured to determine one of the plurality of terminal devices as a target terminal device according to a preset selection rule, and send the control feedback signal to the device mediation module corresponding to the target terminal device, so that the device mediation module can implement an adaptive feedback operation according to the control feedback signal.
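Because the preset selection rule is left open here, one natural reading is a pluggable strategy: the terminal device designating sub-module applies whichever rule is configured, such as highest confidence or nearest device. The sketch below treats the rule as an injected function; the specific rules shown are assumptions.

    # Hypothetical pluggable selection rule for the terminal-device designating sub-module.
    from typing import Callable, Dict

    SelectionRule = Callable[[Dict[str, float], Dict[str, dict]], str]

    def highest_confidence(conf: Dict[str, float], second_info: Dict[str, dict]) -> str:
        return max(conf, key=conf.get)

    def nearest_then_confidence(conf: Dict[str, float], second_info: Dict[str, dict]) -> str:
        # Prefer the closest device; break ties with the confidence score.
        return min(conf, key=lambda d: (second_info[d].get("distance", float("inf")), -conf[d]))

    def designate_target(rule: SelectionRule, conf: Dict[str, float], second_info: Dict[str, dict]) -> str:
        return rule(conf, second_info)

    conf = {"speaker": 0.8, "tv": 0.6}
    info = {"speaker": {"distance": 5.0}, "tv": {"distance": 2.0}}
    print(designate_target(highest_confidence, conf, info))       # speaker
    print(designate_target(nearest_then_confidence, conf, info))  # tv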
Preferably, the device mediation module performing an adaptive feedback operation based on the control feedback signal specifically includes:
the device mediation module can, according to the control feedback signal, instruct the corresponding terminal device to execute a voice interaction feedback operation matched with the voice signal from the outside; or,
the device mediation module can instruct the terminal device indicated by the control feedback signal to execute a working-state switching operation matched with the voice signal from the outside.
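The two feedback behaviours described above (a spoken reply versus a working-state switch) can be pictured as a simple dispatch inside the device mediation module; the action names reuse the assumed signal structure sketched earlier and are not taken from the patent.

    # Hypothetical dispatch of the control feedback signal inside a device mediation module.
    def apply_feedback(device, action: str, payload: str) -> None:
        if action == "voice_reply":
            # Voice interaction feedback matched with the external voice signal.
            device.speak(f"Handling request: {payload}")
        elif action == "switch_state":
            # Working-state switching matched with the external voice signal.
            device.set_state(payload)
        else:
            raise ValueError(f"unknown feedback action: {action}")

    class DemoDevice:
        def speak(self, text: str) -> None:
            print("TTS:", text)

        def set_state(self, state: str) -> None:
            print("state ->", state)

    apply_feedback(DemoDevice(), "switch_state", "turn_on_light")  # prints: state -> turn_on_light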
It can be seen from the above embodiments that, in the human-computer interaction system for performing multi-device autonomous decision making, the plurality of device mediation modules acquire confidence data of their corresponding terminal devices with respect to a voice signal from the outside; confidence comprehensive information is generated from the confidence data and first necessary information about the terminal devices in the knowledge base module; the decision center module then generates a control feedback signal for controlling the terminal devices from the confidence comprehensive information and second necessary information about the terminal devices in the knowledge base module, and sends the control feedback signal to one of the terminal devices according to a preset rule, so as to control that terminal device to make an adaptive response operation. In this way, the system comprehensively analyzes and judges, through the confidence analysis module, the information provided by the different terminal devices, so that when different terminal devices are awakened by the voice signal, the most appropriate terminal device to be awakened is determined and is given the corresponding feedback response role, which greatly improves the user's voice control experience. In addition, through the decision center module, the system also determines the corresponding working mode of the terminal device by combining the user's voice intention in the voice signal with the operation function information of each terminal device in the knowledge base module, thereby completing voice control functions that the existing voice interaction mode cannot realize.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A human-computer interaction system for making multi-device autonomous decisions, characterized by:
the human-computer interaction system for carrying out the multi-device autonomous decision comprises a plurality of device intermediary modules, a confidence coefficient analysis module, a decision center module and a knowledge base module; wherein,
the equipment intermediary modules are used for being in one-to-one corresponding connection with the terminal equipment, and each equipment intermediary module is used for acquiring a voice signal from the outside so as to calculate confidence data of the terminal equipment correspondingly connected with the equipment intermediary module about the voice signal;
the confidence coefficient analysis module is used for generating comprehensive confidence coefficient information about all the confidence coefficient data according to first necessary information stored in the knowledge base module, wherein the first necessary information at least comprises information about the self operation function and the working state of the terminal equipment;
the decision center module is used for sending a control feedback signal to one of the equipment intermediary modules according to a preset rule, based on second necessary information stored in the knowledge base module and the confidence comprehensive information, so as to control the terminal equipment corresponding to the one equipment intermediary module to make an adaptive response operation, wherein the second necessary information at least comprises content information of the voice signal, external user intention information contained in the voice signal, type information of the terminal equipment, name information of the terminal equipment or operation function information of the terminal equipment.
2. The human-computer interaction system for making multi-device autonomic decisions of claim 1, wherein: each of the equipment intermediary modules comprises a voice signal receiving submodule, a voice signal awakening submodule and a voice signal analyzing submodule; wherein
The voice signal receiving sub-module comprises a microphone array, and the microphone array is used for receiving voice signals from the outside;
the voice signal awakening submodule is used for awakening the corresponding terminal equipment according to the voice signal from the outside;
the voice signal analysis submodule is used for analyzing the voice signal from the outside so as to obtain the voice paraphrasing information of the voice signal from the outside.
3. The human-computer interaction system for making multi-device autonomic decisions of claim 2, wherein:
the wake-up operation of the voice signal awakening sub-module specifically comprises the steps that the voice signal awakening sub-module extracts a feature keyword from the voice signal from the outside and matches the feature keyword against the awakening word to which the corresponding terminal equipment belongs; wherein,
if the characteristic keyword is matched with the awakening word, the voice signal awakening sub-module executes awakening operation on corresponding terminal equipment;
and if the characteristic keyword is not matched with the awakening word, the voice signal awakening sub-module does not execute awakening operation on the corresponding terminal equipment.
4. The human-computer interaction system for making multi-device autonomic decisions of claim 2, wherein: each of the plurality of equipment intermediary modules comprises a wakeup word detection engine submodule and a confidence coefficient calculation submodule;
the awakening word detection engine submodule is used for generating awakening word detection information about the terminal equipment in the voice signal from the outside;
the confidence coefficient calculation submodule is used for generating confidence coefficient data corresponding to each of all the terminal devices according to the voice paraphrasing information, the device function information corresponding to each of the plurality of terminal devices and the awakening word detection information;
each of the plurality of equipment intermediary modules comprises an equipment function information acquisition submodule; wherein,
the device function information acquisition submodule is used for acquiring the device function information corresponding to each terminal device;
the equipment function information acquisition submodule at least comprises an audio playing function determination unit, a video playing function determination unit, an illumination function determination unit, a temperature control function determination unit, a motion function determination unit or a cleaning function determination unit;
the audio playing function determining unit is used for determining whether the terminal equipment has an audio playing function or not so as to form part of the equipment function information;
the video playing function determining unit is used for determining whether the terminal equipment has a video playing function or not so as to form part of the equipment function information;
the lighting function determining unit is used for determining whether the terminal equipment has a lighting function or not so as to form part of the equipment function information;
the temperature control function determining unit is used for determining whether the terminal equipment has a temperature control function or not so as to form part of the equipment function information;
the motion function determining unit is used for determining whether the terminal equipment has a motion displacement function or not so as to form part of the equipment function information;
the cleaning function determining unit is used for determining whether the terminal equipment has a cleaning capability or not, so as to form part of the equipment function information.
5. The human-computer interaction system for making multi-device autonomic decisions of claim 1, wherein: the confidence coefficient analysis module comprises a first clock signal generation submodule, a confidence coefficient data receiving submodule, a first necessary information acquisition submodule and a confidence coefficient comprehensive calculation submodule; wherein the content of the first and second substances,
the first clock signal generation submodule is used for generating a first clock signal;
the confidence data receiving submodule is used for receiving the confidence data from all the equipment intermediary modules according to the first clock signal;
the first necessary information acquisition sub-module is used for acquiring the first necessary information corresponding to all the terminal devices from the knowledge base module;
and the confidence comprehensive calculation submodule is used for calculating to obtain the confidence comprehensive information according to the first necessary information and the confidence data.
6. The human-computer interaction system for making multi-device autonomic decisions of claim 1, wherein: the decision center module comprises a second necessary information acquisition submodule and a feedback signal generation submodule; wherein
The second necessary information acquisition submodule is used for acquiring the second necessary information corresponding to all the terminal devices from the knowledge base module;
and the feedback signal generation submodule is used for generating the control feedback signal according to the second necessary information and the confidence degree comprehensive information.
7. The human-computer interaction system for making multi-device autonomic decisions of claim 1, wherein: the decision center module comprises a second necessary information acquisition sub-module, a feedback signal generation sub-module and a terminal equipment designation sub-module; wherein the content of the first and second substances,
the second necessary information acquisition sub-module is used for acquiring distance information between each terminal device and an external user from the knowledge base module to serve as the second necessary information;
the feedback signal generation submodule is used for generating the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
the terminal device appointing submodule is used for sending the control feedback signal to the device mediation module corresponding to the terminal device with the minimum distance to the outside user, so that the device mediation module can implement adaptive feedback operation according to the control feedback signal.
8. The human-computer interaction system for making multi-device autonomic decisions of claim 1, wherein: the decision center module comprises a second necessary information acquisition sub-module, a feedback signal generation sub-module and a terminal equipment designation sub-module; wherein the content of the first and second substances,
the second necessary information acquisition submodule is used for acquiring the second necessary information from the knowledge base module;
the feedback signal generation submodule is used for generating the control feedback signal according to the second necessary information and the confidence degree comprehensive information;
the terminal equipment appointing submodule is used for determining one of a plurality of terminal equipment as target terminal equipment according to a preset selection rule and sending the control feedback signal to an equipment mediation module corresponding to the target terminal equipment, so that the equipment mediation module can implement adaptive feedback operation according to the control feedback signal.
9. The human-computer interaction system for making multi-device autonomic decisions as claimed in claim 7 or 8, wherein: the device mediation module performing an adaptive feedback operation based on the control feedback signal may specifically include:
the equipment mediation module can, according to the control feedback signal, instruct the corresponding terminal equipment to execute a voice interaction feedback operation matched with the voice signal from the outside; or the equipment mediation module can instruct the terminal equipment indicated by the control feedback signal to execute a working-state switching operation matched with the voice signal from the outside.
CN201910323610.8A 2019-04-22 2019-04-22 Man-machine interaction system for multi-equipment autonomous decision making Active CN110136707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910323610.8A CN110136707B (en) 2019-04-22 2019-04-22 Man-machine interaction system for multi-equipment autonomous decision making

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910323610.8A CN110136707B (en) 2019-04-22 2019-04-22 Man-machine interaction system for multi-equipment autonomous decision making

Publications (2)

Publication Number Publication Date
CN110136707A CN110136707A (en) 2019-08-16
CN110136707B true CN110136707B (en) 2021-03-02

Family

ID=67570731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910323610.8A Active CN110136707B (en) 2019-04-22 2019-04-22 Man-machine interaction system for multi-equipment autonomous decision making

Country Status (1)

Country Link
CN (1) CN110136707B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103578474A (en) * 2013-10-25 2014-02-12 小米科技有限责任公司 Method, device and equipment for voice control
CN104598192A (en) * 2014-12-29 2015-05-06 联想(北京)有限公司 Information processing method and electronic equipment
CN106030699A (en) * 2014-10-09 2016-10-12 谷歌公司 Hotword detection on multiple devices
CN107004410A (en) * 2014-10-01 2017-08-01 西布雷恩公司 Voice and connecting platform
CN109215649A (en) * 2018-09-12 2019-01-15 北京盛世辉科技有限公司 A kind of remote control device
CN109243431A (en) * 2017-07-04 2019-01-18 阿里巴巴集团控股有限公司 A kind of processing method, control method, recognition methods and its device and electronic equipment

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945672B (en) * 2012-09-29 2013-10-16 深圳市国华识别科技开发有限公司 Voice control system for multimedia equipment, and voice control method
CN105529028B (en) * 2015-12-09 2019-07-30 百度在线网络技术(北京)有限公司 Speech analysis method and apparatus
CN107657949A (en) * 2017-04-14 2018-02-02 深圳市人马互动科技有限公司 The acquisition methods and device of game data
CN107316643B (en) * 2017-07-04 2021-08-17 科大讯飞股份有限公司 Voice interaction method and device
CN107240398B (en) * 2017-07-04 2020-11-17 科大讯飞股份有限公司 Intelligent voice interaction method and device
US11282528B2 (en) * 2017-08-14 2022-03-22 Lenovo (Singapore) Pte. Ltd. Digital assistant activation based on wake word association
JP6844472B2 (en) * 2017-08-24 2021-03-17 トヨタ自動車株式会社 Information processing device
CN108337362A (en) * 2017-12-26 2018-07-27 百度在线网络技术(北京)有限公司 Voice interactive method, device, equipment and storage medium
CN108683574B (en) * 2018-04-13 2020-12-08 青岛海信智慧家居系统股份有限公司 Equipment control method, server and intelligent home system
CN108847219B (en) * 2018-05-25 2020-12-25 台州智奥通信设备有限公司 Awakening word preset confidence threshold adjusting method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103578474A (en) * 2013-10-25 2014-02-12 小米科技有限责任公司 Method, device and equipment for voice control
CN107004410A (en) * 2014-10-01 2017-08-01 西布雷恩公司 Voice and connecting platform
CN106030699A (en) * 2014-10-09 2016-10-12 谷歌公司 Hotword detection on multiple devices
CN104598192A (en) * 2014-12-29 2015-05-06 联想(北京)有限公司 Information processing method and electronic equipment
CN109243431A (en) * 2017-07-04 2019-01-18 阿里巴巴集团控股有限公司 A kind of processing method, control method, recognition methods and its device and electronic equipment
CN109215649A (en) * 2018-09-12 2019-01-15 北京盛世辉科技有限公司 A kind of remote control device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Design of a Collaborative Human-Computer Interaction Mechanism; Liu Yaohua et al.; Computer Engineering and Design; 2014-02-28; full text *

Also Published As

Publication number Publication date
CN110136707A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN107644642B (en) Semantic recognition method and device, storage medium and electronic equipment
CN107360327B (en) Speech recognition method, apparatus and storage medium
CN111223497B (en) Nearby wake-up method and device for terminal, computing equipment and storage medium
CN108735209B (en) Wake-up word binding method, intelligent device and storage medium
CN110472130B (en) Reducing the need for manual start/end points and trigger phrases
CN108920156A (en) Application program prediction model method for building up, device, storage medium and terminal
CN110047485B (en) Method and apparatus for recognizing wake-up word, medium, and device
CN104090652A (en) Voice input method and device
CN108055405B (en) Terminal and method for awakening same
CN110827818A (en) Control method, device, equipment and storage medium of intelligent voice equipment
CN110890093A (en) Intelligent device awakening method and device based on artificial intelligence
CN112735418B (en) Voice interaction processing method, device, terminal and storage medium
US20210151039A1 (en) Method and apparatus for speech interaction, and computer storage medium
CN108055617B (en) Microphone awakening method and device, terminal equipment and storage medium
US20200125603A1 (en) Electronic device and system which provides service based on voice recognition
WO2021212388A1 (en) Interactive communication implementation method and device, and storage medium
CN112634897A (en) Equipment awakening method and device, storage medium and electronic device
CN115810356A (en) Voice control method, device, storage medium and electronic equipment
CN114360510A (en) Voice recognition method and related device
CN110136707B (en) Man-machine interaction system for multi-equipment autonomous decision making
CN109901810A (en) A kind of man-machine interaction method and device for intelligent terminal
CN109377993A (en) Intelligent voice system and its voice awakening method and intelligent sound equipment
WO2023246558A1 (en) Semantic understanding method and apparatus, and medium and device
WO2023246036A1 (en) Control method and apparatus for speech recognition device, and electronic device and storage medium
WO2018023523A1 (en) Motion and emotion recognizing home control system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 101, 1st floor, building 1, Xisanqi building materials City, Haidian District, Beijing 100096

Applicant after: Yunzhisheng Intelligent Technology Co.,Ltd.

Address before: No.101, 1st floor, building 1, Xisanqi building materials City, Haidian District, Beijing

Applicant before: BEIJING UNISOUND INFORMATION TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant