CN117086871B - Open robot interaction control system - Google Patents

Open robot interaction control system

Info

Publication number
CN117086871B
CN117086871B (application CN202311106704.2A)
Authority
CN
China
Prior art keywords
interaction
module
information
intensity
signal
Prior art date
Legal status
Active
Application number
CN202311106704.2A
Other languages
Chinese (zh)
Other versions
CN117086871A (en)
Inventor
陈桥 (Chen Qiao)
吴平志 (Wu Pingzhi)
Current Assignee
Hefei Zhongke Shengu Technology Development Co., Ltd.
Original Assignee
Hefei Zhongke Shengu Technology Development Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hefei Zhongke Shengu Technology Development Co., Ltd.
Priority to CN202311106704.2A
Publication of CN117086871A
Application granted
Publication of CN117086871B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses an open robot interaction control system, which relates to the technical field of robots and comprises a knowledge graph construction module, an interaction triggering module, a gain adjustment module and an object recognition module. The knowledge graph construction module analyzes interaction history data to obtain a plurality of interaction objects and the relationships among them, and establishes an interaction knowledge graph based on these objects and relationships. The interaction triggering module monitors the sound intensity of the external environment in real time through a microphone to judge whether the volume satisfies the start-up state. When the start-up state is satisfied, the gain adjustment module performs gain adjustment on the input interaction information, improving the clarity of the interaction information and thereby the accuracy of data recognition. The object recognition module recognizes and analyzes the interaction information to determine the interaction intention, and matches it against the interaction objects in the interaction knowledge graph to determine the target object. The control center controls the robot, via a control signal, to execute the interaction intention on the target object. Human-machine interaction efficiency is thereby improved.

Description

Open robot interaction control system
Technical Field
The invention relates to the technical field of robots, and in particular to an open robot interaction control system.
Background
With the development of science and technology, new types of robots are continuously introduced, the application range of robots keeps expanding from professional fields into people's daily work and life, and robots play an increasingly important role in intelligent control. What people require of robots is no longer simple, repeatable mechanical action but intelligent robots capable of anthropomorphic answers, autonomy, and interaction with other robots; human-machine interaction has therefore become an important factor determining the development of intelligent robots.
In the prior art, various functional units are mainly arranged in the robot body; the robot receives an instruction sent by a user through means such as remote control and completes the corresponding task according to the instruction content. However, the range over which the voice pickup module in the robot can pick up a voice signal is limited, the voice recognition sensitivity is low, and topic information is determined mainly according to topic popularity, so neither the accuracy of input data recognition nor the fit of the topic information can be ensured, which reduces interaction efficiency and accuracy. Based on these defects, the invention provides an open robot interaction control system.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides an open robot interaction control system.
To achieve the above objective, an embodiment according to a first aspect of the present invention provides an open robot interaction control system, which includes a knowledge graph construction module, an interaction triggering module, a gain adjustment module, an object recognition module, and an interaction tracking module;
the knowledge graph construction module is used for analyzing the interaction history data to obtain a plurality of interaction objects and the relationship among the interaction objects; establishing an interaction knowledge graph based on the interaction objects and the relations among the interaction objects;
the interaction triggering module is used for monitoring the sound intensity of the external environment in real time through a microphone arranged in the intelligent terminal, and calculating to obtain an energy standard index Cs; judging whether the volume meets the starting state or not; when the starting state is met, the microphone converts the acoustic signal into an electric signal, and the NAND gate realizes the high-low level conversion of the logic circuit;
the interactive triggering module is connected with the gain adjusting module; when the starting state is met, the gain adjusting module is used for carrying out gain adjustment on the input interaction information and adjusting the vowel intensity of the interaction signal to be between an intensity valley value QG and an intensity peak value QF;
the gain adjusting module is used for sending the interaction information after gain adjustment to the control center; the control center is used for driving the object recognition module to recognize and analyze the interaction information to determine the interaction intention; matching with a plurality of interaction objects in the interaction knowledge graph to determine a target object;
the object recognition module is used for feeding back the interaction intention and the target object to the intelligent terminal, and a user sends a confirmation signal to the control center through the intelligent terminal; and the control center generates a control signal according to the confirmation signal or directly according to the interaction intention, controls the robot to execute the interaction intention aiming at the target object through the control signal, and feeds back an execution result to the intelligent terminal.
Further, the specific steps of the interaction triggering module include:
converting the sound signal into a digital signal, collecting the periodic energy value of the digital signal at a preset interval, and marking it as NTi; the periodic energy value is obtained by accumulating the energy of multiple consecutively received bits of data and averaging;
establishing a graph of the periodic energy value NTi over time and comparing NTi with a preset energy threshold; if NTi is larger than the preset energy threshold, the corresponding curve segment is cut out of the graph and marked as a standard-reaching curve segment;
counting the number Pz of standard-reaching curve segments within a preset time period; integrating all standard-reaching curve segments over time to obtain the standard-reaching reference area M1; calculating the energy standard-reaching index Cs by the formula Cs = Pz × a1 + M1 × a2, wherein a1 and a2 are preset coefficient factors;
comparing the energy standard-reaching index Cs with the set threshold; if Cs is greater than or equal to the set threshold, interaction information is being input rather than mere noise, and the start-up state is satisfied;
if Cs is smaller than the set threshold, no interaction information is being input and only noise is received, and the start-up state is not satisfied.
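As an illustrative sketch only, the start-up decision above could be computed as follows; the window size, the coefficient values a1 and a2, and both thresholds are assumptions for demonstration, not values fixed by the invention:

import numpy as np

def periodic_energy(samples, window=256):
    # NTi series: average energy over consecutive fixed-size windows
    n = len(samples) // window
    frames = np.asarray(samples[:n * window], dtype=float).reshape(n, window)
    return (frames ** 2).mean(axis=1)

def energy_index(nti, energy_threshold, a1=1.0, a2=0.5):
    # Cs = Pz × a1 + M1 × a2 over one evaluation period
    nti = np.asarray(nti, dtype=float)
    above = nti > energy_threshold
    if above.size == 0:
        return 0.0
    # Pz: number of contiguous standard-reaching curve segments (rising edges)
    pz = int(above[0]) + int(np.count_nonzero(above[1:] & ~above[:-1]))
    # M1: discrete time-integral (area) of the standard-reaching segments
    m1 = float(nti[above].sum())
    return pz * a1 + m1 * a2

def start_up_satisfied(nti, energy_threshold=0.01, cs_threshold=5.0):
    # the start-up state is satisfied when Cs reaches the set threshold
    return energy_index(nti, energy_threshold) >= cs_threshold

Counting rising edges approximates Pz because each contiguous above-threshold run corresponds to one marked curve segment.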
Further, the specific steps of the gain adjustment module are as follows:
s1: converting the input interaction information into a digital signal and filtering; acquiring the intensity information of each vowel in the interactive signal to obtain a vowel intensity information group Qm;
s2: calculating an intensity peak QF and an intensity valley QG from the received m vowel intensity values, where m is greater than or equal to 7; the specific steps are:
calculating the average value Qz of the m vowel intensity values according to the average value formula;
traversing the vowel intensity information group to obtain the maximum and minimum vowel intensities, marked QKmax and QKmin respectively; calculating the intensity peak QF by the formula QF = QKmax + (QKmax - Qz) × f, wherein f is a preset equalization coefficient;
calculating the intensity valley QG by the formula QG = QKmin - (Qz - QKmin) × f;
s3: acquiring the (m+1)-th vowel intensity value and marking it Q(m+1); if QG < Q(m+1) < QF, generating a normal signal; otherwise, generating an adjusting signal;
after receiving the adjusting signal, the gain adjustment module adjusts the gain of the interaction signal through a programmable gain amplifier circuit so that the vowel intensity of the interaction signal falls between the intensity valley QG and the intensity peak QF; the process then repeats for each subsequent vowel.
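A minimal sketch of steps s1 to s3 follows, assuming plain Python and an illustrative equalization coefficient f; how vowel intensities are extracted from the signal is outside the scope of the sketch:

def intensity_bounds(q, f=0.2):
    # q: the first m vowel intensity values (m >= 7 per the text above)
    qz = sum(q) / len(q)              # average intensity Qz
    qk_max, qk_min = max(q), min(q)   # traversal for the extremes
    qf = qk_max + (qk_max - qz) * f   # intensity peak QF
    qg = qk_min - (qz - qk_min) * f   # intensity valley QG
    return qg, qf

def classify_next(q_next, qg, qf):
    # normal signal if QG < Q(m+1) < QF, otherwise an adjusting signal
    return "normal" if qg < q_next < qf else "adjust"

For example, intensity_bounds([3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3]) yields the (QG, QF) window against which the eighth vowel intensity is then checked.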
Further, the specific steps of the knowledge graph construction module include:
extracting the stored interaction history data; the interaction history data comprise interaction data of the robot and a plurality of interaction objects and interaction data among a plurality of interaction objects;
identifying a robot or a plurality of intelligent devices in the interaction history data as an interaction object; identifying interaction relations in the interaction history data as relations among interaction objects;
constructing the interaction knowledge graph through a knowledge graph construction technique; updating the interaction knowledge graph at regular intervals according to newly collected interaction data, whose content attributes are consistent with those of the interaction history data.
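As a sketch of one way such a graph could be represented, using (object, relation, object) triples and the networkx library; both choices are illustrative assumptions, not prescribed by the invention:

import networkx as nx

def build_interaction_graph(history):
    # history: (source object, relation, target object) triples extracted
    # from the stored interaction history data
    g = nx.DiGraph()
    for src, relation, dst in history:
        g.add_edge(src, dst, relation=relation)
    return g

# e.g. the unidirectional fault-detection relation described in the
# embodiment below, from the robot (or a device) to another device:
graph = build_interaction_graph([
    ("robot", "fault_detection", "device_A"),
    ("device_B", "fault_detection", "device_C"),
])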
Further, the specific identification steps of the object recognition module include:
analyzing the workflow of the interactive object, and extracting the working content of the interactive object from the workflow;
matching the working content with the interaction intention to obtain a characteristic matching coefficient; specifically: extracting keywords from the working content and the interaction intention and computing their degree of overlap, namely the characteristic matching coefficient;
and marking the interactive object with the largest characteristic matching coefficient as a target object.
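A minimal sketch of the characteristic matching coefficient, assuming whitespace tokenization and a Jaccard-style overlap measure; the patent does not fix the exact overlap formula, so both are assumptions:

def feature_match(work_content, intent):
    # keyword overlap degree between working content and interaction intention
    kw_work = set(work_content.lower().split())
    kw_intent = set(intent.lower().split())
    if not kw_work or not kw_intent:
        return 0.0
    return len(kw_work & kw_intent) / len(kw_work | kw_intent)

def pick_target(objects, intent):
    # objects: {interaction object name: its working content}
    # the object with the largest coefficient is marked as the target object
    return max(objects, key=lambda name: feature_match(objects[name], intent))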
Further, the system also comprises an interaction tracking module; while the robot executes an interaction task, the interaction tracking module is used for further judging whether an interrupt command exists;
if no interrupt command currently exists, the task continues until it is completed;
if an interrupt command currently exists, the task is suspended and the user's questions are answered, while the task's current node is saved; after the user's inquiry is finished, the task resumes.
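A sketch of this pause-and-resume behaviour; the TaskState structure and the callback signatures are hypothetical names introduced for illustration:

from dataclasses import dataclass

@dataclass
class TaskState:
    task_id: str
    node: int = 0   # saved progress node within the task

def run_task(state, total_steps, has_interrupt, answer_queries, do_step):
    while state.node < total_steps:
        if has_interrupt():
            # suspend the task, keep its node, and answer the user's queries
            answer_queries()        # blocks until the inquiry is finished
            continue                # then resume from the saved node
        do_step(state.node)
        state.node += 1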
Further, the system also comprises an image acquisition module; when an interrupt command exists, the robot determines the identity of the user through the image acquisition module and the interaction triggering module;
facial image information of the user is extracted from the acquired image information and analyzed to determine the user's identity; voice information of the user is extracted from the collected sound, and voiceprint recognition is performed on it to determine the user's identity; an appropriate form of address is then added to the dialogue.
Further, the facial image information of the user extracted from the acquired images is analyzed to determine the user's emotion, and voiceprint recognition is performed on the user's voice information extracted from the collected sound to determine the user's emotion.
Compared with the prior art, the invention has the beneficial effects that:
1. The knowledge graph construction module analyzes the interaction history data to obtain a plurality of interaction objects and the relationships among them, and establishes an interaction knowledge graph based on these objects and relationships. The interaction triggering module monitors the sound intensity of the external environment in real time through a microphone arranged in the intelligent terminal and calculates the energy standard-reaching index Cs to judge whether the volume satisfies the start-up state. When the start-up state is satisfied, the gain adjustment module performs gain adjustment on the input interaction information and adjusts the vowel intensity of the interaction signal to between the intensity valley QG and the intensity peak QF, improving the clarity of the interaction information and thereby the accuracy of data recognition;
2. The object recognition module recognizes and analyzes the interaction information to determine the interaction intention and matches it against the interaction objects in the interaction knowledge graph to determine the target object: it first analyzes the workflow of each interaction object and extracts the object's working content from the workflow, then matches the working content against the interaction intention to obtain a characteristic matching coefficient, and finally marks the interaction object with the largest characteristic matching coefficient as the target object and feeds the interaction intention and target object back to the intelligent terminal. The control center controls the robot to execute the interaction intention on the target object according to the confirmation from the intelligent terminal. The interaction intention and target object can thus be determined accurately, ensuring smooth execution of the interaction task and improving the efficiency of human-machine interaction.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system block diagram of an open robot interactive control system according to the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, an open robot interaction control system includes a knowledge graph construction module, an interaction triggering module, a gain adjustment module, a control center, an image acquisition module, an object recognition module and an interaction tracking module;
the knowledge graph construction module is used for analyzing the interaction history data to obtain a plurality of interaction objects and the relationship among the interaction objects; establishing an interaction knowledge graph based on the interaction objects and the relations among the interaction objects; the method specifically comprises the following steps:
extracting stored interaction history data; the interaction history data comprises interaction data of the robot and a plurality of interaction objects and interaction data among a plurality of interaction objects;
In this embodiment, the interaction history data stores the working content of the whole system and the corresponding intelligent devices. The robot and the intelligent devices involved in the interaction history data are analyzed as interaction objects, and the interactions between them are extracted from the interaction history data as relationships. For example, when the robot performs fault detection on an intelligent device, the robot and that device are interaction objects, and the fault detection can be treated as a unidirectional relationship from the robot to the device. It should be noted that relationships may also exist between the intelligent devices themselves: if the function of one intelligent device is to detect faults in other intelligent devices, a unidirectional relationship exists from that device to the others;
identifying robots or a plurality of intelligent devices in the interaction history data as interaction objects; identifying interaction relations in the interaction history data as relations between the interaction objects;
constructing and obtaining an interaction knowledge graph through a knowledge graph construction technology; updating the interaction knowledge graph according to the updated interaction data at regular intervals; wherein, the content attribute of the interaction data is consistent with the content attribute of the interaction history data;
The interaction triggering module monitors the sound intensity of the external environment in real time through a microphone arranged in the intelligent terminal and judges whether the volume satisfies the start-up state; the specific steps are as follows:
converting the sound signal into a digital signal, collecting the periodic energy value of the digital signal at a preset interval, and marking it as NTi; the periodic energy value is obtained by accumulating the energy of multiple consecutively received bits of data and averaging;
establishing a graph of the periodic energy value NTi over time and comparing NTi with a preset energy threshold; if NTi is larger than the preset energy threshold, the corresponding curve segment is cut out of the graph and marked as a standard-reaching curve segment;
counting the number Pz of standard-reaching curve segments within a preset time period; integrating all standard-reaching curve segments over time to obtain the standard-reaching reference area M1; calculating the energy standard-reaching index Cs by the formula Cs = Pz × a1 + M1 × a2, wherein a1 and a2 are preset coefficient factors;
comparing the energy standard-reaching index Cs with the set threshold; if Cs is greater than or equal to the set threshold, interaction information is being input rather than mere noise, and the start-up state is satisfied;
if Cs is smaller than the set threshold, no interaction information is being input and only noise is received, and the start-up state is not satisfied; when the start-up state is satisfied, the microphone converts the acoustic signal into an electric signal, and a NAND gate realizes the high-low level conversion of the logic circuit;
the interaction triggering module is connected with the gain adjustment module; when the start-up state is satisfied, the gain adjustment module performs gain adjustment on the input interaction information; the specific steps are as follows:
s1: converting the input interaction information into a digital signal and filtering; acquiring the intensity information of each vowel in the interactive signal to obtain a vowel intensity information group Qm;
s2: calculating an intensity peak QF and an intensity valley QG from the received m vowel intensity values, where m is greater than or equal to 7; the specific steps are:
calculating the average value Qz of the m vowel intensity values according to the average value formula;
traversing the vowel intensity information group to obtain the maximum and minimum vowel intensities, marked QKmax and QKmin respectively; calculating the intensity peak QF by the formula QF = QKmax + (QKmax - Qz) × f, wherein f is a preset equalization coefficient;
calculating the intensity valley QG by the formula QG = QKmin - (Qz - QKmin) × f;
s3: acquiring the (m+1)-th vowel intensity value and marking it Q(m+1); if QG < Q(m+1) < QF, generating a normal signal; otherwise, generating an adjusting signal;
after receiving the adjusting signal, the gain adjustment module adjusts the gain of the interaction signal through a programmable gain amplifier circuit so that the vowel intensity of the interaction signal falls between the intensity valley QG and the intensity peak QF, and so on for each subsequent vowel; the clarity of the interaction information is thereby improved, and the accuracy of data recognition is further improved;
the gain adjusting module is used for sending the interaction information after gain adjustment to the control center; the control center is used for driving the object recognition module to recognize and analyze the interaction information to determine the interaction intention; matching with a plurality of interaction objects in the interaction knowledge graph to determine a target object;
the specific recognition steps of the object recognition module comprise:
analyzing the workflow of the interactive object, and extracting the working content of the interactive object from the workflow;
matching the working content with the interaction intention to obtain a characteristic matching coefficient; specifically: extracting keywords from the working content and the interaction intention and computing their degree of overlap, namely the characteristic matching coefficient;
marking the interactive object with the largest characteristic matching coefficient as a target object;
the object recognition module is used for feeding back the interaction intention and the target object to the intelligent terminal, and a user sends a confirmation signal to the control center through the intelligent terminal;
the control center generates a control signal according to the confirmation signal or directly according to the interaction intention, controls the robot to execute the interaction intention aiming at the target object through the control signal, and feeds back an execution result to the intelligent terminal;
In the invention, an interaction knowledge graph is first established from the interaction history data of each robot; the interaction information sent by the intelligent terminal is then recognized and matched to determine the interaction intention; the target object corresponding to the interaction intention is obtained by matching against the interaction knowledge graph, and a control signal is generated according to the interaction intention so that the robot executes the interaction intention on the target object; the invention can accurately determine the interaction intention and the target object, ensure that the interaction task is successfully executed, and improve the efficiency of human-machine interaction;
While the robot executes an interaction task, the interaction tracking module is used for further judging whether an interrupt command exists; if no interrupt command currently exists, the task continues until it is completed;
if an interrupt command currently exists, the task is suspended and the user's questions are answered, while the task's current node is saved; after the user's inquiry is finished, the task resumes;
In a further technical solution, when an interrupt command exists, the robot determines the identity of the user through the image acquisition module and the interaction triggering module;
facial image information of the user is extracted from the acquired image information and analyzed to determine the user's identity; voice information of the user is extracted from the collected sound, and voiceprint recognition is performed on it to determine the user's identity; an appropriate form of address is added to the dialogue;
the facial image information extracted from the acquired images is also analyzed to determine the user's emotion, and voiceprint recognition is performed on the voice information extracted from the collected sound to determine the user's emotion.
All of the above formulas are dimensionless formulas operating on numerical values; they were obtained by acquiring a large amount of data and performing software simulation to approximate the actual situation as closely as possible, and the preset parameters and preset thresholds in the formulas are set by a person skilled in the art according to the actual situation or obtained by simulation over a large amount of data.
The working principle of the invention is as follows:
When the open robot interaction control system operates, the knowledge graph construction module analyzes the interaction history data to obtain a plurality of interaction objects and the relationships among them, and establishes an interaction knowledge graph based on these objects and relationships; the interaction triggering module monitors the sound intensity of the external environment in real time through a microphone arranged in the intelligent terminal and calculates the energy standard-reaching index Cs to judge whether the volume satisfies the start-up state; when the start-up state is satisfied, the gain adjustment module performs gain adjustment on the input interaction information and adjusts the vowel intensity of the interaction signal to between the intensity valley QG and the intensity peak QF, improving the clarity of the interaction information and thereby the accuracy of data recognition;
The object recognition module recognizes and analyzes the interaction information to determine the interaction intention; it analyzes the workflow of each interaction object, extracts the object's working content from the workflow, matches the working content against the interaction intention to obtain a characteristic matching coefficient, marks the interaction object with the largest characteristic matching coefficient as the target object, and feeds the interaction intention and target object back to the intelligent terminal; the control center controls the robot to execute the interaction intention on the target object according to the confirmation from the intelligent terminal; the interaction intention and target object can thus be determined accurately, ensuring smooth execution of the interaction task and improving the efficiency of human-machine interaction.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (5)

1. The open robot interaction control system is characterized by comprising a knowledge graph construction module, an interaction triggering module, a gain adjustment module, an object identification module and an interaction tracking module;
the knowledge graph construction module is used for analyzing the interaction history data to obtain a plurality of interaction objects and the relationship among the interaction objects; establishing an interaction knowledge graph based on the interaction objects and the relations among the interaction objects; the specific working steps comprise:
extracting the stored interaction history data; the interaction history data comprise interaction data of the robot and a plurality of interaction objects and interaction data among a plurality of interaction objects;
identifying a robot or a plurality of intelligent devices in the interaction history data as an interaction object; identifying interaction relations in the interaction history data as relations among interaction objects;
constructing and acquiring the interaction knowledge graph through a knowledge graph construction technology;
updating the interaction knowledge graph according to the updated interaction data at regular intervals; wherein, the content attribute of the interaction data is consistent with the content attribute of the interaction history data;
the interaction triggering module is used for monitoring the sound intensity of an external environment in real time through a microphone arranged in the intelligent terminal and calculating an energy standard-reaching index Cs to judge whether the volume satisfies the start-up state; the specific working steps comprise:
converting the sound signal into a digital signal, collecting the periodic energy value of the digital signal at a preset interval, and marking it as NTi; the periodic energy value is obtained by accumulating the energy of multiple consecutively received bits of data and averaging;
establishing a graph of the periodic energy value NTi over time and comparing NTi with a preset energy threshold; if NTi is larger than the preset energy threshold, the corresponding curve segment is cut out of the graph and marked as a standard-reaching curve segment;
counting the number Pz of standard-reaching curve segments within a preset time period; integrating all standard-reaching curve segments over time to obtain the standard-reaching reference area M1; calculating the energy standard-reaching index Cs by the formula Cs = Pz × a1 + M1 × a2, wherein a1 and a2 are preset coefficient factors;
comparing the energy standard-reaching index Cs with the set threshold; if Cs is greater than or equal to the set threshold, interaction information is being input rather than mere noise, and the start-up state is satisfied;
if Cs is smaller than the set threshold, no interaction information is being input and only noise is received, and the start-up state is not satisfied;
when the start-up state is satisfied, the microphone converts the acoustic signal into an electric signal, and a NAND gate realizes the high-low level conversion of the logic circuit;
the interaction triggering module is connected with the gain adjustment module; when the start-up state is satisfied, the gain adjustment module is used for performing gain adjustment on the input interaction information and adjusting the vowel intensity of the interaction signal to between the intensity valley QG and the intensity peak QF;
the gain adjusting module is used for sending the interaction information after gain adjustment to the control center; the control center is used for driving the object recognition module to recognize and analyze the interaction information to determine the interaction intention; matching with a plurality of interaction objects in the interaction knowledge graph to determine a target object;
the specific identification steps of the object identification module comprise:
analyzing the workflow of the interactive object, and extracting the working content of the interactive object from the workflow;
matching the working content with the interaction intention to obtain a characteristic matching coefficient; specifically:
extracting keywords from the working content and the interaction intention and computing their degree of overlap, namely the characteristic matching coefficient; marking the interaction object with the largest characteristic matching coefficient as the target object;
the object recognition module is used for feeding back the interaction intention and the target object to the intelligent terminal, and a user sends a confirmation signal to the control center through the intelligent terminal;
and the control center generates a control signal according to the confirmation signal or directly according to the interaction intention, controls the robot to execute the interaction intention aiming at the target object through the control signal, and feeds back an execution result to the intelligent terminal.
2. The open robot interaction control system according to claim 1, characterized in that the specific steps of the gain adjustment module comprise:
s1: converting the input interaction information into a digital signal and filtering; acquiring the intensity information of each vowel in the interactive signal to obtain a vowel intensity information group Qm;
s2: calculating an intensity peak QF and an intensity valley QG from the received m vowel intensity values, where m is greater than or equal to 7; the specific steps are:
calculating the average value Qz of the m vowel intensity values according to the average value formula;
traversing the vowel intensity information group to obtain the maximum and minimum vowel intensities, marked QKmax and QKmin respectively; calculating the intensity peak QF by the formula QF = QKmax + (QKmax - Qz) × f, wherein f is a preset equalization coefficient;
calculating the intensity valley QG by the formula QG = QKmin - (Qz - QKmin) × f;
s3: acquiring the (m+1)-th vowel intensity value and marking it Q(m+1); if QG < Q(m+1) < QF, generating a normal signal; otherwise, generating an adjusting signal;
after receiving the adjusting signal, the gain adjustment module adjusts the gain of the interaction signal through a programmable gain amplifier circuit so that the vowel intensity of the interaction signal falls between the intensity valley QG and the intensity peak QF, and so on for each subsequent vowel.
3. The open robot interaction control system according to claim 1, characterized by further comprising an interaction tracking module; while the robot executes an interaction task, the interaction tracking module is used for further judging whether an interrupt command exists;
if no interrupt command currently exists, the task continues until it is completed;
if an interrupt command currently exists, the task is suspended and the user's questions are answered, while the task's current node is saved; after the user's inquiry is finished, the task resumes.
4. The open robot interaction control system according to claim 3, characterized by further comprising an image acquisition module; when an interrupt command exists, the robot determines the identity of the user through the image acquisition module and the interaction triggering module;
facial image information of the user is extracted from the acquired image information and analyzed to determine the user's identity; voice information of the user is extracted from the collected sound, and voiceprint recognition is performed on it to determine the user's identity; an appropriate form of address is added to the dialogue.
5. The open robot interaction control system according to claim 4, characterized in that facial image information of the user is extracted from the collected image information and analyzed to determine the user's emotion; and voice information of the user is extracted from the collected sound, and voiceprint recognition is performed on it to determine the user's emotion.
CN202311106704.2A, filed 2023-08-30 (priority 2023-08-30): Open robot interaction control system; granted as CN117086871B (Active)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311106704.2A CN117086871B (en) 2023-08-30 2023-08-30 Open robot interaction control system


Publications (2)

Publication Number Publication Date
CN117086871A CN117086871A (en) 2023-11-21
CN117086871B true CN117086871B (en) 2024-02-06

Family

ID=88780120


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105845135A (en) * 2015-01-12 2016-08-10 芋头科技(杭州)有限公司 Sound recognition system and method for robot system
CN111104523A (en) * 2019-12-20 2020-05-05 西南交通大学 Audio-visual cooperative learning robot based on voice assistance and learning method
CN111581348A (en) * 2020-04-28 2020-08-25 辽宁工程技术大学 Query analysis system based on knowledge graph
CN111858892A (en) * 2020-07-24 2020-10-30 中国平安人寿保险股份有限公司 Voice interaction method, device, equipment and medium based on knowledge graph
CN113920226A (en) * 2021-09-30 2022-01-11 北京有竹居网络技术有限公司 User interaction method and device, storage medium and electronic equipment
CN115831117A (en) * 2022-11-22 2023-03-21 中国工商银行股份有限公司 Entity identification method, entity identification device, computer equipment and storage medium
CN116205294A (en) * 2022-12-23 2023-06-02 苏州大学 Knowledge base self-updating method and device for robot social contact and robot

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180096078A (en) * 2017-02-20 2018-08-29 엘지전자 주식회사 Module type home robot
US11583997B2 (en) * 2018-09-20 2023-02-21 Sony Group Corporation Autonomous robot


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
常开志 (Chang Kaizhi). Research on knowledge-graph-based intelligent question-answering methods for human-computer interaction. Master's thesis, Hunan University, 2022, full text. *
王磊 (Wang Lei). Design of a semantics-based human-computer interaction system. Professional-degree master's thesis, Northwest University, 2021, full text. *


Similar Documents

Publication Publication Date Title
EP3522153B1 (en) Voice control system, wakeup method and wakeup apparatus therefor, electrical appliance and co-processor
US20210383795A1 (en) Voice recognition method and apparatus, and air conditioner
CN111128157B (en) Wake-up-free voice recognition control method for intelligent household appliance, computer readable storage medium and air conditioner
CN111192574A (en) Intelligent voice interaction method, mobile terminal and computer readable storage medium
CN105739688A (en) Man-machine interaction method and device based on emotion system, and man-machine interaction system
CN112101219B (en) Intention understanding method and system for elderly accompanying robot
CN109300483B (en) Intelligent audio abnormal sound detection method
CN109473119B (en) Acoustic target event monitoring method
CN105009204A (en) Speech recognition power management
WO2019223064A1 (en) Voice control method for sweeping robot and sweeping robot
CN111775151A (en) Intelligent control system of robot
CN114974229A (en) Method and system for extracting abnormal behaviors based on audio data of power field operation
CN116625683A (en) Wind turbine generator system bearing fault identification method, system and device and electronic equipment
CN117086871B (en) Open robot interaction control system
CN113450827A (en) Equipment abnormal condition voiceprint analysis algorithm based on compressed neural network
CN111257589B (en) Wind speed measuring method based on CRFID (cross-reference frequency identification) label
JPH07306692A (en) Speech recognizer and sound inputting device
CN115567336B (en) Wake-free voice control system and method based on smart home
CN110738983A (en) Multi-neural-network model voice recognition method based on equipment working state switching
CN114898527A (en) Wearable old man falling detection system and method based on voice assistance
JP4058031B2 (en) User action induction system and method
CN114571473A (en) Control method and device for foot type robot and foot type robot
CN117370870B (en) Knowledge and data compound driven equipment multi-working condition identification and performance prediction method
CN117273013B (en) Electronic data processing method for stroke records
US20220122593A1 (en) User-friendly virtual voice assistant

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant