CN113467266A - Intelligent control method and system for robot and hall acoustoelectric equipment - Google Patents
- Publication number
- CN113467266A (application CN202110846160.8A)
- Authority
- CN
- China
- Prior art keywords
- robot
- hall
- equipment
- intelligent control
- acoustoelectric
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention provides an intelligent control method for a robot and hall acoustoelectric equipment. The system comprises a robot and a robot management platform, and a hall intelligent equipment control system is in communication connection with the robot management platform through the robot. The robot can convert recognized speech content into instructions to intelligently control the hall equipment, and can also sense environmental changes through its own sensors and control the hall equipment accordingly. The method can perform logic judgment, conversion processing and functional calculation, communicate bidirectionally with an external system, and realize self-calibration, self-compensation and self-diagnosis, so that accurate control is achieved according to both instructions and environmental changes. The invention also provides an intelligent control system for the robot and the hall acoustoelectric equipment.
Description
Technical Field
The invention relates to the technical field of intelligent halls, and in particular to an intelligent control method and system for a robot and hall acoustoelectric equipment.
Background
In an intelligent hall, the various devices in the hall are connected to a robot through Internet-of-Things and Internet technology to form an intelligent ecosystem, so that functions such as intelligent light control, intelligent air-conditioner temperature control and intelligent switch control can be realized by voice. The intelligent hall uses integrated cabling, network communication, automatic control, sensing, voice control and intelligent AI technology to build an intelligent hall management system, improving the intelligence, safety and convenience of the hall and realizing an environment-friendly, energy-saving office environment. Compared with an ordinary hall, an intelligent hall not only retains the traditional hall layout and environment functions, but also provides network communication, equipment automation and intelligence, offers comprehensive information interaction, and even saves money on various energy expenses.
Most current intelligent halls are controlled through infrared remote control. However, the degree of intelligence of such control systems is low: users cannot control the strong-current switches, lights, air conditioners, smoke alarms and air purifiers of every area of the hall by voice or automatically according to the hall environment; user voice instructions cannot be dynamically updated online with the help of big data; and hall equipment management cannot be adjusted intelligently according to the hall environment, nor can timely voice broadcasts be made to users. As a result, users do not get a good intelligent interactive experience or real convenience.
Disclosure of Invention
In view of these technical problems, the invention provides an intelligent control method and system for a robot and hall acoustoelectric equipment, improving the degree of intelligence of the hall acoustoelectric equipment control system.
To achieve this purpose, the invention adopts the following technical scheme: an intelligent control method for a robot and hall acoustoelectric equipment, wherein the robot controls the hall acoustoelectric equipment through user voice interaction or through its own sensor perception; the method comprises the following steps:
the intelligent control system comprises a robot, a robot management platform and an intelligent equipment control system; the hall acoustoelectric equipment is managed by the intelligent equipment control system, and the robot communicates with the intelligent equipment control system;
robot voice control: the robot adopts far-field sound pickup, overcoming the limitation of traditional voice control by field size. It collects and recognizes the customer's speech and uploads the recognition result to the robot management platform, which performs semantic analysis on the result, maps the analysis result to an instruction and issues it to the robot. The robot then replies to the customer in accordance with the instruction and intelligently controls the hall equipment, meeting the scenario requirement of actively controlling equipment according to the customer's real-time conversation.
Robot perception: the robot receives sensor data and regularly checks the hall environment (for example the hall lamp switch states, the brightness value and the smoke value of the smoke sensor). Acting as an assembly terminal, it compares the detection values in the current environment against the preset values in a remotely updated policy file. The comparison result is matched to the function execution defined in the policy file, so that the equipment in the hall is controlled automatically and intelligently, and the current operation is recorded.
When the detected data exceeds a set threshold range, a control instruction is generated; the robot parses the control instruction and transmits it to the intelligent equipment control system, thereby controlling the hall acoustoelectric equipment. The robot sensors include a temperature sensor, a light sensor, an air pH detector, a laser sensor and the like.
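The threshold check described above can be sketched in a few lines; the sensor names, ranges and action strings here are illustrative assumptions, not values from the patent.

```python
def check_thresholds(readings, policy):
    """Compare current sensor readings against the policy's allowed ranges
    and emit a control instruction for every out-of-range value."""
    instructions = []
    for sensor, value in readings.items():
        low, high, action = policy[sensor]
        if not (low <= value <= high):
            instructions.append({"sensor": sensor, "value": value, "action": action})
    return instructions

# Hypothetical readings and policy thresholds.
readings = {"temperature": 31.5, "smoke": 0.01}
policy = {
    "temperature": (18.0, 27.0, "set_air_conditioner"),
    "smoke": (0.0, 0.10, "trigger_smoke_alarm"),
}
alerts = check_thresholds(readings, policy)  # only temperature is out of range
```

Each resulting instruction dict stands in for the control instruction the robot would forward to the intelligent equipment control system.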
Robot alarming: the robot, as an assembly terminal, establishes communication connections with each device in the hall and can monitor the state of every connected device, such as its on/off state and running state, which facilitates monitoring and maintenance by staff. If an abnormal running state is encountered, the robot checks, through its integrated sensors (including a temperature sensor, a light sensor, an air pH detector, a laser sensor and the like), against the configuration file issued by the cloud system. For example, when the hall's indoor temperature and humidity, air pH or smoke-sensor values are inconsistent with their thresholds, the camera on the robot's head photographs the scene, and the picture, the abnormal state and the detection data are uploaded to the cloud system for an alarm prompt.
The cloud system adopts a WeChat mini-program architecture: the published mini-program can be subscribed to from the WeChat mobile client. The cloud system communicates with the robot assembly terminal over HTTP, issues instructions carrying JSON messages that update the configuration file on the robot, and retrieves the monitoring state.
The configuration file includes individual threshold settings for the robot's integrated sensors and associated settings that combine multiple sensors.
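A configuration message of this shape might look as follows; every field name and value is an illustrative assumption, since the patent does not disclose the actual JSON schema.

```python
import json

# Hypothetical configuration message: per-sensor thresholds plus an
# associated (multi-sensor) rule, as described in the text.
CONFIG_JSON = """
{
  "sensors": {
    "temperature": {"min": 18.0, "max": 27.0, "action": "set_air_conditioner"},
    "smoke":       {"min": 0.0,  "max": 0.10, "action": "trigger_smoke_alarm"}
  },
  "associated": [
    {"when": ["temperature", "smoke"], "action": "cut_power_and_alarm"}
  ]
}
"""

config = json.loads(CONFIG_JSON)
```

The robot would re-parse such a message whenever the cloud system pushes a configuration update over HTTP.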
More specifically, the robot speech recognition comprises the following steps:
acquiring an audio signal; carrying out voice recognition on the obtained audio signal to obtain an initial recognition text;
performing word segmentation operation on the initial recognition text to obtain a decomposed word group;
recombining the decomposed phrases to obtain a plurality of recombined sentences;
calculating the probability value of each recombined sentence by using an N-Gram model;
calculating a weight value for each recombined sentence with a TF-IDF model over a pre-built service dialogue corpus; calculating a weighted probability value for each recombined sentence from its probability value and weight value; and selecting the recombined sentence whose weighted probability value meets a preset condition as the result recognition text. The speech recognition method of the invention thus uses the N-Gram and TF-IDF models, combined with the service dialogue corpus, to analyze at the semantic level whether each recombined sentence conforms to the service logic, and screens out a conforming recombined sentence as the final recognition text.
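The weighted selection step can be sketched as below; the candidate sentences, the N-Gram probabilities and the TF-IDF weights are toy stand-ins for the real model outputs.

```python
def pick_best(candidates, ngram_prob, tfidf_weight, min_score=0.1):
    """Weighted probability = N-Gram probability x TF-IDF weight;
    return the candidate with the highest weighted score above min_score."""
    scored = [(s, ngram_prob[s] * tfidf_weight[s]) for s in candidates]
    best, score = max(scored, key=lambda pair: pair[1])
    return best if score >= min_score else None

# Toy recombined sentences with stand-in model scores.
cands = ["turn on the hall lights", "turn on the hall flights"]
ngram = {"turn on the hall lights": 0.40, "turn on the hall flights": 0.30}
weight = {"turn on the hall lights": 0.90, "turn on the hall flights": 0.20}
best = pick_best(cands, ngram, weight)
```

The domain weight lets a corpus-plausible sentence win even when the raw language-model probabilities are close.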
Further, the step of recombining the decomposed phrases to obtain a plurality of recombined sentences specifically includes: performing part-of-speech tagging on the decomposed phrases; and recombining to obtain a plurality of recombined sentences according to the part of speech of the decomposed phrases.
Further, after the step of acquiring the audio signal, the method further includes: the robot judges whether the surrounding environment is occupied, and if yes, voice recognition is carried out on the obtained audio signals to obtain an initial recognition text.
Further, when the audio signal is acquired, the sound source position is acquired at the same time, and whether the sound source position is occupied or not is judged; and only when the sound source direction is judged to be a person, carrying out voice recognition on the obtained audio signal to obtain an initial recognition text.
Furthermore, the semantics of the result recognition text are identified with a pre-trained semantic recognition model to obtain a semantic result.
More preferably, the speech recognition method provided by the present invention uses a microphone array to acquire the audio signal from a preset direction, using spatial filtering to enhance the audio signal in the preset direction consistent with the sound source direction while suppressing noise from other directions. Noise text mixed in from other users or machines can thus be filtered out, yielding a recognition result that matches the current conversation scene, improving recognition accuracy, and improving the efficiency and experience of interaction between the intelligent robot and users.
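Spatial filtering of this kind is commonly done by delay-and-sum beamforming; the following is a minimal two-microphone toy with an integer sample lag, an illustrative sketch rather than the patent's actual array processing.

```python
import math

def delay_and_sum(channels, steer_delays):
    """Shift each channel by its steering delay (in samples) and average,
    so signals arriving from the steered direction add coherently."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(channels, steer_delays):
            idx = t - d
            if 0 <= idx < n:
                acc += sig[idx]
        out.append(acc / len(channels))
    return out

# A source in the steered direction reaches mic 2 three samples late.
n, lag = 64, 3
src = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
mic1 = src[:]
mic2 = [0.0] * lag + src[:-lag]
# Steering delay -lag advances channel 2 back into alignment with channel 1.
enhanced = delay_and_sum([mic1, mic2], [0, -lag])
```

With the correct steering delays the two channels add in phase and the steered-direction signal is preserved at full amplitude, while signals from other directions stay misaligned and partially cancel.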
Furthermore, the detection values in the current environment are matched against the multiple preset values of the policy file; a unique matching value in the operation instruction set is obtained through double mapping, which mainly solves the problem that multiple peripheral triggers match multiple execution-result instructions in the policy file. The specific steps are as follows:
step 1: the robot patrols to the position of a monitoring area; sensor detection values are sent to the robot's middleware service module over HTTP, socket, serial-port and USB-port protocols, and the middleware service module stores the collected detection values into a shared cache data set k1 as key-value pairs (the robot can also move to an alarm area actively in response to an area equipment alarm);
step 2: the robot's interactive session service pre-loads the issued policy configuration file (JSON) and loads the integrated-sensor trigger conditions and the trigger execution-result instructions into its own service cache data set k2, where the relationship between a trigger peripheral condition and trigger execution-result instructions is 1:N;
step 3: when the robot triggers an alarm actively or passively (an active alarm means the robot is triggered by an integrated sensor; a passive alarm means a configured area device alarms and triggers the robot), the robot's interactive session service reads the shared cache data set k1 in the peripheral management middleware service module of step 1 and the policy configuration file JSON data set k2 of step 2, and obtains the unique instruction set to be executed by the robot using methods such as the Cantor pairing function and reverse double mapping.
The Cantor pairing function is defined by the formula:
π(k1, k2) = (k1 + k2)(k1 + k2 + 1)/2 + k2
When the pairing function is applied to k1 and k2, the resulting natural number is commonly written ⟨k1, k2⟩.
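The Cantor pairing function and its inverse (the "reverse double mapping") can be implemented directly; a minimal sketch:

```python
import math

def cantor_pair(k1, k2):
    """Encode the pair (k1, k2) of naturals as a single natural number."""
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z):
    """Invert the pairing: recover (k1, k2) from z."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2
    k2 = z - t
    k1 = w - k2
    return k1, k2

# Each (detection-set key, policy key) pair maps to one unique code, so a
# multi-peripheral trigger can be resolved to exactly one instruction entry.
code = cantor_pair(3, 5)        # 41
```

Because the pairing is a bijection between pairs and naturals, the combined key collides with no other (k1, k2) combination, which is what makes the resulting instruction lookup unique.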
step 4: the robot's interactive session service obtains the unique execution-result instruction, alarms the administrator and executes the acousto-optic-electric control of the corresponding area.
The invention also aims to provide an intelligent control system for the robot and the hall acoustoelectric equipment, which can support the active dialogue function through the assistance of the dialogue system and can perform corresponding intelligent control on the hall equipment under a specific scene by utilizing the self-sensing function.
The invention has the following beneficial effects. With the intelligent control method for the robot and the hall acoustoelectric equipment, the robot serves as the control center of the hall; through voice interaction with the robot, a user can intelligently control equipment across the whole hall, such as the Internet-of-Things air conditioners, the lamp control system, the strong-current switches and the smoke alarm system. The user's voice information is updated online in real time with the help of the dialogue system, bringing the user a better interactive experience and greater convenience.
With the intelligent control system for the robot and the hall acoustoelectric equipment, the hall environment is sensed through the robot's sensing modules, such as laser sensing, temperature sensing, air pH sensing and illumination sensing, and the Internet-of-Things equipment of the whole hall, such as the air conditioners, the lamp control system, the strong-current switches and the smoke alarm system, is controlled automatically: when nobody is detected in the hall, area lights, air conditioners and similar equipment are turned off; the air conditioner is turned on or off automatically according to the indoor temperature; and a smoke alarm is raised according to the indoor air quality. This improves the degree of intelligence of the hall, makes the allocation of the hall network-point resources more reasonable and effective, and brings users a more comfortable and more intelligent experience.
Drawings
Fig. 1 is a diagram showing the relationship between the robot, the hall acoustoelectric equipment intelligent control system and the sensors according to an embodiment of the present invention.
Fig. 2 is a flowchart of an intelligent control method for the robot and the hall acoustoelectric device according to the embodiment of the invention.
Fig. 3 is a structural diagram of an intelligent control system for robots and hall acoustoelectric devices according to an embodiment of the present invention.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following embodiments and accompanying drawings.
In the method and system for intelligently controlling the robot and the hall acoustoelectric equipment of this embodiment, the robot serves as the hall control center and controls the hall acoustoelectric equipment, including but not limited to lights, air conditioners, strong-current switches and smoke alarms, through voice interaction or through the robot's sensor perception.
Example 1:
acquiring an audio signal; carrying out voice recognition on the obtained audio signal to obtain an initial recognition text; performing word segmentation operation on the initial recognition text to obtain a decomposed word group; recombining the decomposed phrases to obtain a plurality of recombined sentences; calculating the probability value of each recombined sentence by using an N-Gram model; calculating the weight value of each recombined sentence by using a TF-IDF model according to a pre-constructed service dialogue corpus; and calculating the weighted probability value of each recombined sentence according to the probability value and the weighted value of each recombined sentence, and selecting the recombined sentence with the weighted probability value meeting the preset condition as a result identification text. In some embodiments, the step of recombining the decomposed phrases to obtain a plurality of recombined sentences specifically includes: performing part-of-speech tagging on the decomposed phrases; and recombining to obtain a plurality of recombined sentences according to the part of speech of the decomposed phrases. In some embodiments, after the step of acquiring the audio signal, the method further comprises: the robot judges whether a person exists, and if yes, voice recognition is carried out on the obtained audio signal to obtain an initial recognition text. In some embodiments, when acquiring audio signals, the sound source bearing may also be acquired; the step of judging whether a person exists specifically comprises the following steps: and judging whether the sound source direction is occupied, and performing voice recognition on the acquired audio signal only when the sound source direction is occupied to obtain an initial recognition text. In some embodiments, the recognition result recognizes the semantics of the text by using a pre-trained semantic recognition model, and a semantic result is obtained.
The speech recognition method of this embodiment performs word segmentation and recombination on the initial recognition text obtained by speech recognition of the audio signal, uses the N-Gram and TF-IDF models combined with the service dialogue corpus to analyze at the semantic level whether each recombined sentence conforms to the service logic, and screens out a conforming recombined sentence as the final recognition text.
The scene in which the intelligent robot interacts with a user generally contains a large amount of environmental noise. To avoid acquiring a large number of invalid audio signals, the intelligent robot may acquire the audio signal from a preset direction. Specifically, a directional sound pickup device, such as a microphone array, may be used to acquire the audio signal in the preset direction, using spatial filtering to enhance the audio signal from the preset direction and suppress noise from other directions.
In this embodiment, the directional sound pickup device may be a six-microphone array, which can acquire the sound source direction while acquiring the audio signal. The intelligent robot may be provided with an interaction direction, for example a certain angular range directly in front of the robot; the preset direction for acquiring the audio signal may be the same as the interaction direction.
Example 2:
The robot senses the hall environment through its sensors and monitors it according to a preset policy. When a monitored value satisfies a scene condition, the robot makes a voice broadcast to hall personnel for corresponding guidance and can control the hall equipment according to the scene, realizing linkage between environment, scene and equipment. This helps the hall network point make reasonable and effective use of network resource allocation and effectively saves resources.
In addition, the robot monitors the temperature and smoke concentration in the current environment through its sensors, performs emergency handling, guides personnel along evacuation routes using its mobility, and at the same time switches off the hall's air switch (circuit breaker) through network communication control.
Specifically, as shown in fig. 1, the robot regularly checks the hall environment (light, air temperature, air pH value, presence or absence of people) through the temperature sensor, light sensor, air pH detector and laser sensor. As an assembly terminal, it compares the detection values in the current environment against the preset values of the remotely updated policy configuration file and obtains a unique operation instruction set matching value through double mapping, mainly solving the problem that multiple peripheral triggers match multiple execution-result instructions in the policy file. The key steps are implemented as follows:
step 1: the robot automatically patrols to the position of a monitoring area using its integrated peripherals, which include a temperature sensor, an infrared camera, an annular six-microphone voice recognition board, a light sensor, an air pH detector, a laser sensor and the like. A peripheral management middleware service program collects the peripheral sensor data and sends it to the robot's core linux control system over HTTP, socket, serial-port and USB-port protocols, and the middleware service stores the collected peripheral data into a shared cache data set k1 as key-value pairs (the robot can also move to an alarm area actively in response to an area equipment alarm).
step 2: the interactive session service in the robot's core linux control system pre-loads the issued policy configuration file (JSON) and loads the trigger peripheral conditions and the trigger execution-result instructions into its own service cache data set k2, where the relationship between a trigger peripheral condition and trigger execution-result instructions is 1:N.
step 3: when the robot triggers an alarm actively or passively (an active alarm means the robot is triggered by an integrated peripheral; a passive alarm means a configured area device alarms and triggers the robot), the interactive session service in the robot's core linux control system reads the shared cache data set k1 in the peripheral management middleware service program of step 1 and the policy configuration file JSON data set k2 of step 2, and obtains the unique instruction set to be executed by the robot using methods such as the Cantor pairing function and reverse double mapping.
The Cantor pairing function is defined by the formula:
π(k1, k2) = (k1 + k2)(k1 + k2 + 1)/2 + k2
When the pairing function is applied to k1 and k2, the resulting natural number is commonly written ⟨k1, k2⟩.
step 4: the interactive session service in the robot's core linux control system obtains the unique execution-result instruction, alarms the administrator and executes the acousto-optic-electric control of the corresponding area.
As shown in fig. 2, the comparison result is matched to the function execution in the policy file, so that the devices in the hall are controlled automatically and intelligently. For example, when the current actual temperature is higher than the preset temperature value in the policy configuration file, the robot sends a control instruction to the air-conditioning equipment with which communication has been established, the air-conditioning temperature is adjusted to the preset value, and the current operation is recorded.
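The temperature example above amounts to the following; the device name, command strings and log shape are illustrative assumptions.

```python
def apply_temperature_policy(current_temp, preset_temp, log):
    """If the measured temperature exceeds the policy preset, emit an
    air-conditioner instruction targeting the preset and record the action."""
    if current_temp <= preset_temp:
        return None
    instruction = {"device": "air_conditioner",
                   "command": "set_temperature",
                   "target": preset_temp}
    log.append(("set_temperature", current_temp, preset_temp))
    return instruction

operations_log = []
cmd = apply_temperature_policy(29.0, 26.0, operations_log)
# cmd targets the preset 26.0 and the operation is recorded in operations_log
```

When the measured value is already within the preset, no instruction is emitted and nothing is logged, matching the "only act and record on mismatch" behavior described above.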
Example 3:
As shown in fig. 3, the hall acoustoelectric equipment is managed by the intelligent equipment control system, and the robot communicates with the intelligent equipment control system, so that all devices in the hall and the robot form a whole and resources can be shared among the devices.
An intelligent control system for a robot and hall acoustoelectric equipment comprises the robot, a robot management platform, a big-data dialogue system, an intelligent equipment control system and hall acoustoelectric self-service equipment. The hall self-service equipment is connected to and controlled by the hall intelligent equipment control system, which is in communication connection with the robot management platform through the robot (the robot receives instructions issued by the robot management platform and transmits the relevant instruction information to the intelligent equipment control system, which then controls the respective devices).
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical solution according to the technical idea of the present invention falls within the protection scope of the present invention.
Claims (9)
1. An intelligent control method for a robot and hall acoustoelectric equipment, characterized in that: the hall acoustoelectric equipment control system comprises a robot, a robot management platform and an intelligent equipment control system; the hall acoustoelectric equipment is managed by the intelligent equipment control system, and the robot communicates with the intelligent equipment control system;
the robot collects and recognizes a customer's speech in far-field sound pickup mode and uploads the recognition result to the robot management platform; the robot management platform performs semantic analysis on the recognition result, maps the analysis result to an instruction and issues it to the robot; and the robot controls the hall equipment according to the instruction;
the robot receives sensor data, regularly checks the hall environment as an assembly terminal, and compares the detection values in the current environment against the preset values of a remotely updated policy file; the comparison result is matched to the function execution in the policy file, so that the equipment in the hall is controlled automatically and intelligently;
the robot monitors the state of the currently connected hall equipment; when an abnormal running state occurs and the robot detects through its integrated sensors that the state is inconsistent with the configuration file issued by the cloud system, the camera on the robot's head photographs the scene, and the picture, the abnormal state and the detection data are uploaded to the cloud system for an alarm prompt;
the cloud system communicates with the robot over HTTP, issues instructions carrying JSON messages that update the configuration file on the robot, and retrieves the monitoring state;
the configuration file includes individual threshold settings for the robot's integrated sensors and associated settings combining a variety of sensors.
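As an illustration only, a configuration file of the kind described in claim 1 — individual sensor thresholds plus an associated rule combining several sensors — might be parsed as a JSON message like the following sketch. The patent does not disclose its actual message schema, so every field name here is an assumption:

```python
import json

# Hypothetical configuration message: individual thresholds per integrated
# sensor, plus an "associated" rule combining several sensors.
# All field names are illustrative assumptions, not the patent's schema.
CONFIG_JSON = """
{
  "sensors": {
    "temperature": {"min": 18.0, "max": 26.0},
    "light":       {"min": 200,  "max": 800}
  },
  "associated_rules": [
    {
      "when": {"temperature": "max", "light": "max"},
      "actions": ["dim_lights", "raise_cooling", "notify_admin"]
    }
  ]
}
"""

config = json.loads(CONFIG_JSON)

def out_of_range(sensor, value):
    """Check one reading against that sensor's individual threshold settings."""
    limits = config["sensors"][sensor]
    return not (limits["min"] <= value <= limits["max"])

print(out_of_range("temperature", 30.0))  # True: above the 26.0 ceiling
print(out_of_range("light", 500))         # False: within range
```

A robot-side monitor would evaluate `out_of_range` per sensor and fire the associated rule's actions when the combined condition holds.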
2. The intelligent control method for the robot and the hall acoustoelectric equipment according to claim 1, wherein the robot speech recognition comprises: acquiring an audio signal; performing speech recognition on the acquired audio signal to obtain an initial recognition text;
performing a word segmentation operation on the initial recognition text to obtain decomposed phrases;
recombining the decomposed phrases to obtain a plurality of recombined sentences;
calculating a probability value for each recombined sentence using an N-Gram model;
calculating a weight value for each recombined sentence using a TF-IDF model based on a pre-constructed service dialogue corpus; calculating a weighted probability value for each recombined sentence from its probability value and weight value; and selecting the recombined sentence whose weighted probability value meets a preset condition as the resulting recognition text.
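A minimal sketch of the weighted reranking in claim 2, using a toy bigram model and a simplified TF-IDF weight. The patent's actual models, corpus and preset selection condition are not disclosed; the corpus, candidates and weighting below are illustrative assumptions:

```python
import math
from collections import Counter

# Toy candidate "recombined sentences" (tokenized); in the claim these come
# from reordering the segmented phrases of the initial recognition text.
candidates = [
    ["open", "the", "hall", "screen"],
    ["the", "open", "hall", "screen"],
]

# Tiny bigram model trained on a toy corpus (stand-in for a real N-Gram model).
corpus = [["open", "the", "hall", "screen"], ["close", "the", "hall", "door"]]
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
unigrams = Counter(w for s in corpus for w in s)

def ngram_prob(sentence):
    """Product of add-one-smoothed bigram probabilities."""
    p, vocab = 1.0, len(unigrams)
    for a, b in zip(sentence, sentence[1:]):
        p *= (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
    return p

# Simplified TF-IDF weight against the service-dialogue corpus:
# the mean smoothed IDF of the sentence's distinct words.
docs = [set(s) for s in corpus]
def tfidf_weight(sentence):
    words, n = set(sentence), len(docs)
    return sum(
        math.log((n + 1) / (1 + sum(w in d for d in docs))) + 1 for w in words
    ) / len(words)

# Weighted probability = N-Gram probability x TF-IDF weight; keep the best.
best = max(candidates, key=lambda s: ngram_prob(s) * tfidf_weight(s))
print(" ".join(best))  # "open the hall screen"
```

Since both candidates share the same word set, the TF-IDF weights tie and the bigram model decides: the word order seen in the corpus wins.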
3. The intelligent control method for the robot and the hall acoustoelectric equipment according to claim 2, wherein recombining the decomposed phrases to obtain a plurality of recombined sentences specifically comprises: performing part-of-speech tagging on the decomposed phrases; and recombining the phrases according to their parts of speech to obtain a plurality of recombined sentences.
4. The intelligent control method for the robot and the hall acoustoelectric equipment according to claim 2 or 3, characterized in that: after the audio signal is acquired, the method further comprises: the robot judging whether a person is present in the surrounding environment, and if so, performing speech recognition on the acquired audio signal to obtain the initial recognition text.
5. The intelligent control method for the robot and the hall acoustoelectric equipment according to claim 4, characterized in that: when the audio signal is acquired, the sound source direction is acquired at the same time, and it is judged whether a person is present in the sound source direction; speech recognition is performed on the acquired audio signal to obtain the initial recognition text only when a person is judged to be present in the sound source direction.
6. The intelligent control method for the robot and the hall acoustoelectric equipment according to claim 5, characterized in that: a microphone array is used to acquire audio signals in preset directions, and the spatial filtering characteristic of the array is used to enhance the audio signals in the preset direction and the sound source direction, thereby suppressing noise from other directions.
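The spatial filtering in claim 6 can be illustrated with a two-microphone delay-and-sum beamformer. The array geometry, sample rate and look directions below are illustrative assumptions; real far-field arrays use more microphones and fractional-delay filters:

```python
import math

SAMPLE_RATE = 16000      # Hz (assumed)
MIC_SPACING = 0.1        # metres between the two mics (assumed)
SPEED_OF_SOUND = 343.0   # m/s

def steering_delay(angle_deg):
    """Whole-sample delay between the two mics for a source at angle_deg
    (0 degrees = directly in front of the array)."""
    tau = MIC_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
    return round(tau * SAMPLE_RATE)

def delay_and_sum(mic1, mic2, angle_deg):
    """Advance mic2 by the steering delay for the look direction and average:
    sound from that direction adds coherently; other directions are attenuated."""
    d = steering_delay(angle_deg)
    out = []
    for n in range(len(mic1)):
        j = n + d
        m2 = mic2[j] if 0 <= j < len(mic2) else 0.0
        out.append(0.5 * (mic1[n] + m2))
    return out

# Simulate a 1 kHz tone arriving from 30 degrees: mic2 hears it `d` samples late.
tone = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(256)]
d = steering_delay(30.0)
mic1 = tone
mic2 = [0.0] * d + tone[:-d]

energy = lambda x: sum(v * v for v in x)
steered_on = delay_and_sum(mic1, mic2, 30.0)    # look at the source
steered_off = delay_and_sum(mic1, mic2, -30.0)  # look away from it
print(energy(steered_on) > energy(steered_off))  # True: source direction enhanced
```

Steering toward the true source re-aligns the two channels so they add in phase, which is the enhancement effect the claim relies on; mis-steering leaves a residual phase offset that partially cancels the signal.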
7. The intelligent control method for the robot and the hall acoustoelectric equipment according to claim 1, characterized in that: a matching calculation is performed between the detection values in the current environment and a plurality of preset values of the policy file, and a unique matching value in the operation instruction set is obtained by using a bijective mapping, the specific steps comprising:
step 1, the robot patrols the monitoring area; sensor detection values are sent to the robot's middleware service module via HTTP, socket, serial-port and USB protocols, and the middleware service module stores the collected detection values as key-value pairs in a shared cache data set k1;
step 2, the robot's interactive session service loads the issued policy configuration file (JSON) in advance, loading the integrated-sensor trigger conditions and the trigger execution result instructions into its own service cache data set k2, wherein the relationship between trigger peripheral conditions and trigger execution result instructions is 1:N;
step 3, when the robot triggers an alarm actively or passively (an active alarm means the robot is triggered by one of its integrated sensors; a passive alarm means equipment in the configured area raises an alarm and triggers the robot), the robot's interactive session service reads the shared cache data set k1 in the middleware service module of step 1 and the policy configuration file data set k2 of step 2, and obtains the unique instruction set to be executed by the robot by applying the Cantor pairing function formula and its inverse bijective mapping;
the Cantor pairing function is defined by the formula: π(k1, k2) = ((k1 + k2)(k1 + k2 + 1))/2 + k2; applying the pairing function to k1 and k2 yields the unique value π(k1, k2);
step 4, the robot's interactive session service obtains the unique execution result instruction, alerts the administrator and executes the acousto-optic and electric control in the corresponding area.
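The Cantor pairing function named in step 3 and its inverse are standard constructions; a sketch follows. How the sensor states in k1 and the policy entries in k2 are encoded as integers is not specified by the patent, so the inputs here are arbitrary examples:

```python
import math

def cantor_pair(k1, k2):
    """Cantor pairing function: maps the pair (k1, k2) of non-negative
    integers to a unique natural number."""
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z):
    """Inverse bijective mapping: recover the unique (k1, k2) from z."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # diagonal index
    t = w * (w + 1) // 2                   # first value on that diagonal
    k2 = z - t
    k1 = w - k2
    return k1, k2

# Because the pairing is a bijection, a (sensor-state, policy-entry) index
# pair maps to exactly one key, which can select a unique instruction.
z = cantor_pair(3, 5)
print(z)                 # 41
print(cantor_unpair(z))  # (3, 5)
```

The bijectivity is what guarantees the "unique matching value" of claim 7: no two distinct (k1, k2) pairs collide, and the inverse recovers both indices from the stored key.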
8. An intelligent control system for a robot and hall acoustoelectric equipment, characterized in that it applies the intelligent control method for the robot and the hall acoustoelectric equipment according to any one of claims 1 to 7.
9. The intelligent control system for the robot and the hall acoustoelectric equipment according to claim 8, wherein the robot's sensors comprise a temperature sensor, a light sensor, an air pH detector and a laser sensor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110846160.8A CN113467266B (en) | 2021-07-26 | 2021-07-26 | Intelligent control method and system for robot and hall acousto-optic and electric equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113467266A true CN113467266A (en) | 2021-10-01 |
CN113467266B CN113467266B (en) | 2024-03-01 |
Family
ID=77882471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110846160.8A Active CN113467266B (en) | 2021-07-26 | 2021-07-26 | Intelligent control method and system for robot and hall acousto-optic and electric equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113467266B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106406119A (en) * | 2016-11-15 | 2017-02-15 | 福州大学 | Service robot based on voice interaction, cloud technology and integrated intelligent home monitoring |
CN107199572A (en) * | 2017-06-16 | 2017-09-26 | 山东大学 | A kind of robot system and method based on intelligent auditory localization and Voice command |
CN110246190A (en) * | 2019-06-10 | 2019-09-17 | 南京奥拓电子科技有限公司 | A kind of robot interactive method that more technologies are realized |
CN110297429A (en) * | 2019-07-02 | 2019-10-01 | 南京奥拓电子科技有限公司 | A kind of method and system of robot and bank outlets' hall device interoperability |
CN110390540A (en) * | 2018-04-18 | 2019-10-29 | 南京奥拓电子科技有限公司 | Based on face, semanteme and the service of the robot of motion control and Precision Marketing Method |
CN111399388A (en) * | 2020-03-27 | 2020-07-10 | 江苏安全技术职业学院 | Artificial intelligence house environmental control system based on big data |
CN111844039A (en) * | 2020-07-23 | 2020-10-30 | 上海上实龙创智能科技股份有限公司 | Wisdom space system based on robot control |
Non-Patent Citations (2)
Title |
---|
CHENG Sheng: "Mapping and matching relationships between different images in a visual sensing integration system", Optoelectronics · Laser, pages 706-709 *
ZHENG Wei, et al.: "A fault diagnosis method for causal mapping networks based on multi-sensor fusion", Journal of System Simulation, vol. 20, no. 21, pages 5944-5946 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11354089B2 (en) | System and method for dialog interaction in distributed automation systems | |
US9805575B2 (en) | Smart LED lighting system and monitoring method thereof | |
CN107139184A (en) | A kind of intelligent robot of accompanying and attending to | |
US20130100268A1 (en) | Emergency detection and response system and method | |
CN204613728U (en) | A kind of long-range unattended transforming of industry spot | |
CN105371426A (en) | Intelligent air purifying system and use method thereof | |
KR101815482B1 (en) | Indoor environmental control system and control method for automatic | |
CN111844039A (en) | Wisdom space system based on robot control | |
CN110660189A (en) | Multifunctional fire alarm based on narrowband Internet of things and alarm method | |
CN107368044A (en) | A kind of real-time control method of intelligent electric appliance, system | |
CN205754618U (en) | Elevator intelligent monitoring camera and elevator monitoring system | |
CN113467266A (en) | Intelligent control method and system for robot and hall acoustoelectric equipment | |
CN211842015U (en) | Household dialogue robot based on multi-microphone fusion | |
CN112859697A (en) | Warehouse monitoring system based on AI environmental perception | |
Abdallah et al. | Smart assistant robot for smart home management | |
CN206223179U (en) | A kind of old man uses location navigation button | |
CN110017573A (en) | A kind of automation HVAC system | |
CN109803013B (en) | Weak interaction system based on artificial intelligence and control method thereof | |
CN109062148A (en) | A kind of meeting intelligence control system | |
WO2018151972A1 (en) | A evacuating system and evacuating method | |
JP6792733B1 (en) | Robot communication systems, communication robots, and remote monitoring methods | |
CN106406113A (en) | Vehicle applied remote control system for smart household appliances | |
CA2792621A1 (en) | Emergency detection and response system and method | |
KR100945064B1 (en) | Ubiquitous system for integrated management and control of elevator | |
KR20220032662A (en) | System for providing emergency alarming service using voice message |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||