CN115019799A - Man-machine interaction method and system based on long voice recognition - Google Patents

Man-machine interaction method and system based on long voice recognition

Info

Publication number
CN115019799A
Authority
CN
China
Prior art keywords
instruction
information
cleaning
scanning
sound
Prior art date
Legal status
Pending
Application number
CN202210931371.6A
Other languages
Chinese (zh)
Inventor
李昕
李钦清
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority: CN202210931371.6A
Publication: CN115019799A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S 3/80 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received, using ultrasonic, sonic or infrasonic waves
    • G01S 3/802 - Systems for determining direction or deviation from predetermined direction
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention relates to the field of smart homes, and discloses a man-machine interaction method and system based on long voice recognition.

Description

Man-machine interaction method and system based on long voice recognition
Technical Field
The invention relates to the field of smart homes, and in particular to a man-machine interaction method and system based on long voice recognition.
Background
With the rapid development of computer technology, smart devices have become more varied and widespread. Smart home devices, for example, are now widely used, and most can be controlled in various ways such as voice, touch and mobile terminal devices, making control more convenient.
However, once a prior-art voice-controlled smart device, in particular an intelligent cleaning robot, is no longer under direct manual control, its fixed, cyclic operating mode is not intelligent enough to handle real-time situations. For example, when the owner is away, a pet left alone at home may knock over objects, causing spills and breakage. An existing intelligent cleaning robot cannot respond to such an event and clean it up, so the spilled material may adhere over time and become impossible to remove, or give off a strong odour.
Disclosure of Invention
The invention aims to provide a man-machine interaction method and system based on long speech recognition, so as to solve the problems raised in the background art above.
In order to achieve the purpose, the invention provides the following technical scheme:
a human-computer interaction system based on long speech recognition comprises:
the standby awakening module is used for monitoring and acquiring sound signals in real time through a sound sensor, distinguishing the types of the sound signals, guiding to execute a voice recognition program if the distinguished types are voices, and guiding to execute a positioning scanning program if the distinguished types are non-voice contents, wherein the sound signals comprise a plurality of sound data and azimuth data corresponding to the sound data, and the azimuth data are used for representing the acquisition azimuth of the sound data;
a speech recognition module for executing a speech recognition program, the speech recognition program comprising the steps of: recognizing the sound signal to obtain text information, recognizing and obtaining instruction keywords according to the text information, and establishing a behavior guide instruction according to the instruction keywords, wherein the behavior guide instruction comprises a conventional instruction and a temporary instruction, and the temporary instruction is used for guiding and executing a positioning scanning program;
a positioning and scanning module for executing a positioning and scanning program, the positioning and scanning program comprising the steps of: acquiring the azimuth data in the sound signal, generating a sector scanning area according to the azimuth data, performing coverage scanning on the sector scanning area, and acquiring cleaning object information, wherein the cleaning object information is used for representing the type, size and distribution of foreign matters;
and the execution cleaning module is used for analyzing the cleaning object information according to a preset cleaning execution scheme, generating a cleaning scheme and outputting the cleaning scheme, wherein the cleaning scheme is used for controlling the robot to operate.
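The cooperation of the four modules above can be sketched as a small dispatch loop: classify the incoming sound, then route it to the speech recognition program or the positioning scanning program. This is only an illustration under assumed interfaces (`SoundSignal`, `classify` and `dispatch` are hypothetical names, not from the patent; a real system would use a proper acoustic-event classifier):

```python
from dataclasses import dataclass

@dataclass
class SoundSignal:
    samples: list          # raw sound data frames from the sound sensor
    azimuth_deg: float     # direction the sound was captured from (azimuth data)

def classify(signal: SoundSignal) -> str:
    """Placeholder classifier: decide whether the signal is human speech
    or non-voice content (e.g. an object falling). A real system would
    run a VAD / acoustic-event classifier here."""
    return "speech" if len(signal.samples) > 10 else "non-speech"

def dispatch(signal: SoundSignal) -> str:
    """Route the signal the way the standby awakening module does:
    speech goes to the speech recognition program, anything else to
    the positioning scanning program."""
    if classify(signal) == "speech":
        return "run_speech_recognition"
    return "run_positioning_scan"

print(dispatch(SoundSignal(samples=[0] * 20, azimuth_deg=45.0)))   # speech path
print(dispatch(SoundSignal(samples=[0] * 3, azimuth_deg=120.0)))   # scan path
```

The key design point is that the robot never idles: every acquired signal ends up in exactly one of the two programs.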
As a further scheme of the invention: the standby awakening module comprises a positioning auxiliary unit;
the positioning auxiliary unit is used for detecting the monitoring direction of the sound signal through sensing equipment when the sound signal is monitored, and acquiring sensing information corresponding to the sound signal, wherein the sensing information is used for assisting the generation of the azimuth data, and the sensing information represents the existence of a biological source and a heat source.
As a further scheme of the invention: the positioning and scanning module comprises an environment simulation unit, and the environment simulation unit specifically comprises:
the environment memory subunit is used for updating regional environment information after the robot responds to the conventional instruction or the temporary instruction and executes the corresponding control, wherein the regional environment information is used for representing the object distribution condition of the robot cleaning region;
the environment comparison subunit is used for comparing the scanning information with the area environment information to acquire the cleaning object information when the sector scanning area is scanned and covered or the conventional instruction is responded;
and the information feedback subunit is used for acquiring the image data of the cleaning object and outputting the image data through a local area network.
As a further scheme of the invention: the text information comprises an execution object and instruction content, the instruction content corresponds to a preset instruction execution library, the instruction content comprises the conventional instruction and the temporary instruction, and the conventional instruction comprises instruction content and an instruction execution area.
As a further scheme of the invention: the voice recognition module specifically comprises:
the segmentation unit is used for carrying out segmentation identification on the sound signal through a preset break time threshold value to generate a plurality of segmentation marks;
the conversion unit is used for identifying and converting the sound signal to obtain text information, and the text information comprises a plurality of segmentation marks;
and the identification unit is used for identifying the text information through a preset object identification library and an instruction identification library, acquiring instruction keywords and establishing a behavior guide instruction.
The embodiment of the invention aims to provide a man-machine interaction method based on long voice recognition, which comprises the following steps:
monitoring and acquiring sound signals in real time through a sound sensor, distinguishing the types of the sound signals, guiding to execute a voice recognition program if the distinguishing type is voice, and guiding to execute a positioning scanning program if the distinguishing type is non-voice content, wherein the sound signals comprise a plurality of sound data and azimuth data corresponding to the sound data, and the azimuth data are used for representing the acquiring azimuth of the sound data;
executing a speech recognition program, said speech recognition program comprising the steps of: recognizing the sound signal to obtain text information, recognizing and obtaining instruction keywords according to the text information, and establishing a behavior guide instruction according to the instruction keywords, wherein the behavior guide instruction comprises a conventional instruction and a temporary instruction, and the temporary instruction is used for guiding and executing a positioning scanning program;
executing a positioning and scanning program, wherein the positioning and scanning program comprises the following steps: acquiring the azimuth data in the sound signal, generating a sector scanning area according to the azimuth data, performing coverage scanning on the sector scanning area, and acquiring cleaning object information, wherein the cleaning object information is used for representing the type, size and distribution of foreign matters;
and analyzing the cleaning object information according to a preset cleaning execution scheme, generating a cleaning scheme and outputting the cleaning scheme, wherein the cleaning scheme is used for controlling the operation of the robot.
As a further scheme of the invention: the method comprises the following steps:
when the sound signal is monitored, detecting the monitoring direction of the sound signal through sensing equipment, and acquiring sensing information corresponding to the sound signal, wherein the sensing information is used for assisting the generation of the azimuth data, and the sensing information represents the existence of a biological source and a heat source.
As a still further scheme of the invention: the step of performing coverage scanning on the sector scanning area to obtain the information of the cleaning object specifically comprises:
when the robot responds to the conventional instruction or the temporary instruction and executes the corresponding control, updating regional environment information, wherein the regional environment information is used for representing the object distribution condition of the cleaning region of the robot;
when the sector scanning area is scanned and covered or the conventional instruction is responded, comparing scanning information with area environment information to obtain cleaning object information;
and acquiring image data of the cleaning object, and outputting the image data through a local area network.
Compared with the prior art, the invention has the following beneficial effects: through the cooperation of the related modules, the cleaning robot gains a more varied and intelligent set of control modes, and can promptly detect and clean up foreign matter produced by falls, spills and similar events even without active manual control. This effectively avoids the prior-art problems of odour spreading from dirt the robot never notices, and of spills becoming impossible to remove conveniently once they have set.
Drawings
FIG. 1 is a block diagram of a human-computer interaction system based on long speech recognition.
Fig. 2 is a block diagram of an environment simulation unit in a man-machine interactive system based on long speech recognition.
FIG. 3 is a block diagram of a speech recognition module in a human-computer interaction system based on long speech recognition.
FIG. 4 is a flow chart of a human-computer interaction method based on long speech recognition.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The technical solutions of the present invention are described in further detail below with reference to specific embodiments.
As shown in fig. 1, a human-computer interaction system based on long speech recognition provided for an embodiment of the present invention includes:
the standby awakening module 100 is configured to monitor and acquire a sound signal in real time through a sound sensor, identify the type of the sound signal, guide execution of a voice recognition program if the type of the identification is voice, guide execution of a positioning scanning program if the type of the identification is non-voice content, the sound signal includes a plurality of sound data and orientation data corresponding to the sound data, and the orientation data is used for representing the acquisition orientation of the sound data.
A speech recognition module 300 for executing a speech recognition procedure, said speech recognition procedure comprising the steps of: the voice signal is recognized to obtain text information, instruction keywords are recognized and obtained according to the text information, behavior guide instructions are established according to the instruction keywords, the behavior guide instructions comprise conventional instructions and temporary instructions, and the temporary instructions are used for guiding execution of a positioning scanning program.
A positioning and scanning module 500, configured to execute a positioning and scanning procedure, where the positioning and scanning procedure includes the steps of: acquiring the azimuth data in the sound signal, generating a sector scanning area according to the azimuth data, performing coverage scanning on the sector scanning area, and acquiring cleaning object information, wherein the cleaning object information is used for representing the type, size and distribution of foreign matters.
And the cleaning execution module 700 is configured to analyze the cleaning object information according to a preset cleaning execution scheme, generate and output a cleaning scheme, where the cleaning scheme is used to control the operation of the robot.
In this embodiment, a human-computer interaction system based on long speech recognition is provided, specifically a voice- and sound-recognition-based control system for an intelligent household cleaning robot, which offers more varied control modes and maintains efficient cleaning even when nobody is present. In the prior art, most cleaning robots carry out cleaning work on a fixed, regularly timed schedule, or carry out temporary instructions issued manually, and can perform no other cleaning work the rest of the time. In daily use, however, children or pets may still be at home while the adults are out, and their activity may knock over or spill objects; a prior-art cleaning scheme cannot respond to this, so the dirt eventually becomes difficult to clean, spreads odour, or damages the floor or carpet over time. The present system, based on sound monitoring and similar means, has intelligent area-scanning capability and can respond and clean as soon as an object falls. The conventional instruction here controls the robot's routine cleaning work, such as regularly patrolling to cover the room. In actual use the robot is always in a monitoring state, so when a sound signal is acquired it can be classified as normal speech or as some other sudden sound. If it is normal speech, the robot behaves as in the conventional execution mode: after a wake-up word is detected, it performs the cleaning task obtained through speech recognition. If it is a sudden sound, the robot scans in the direction the sound came from to confirm whether foreign matter is present, and cleans it up if so.
As another preferred embodiment of the present invention, the standby wakeup module 100 includes a positioning auxiliary unit;
the positioning auxiliary unit is used for detecting the monitoring direction of the sound signal through sensing equipment when the sound signal is monitored, and acquiring sensing information corresponding to the sound signal, wherein the sensing information is used for assisting the generation of the azimuth data, and the sensing information represents the existence of a biological source and a heat source.
This embodiment further describes the standby wakeup module 100. In practical use, because of factors such as sound intensity, the direction of a sound may not be acquired accurately from monitoring alone, preventing precise position determination; auxiliary positioning is therefore performed by means such as thermal sensing. In most cases, objects are knocked over and spilled by a person or an animal, so detecting a biological or heat source in this way helps determine the direction and position.
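The auxiliary positioning described above can be sketched as blending an acoustic bearing with a thermal bearing. This is a minimal illustration assuming both sensors report a bearing in degrees; the weight `w_heat` and the fusion formula are assumptions, not specified by the patent:

```python
import math

def fuse_azimuth(acoustic_deg, heat_deg, heat_detected, w_heat=0.6):
    """Refine the acoustic bearing with a thermal/biological-source bearing
    when one is detected; otherwise fall back to acoustics alone."""
    if not heat_detected:
        return acoustic_deg
    # Blend the two bearings on the unit circle so angles near the
    # 0/360 wrap-around are averaged correctly.
    ax, ay = math.cos(math.radians(acoustic_deg)), math.sin(math.radians(acoustic_deg))
    hx, hy = math.cos(math.radians(heat_deg)), math.sin(math.radians(heat_deg))
    x = (1 - w_heat) * ax + w_heat * hx
    y = (1 - w_heat) * ay + w_heat * hy
    return math.degrees(math.atan2(y, x)) % 360

# A heat source detected slightly off the acoustic estimate pulls the bearing toward it.
print(round(fuse_azimuth(80.0, 100.0, True), 1))
```

Blending on the unit circle (rather than averaging raw degrees) is the standard way to avoid the discontinuity at north.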
As shown in fig. 2, as another preferred embodiment of the present invention, the positioning and scanning module 500 includes an environment simulation unit 510, where the environment simulation unit 510 specifically includes:
and the environment memory subunit 511 is configured to update the regional environment information after the robot responds to the normal instruction or the temporary instruction and executes the control robot, where the regional environment information is used to represent the object distribution in the robot cleaning region.
And the environment comparison subunit 512 is configured to compare the scanning information with the area environment information to obtain the cleaning object information when the sector scanning area is scanned and covered or the conventional instruction is responded to.
And an information feedback subunit 513, configured to acquire image data of the cleaning object, and output the image data through a local area network.
This embodiment introduces the environment simulation unit 510 and describes its function. Its main role is that the cleaning robot scans and records the environment (mainly the cleaning surface) during each job, building up a map of the object distribution over the cleaning area. When the next scan differs from the recorded area, the difference is likely foreign matter, and a judgement is made accordingly; this effectively assists the scanning and acquisition of cleaning objects and improves efficiency. The difference image is also sent to the control terminal over the local area network, so that manual confirmation can assist in judging whether the foreign matter needs to be cleaned.
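The memory-and-comparison behaviour amounts to diffing a fresh scan against the remembered map of the cleaning surface. A minimal sketch, assuming an occupancy-grid representation (the grid format and the `find_foreign_objects` name are illustrative, not from the patent):

```python
def find_foreign_objects(remembered, scanned):
    """Compare the remembered cleaning-surface map with a fresh scan and
    return grid cells that newly contain something: these are candidate
    foreign matter to be judged and possibly cleaned."""
    return [cell for cell, occupied in scanned.items()
            if occupied and not remembered.get(cell, False)]

remembered = {(0, 0): False, (0, 1): True, (1, 0): False}   # (0, 1) is e.g. furniture
scan       = {(0, 0): True,  (0, 1): True, (1, 0): False}   # something new at (0, 0)
print(find_foreign_objects(remembered, scan))  # [(0, 0)] -> possible spill
```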
As another preferred embodiment of the present invention, the text information includes an execution object and an instruction content, the instruction content corresponds to a preset instruction execution library, the instruction content includes the normal instruction and the temporary instruction, and the normal instruction includes an instruction content and an instruction execution area.
Further, as shown in fig. 3, the speech recognition module 300 specifically includes:
a dividing unit 301, configured to perform division recognition on the sound signal by using a preset break time threshold, and generate a plurality of division marks.
A conversion unit 302, configured to perform recognition and conversion on the sound signal, and obtain text information, where the text information includes a plurality of segmentation markers.
The identifying unit 303 is configured to identify the text information through a preset object identification library and an instruction identification library, acquire an instruction keyword, and establish a behavior guidance instruction.
This embodiment describes the text information in detail. The cleaning robot does not need to fully understand the content of the speech; it only needs to extract from it the cleaning work to be executed, so only the relevant key vocabulary has to be recognised. When key vocabulary is extracted from the speech, sentences are divided using the break-time threshold, which avoids the misjudgement of cleaning instructions that would result from mixing the content of two sentences in one recognition pass, for example when two sentences direct two different cleaning devices to perform different cleaning work.
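The sentence division by break-time threshold can be sketched as follows, assuming the recogniser reports per-word start and end times; the threshold value and the `segment_by_pauses` name are illustrative, not specified by the patent:

```python
def segment_by_pauses(word_times, break_threshold=0.8):
    """Split a recognised word stream into sentences wherever the silent gap
    between consecutive words exceeds the break-time threshold (seconds).
    word_times is a list of (word, start_time, end_time) tuples."""
    sentences, current = [], []
    prev_end = None
    for word, start, end in word_times:
        if prev_end is not None and start - prev_end > break_threshold:
            sentences.append(current)   # gap too long: close the sentence
            current = []
        current.append(word)
        prev_end = end
    if current:
        sentences.append(current)
    return sentences

words = [("robot", 0.0, 0.4), ("clean", 0.5, 0.9),   # sentence 1
         ("mop", 2.2, 2.6), ("kitchen", 2.7, 3.2)]   # sentence 2, after a 1.3 s pause
print(segment_by_pauses(words))  # [['robot', 'clean'], ['mop', 'kitchen']]
```

Each resulting segment can then be matched against the instruction library independently, so keywords from different sentences are never mixed.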
As shown in fig. 4, the present invention also provides a long speech recognition-based human-computer interaction method, which includes the steps of:
s200, monitoring and acquiring sound signals in real time through a sound sensor, distinguishing the types of the sound signals, guiding to execute a voice recognition program if the distinguished types are voices, and guiding to execute a positioning scanning program if the distinguished types are non-voice contents, wherein the sound signals comprise a plurality of sound data and direction data corresponding to the sound data, and the direction data are used for representing the acquiring directions of the sound data.
S400, executing a voice recognition program, wherein the voice recognition program comprises the following steps: the voice signal is recognized to obtain text information, instruction keywords are recognized and obtained according to the text information, behavior guide instructions are established according to the instruction keywords, the behavior guide instructions comprise conventional instructions and temporary instructions, and the temporary instructions are used for guiding execution of a positioning scanning program.
S600, executing a positioning scanning program, wherein the positioning scanning program comprises the following steps: acquiring the azimuth data in the sound signal, generating a sector scanning area according to the azimuth data, performing coverage scanning on the sector scanning area, and acquiring cleaning object information, wherein the cleaning object information is used for representing the type, size and distribution of foreign matters.
And S800, analyzing the cleaning object information according to a preset cleaning execution scheme, generating a cleaning scheme and outputting the cleaning scheme, wherein the cleaning scheme is used for controlling the robot to operate.
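The sector scanning area of step S600 can be pictured as the set of grid cells whose bearing from the robot lies within a half-angle of the azimuth the sound came from. All parameters here (half-angle, radius, grid step) are illustrative assumptions; the patent does not fix them:

```python
import math

def sector_cells(robot_xy, azimuth_deg, half_angle_deg=30.0, radius=3.0, step=0.5):
    """Enumerate grid cells inside a sector scanning area centred on the
    azimuth of the sound source (angles in degrees, distances in metres)."""
    cx, cy = robot_xy
    cells = set()
    n = int(radius / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = i * step, j * step
            d = math.hypot(x, y)
            if d == 0 or d > radius:
                continue  # skip the robot's own cell and anything out of range
            bearing = math.degrees(math.atan2(y, x)) % 360
            diff = min(abs(bearing - azimuth_deg), 360 - abs(bearing - azimuth_deg))
            if diff <= half_angle_deg:
                cells.add((round(cx + x, 1), round(cy + y, 1)))
    return cells

area = sector_cells((0.0, 0.0), azimuth_deg=90.0)
print(len(area) > 0)  # True: the sector opens "north" of the robot
```

Coverage scanning then visits exactly these cells rather than the whole room, which is what makes the response to a sudden sound fast.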
As another preferred embodiment of the present invention, the method further comprises the steps of:
when the sound signal is monitored, detecting the monitoring direction of the sound signal through sensing equipment, and acquiring sensing information corresponding to the sound signal, wherein the sensing information is used for assisting the generation of the azimuth data, and the sensing information represents the existence of a biological source and a heat source.
As another preferred embodiment of the present invention, the step of performing coverage scanning on the sector scanning area to acquire information of a cleaning object specifically includes:
and after the robot responds to the conventional instruction or the temporary instruction and executes control of the robot, updating regional environment information, wherein the regional environment information is used for representing the object distribution condition of the cleaning region of the robot.
When the sector scanning area is scanned and covered or the conventional command is responded, the scanning information is compared with the area environment information to obtain the cleaning object information.
Acquiring image data of a cleaning object, and outputting the image data through a local area network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus DRAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A human-computer interaction system based on long speech recognition, comprising:
the standby awakening module is used for monitoring and acquiring sound signals in real time through a sound sensor, distinguishing the types of the sound signals, guiding to execute a voice recognition program if the distinguished types are voices, and guiding to execute a positioning scanning program if the distinguished types are non-voice contents, wherein the sound signals comprise a plurality of sound data and azimuth data corresponding to the sound data, and the azimuth data are used for representing the acquisition azimuth of the sound data;
a speech recognition module for executing a speech recognition program, the speech recognition program comprising the steps of: recognizing the sound signal to obtain text information, recognizing and obtaining instruction keywords according to the text information, and establishing a behavior guide instruction according to the instruction keywords, wherein the behavior guide instruction comprises a conventional instruction and a temporary instruction, and the temporary instruction is used for guiding and executing a positioning scanning program;
a positioning and scanning module for executing a positioning and scanning program, the positioning and scanning program comprising the steps of: acquiring the azimuth data in the sound signal, generating a sector scanning area according to the azimuth data, performing coverage scanning on the sector scanning area, and acquiring cleaning object information, wherein the cleaning object information is used for representing the type, size and distribution of foreign matters;
and the execution cleaning module is used for analyzing the cleaning object information according to a preset cleaning execution scheme, generating a cleaning scheme and outputting the cleaning scheme, wherein the cleaning scheme is used for controlling the robot to operate.
2. The human-computer interaction system based on long voice recognition of claim 1, wherein the standby wake-up module comprises a positioning assistance unit;
and the positioning auxiliary unit is used for detecting the monitoring direction of the sound signal through sensing equipment when the sound signal is monitored, and acquiring sensing information corresponding to the sound signal, wherein the sensing information is used for assisting the generation of the azimuth data, and the sensing information represents the existence of a biological source and a heat source.
3. The human-computer interaction system based on long speech recognition of claim 2, wherein the location scanning module comprises an environment simulation unit, and the environment simulation unit specifically comprises:
the environment memory subunit is used for updating regional environment information after the robot responds to the conventional instruction or the temporary instruction and executes the corresponding control, wherein the regional environment information is used for representing the object distribution condition of the robot cleaning region;
the environment comparison subunit is used for comparing the scanning information with the area environment information to acquire the cleaning object information when the sector scanning area is scanned and covered or the conventional instruction is responded;
and the information feedback subunit is used for acquiring the image data of the cleaning object and outputting the image data through a local area network.
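The environment comparison subunit amounts to diffing a fresh scan against the stored regional environment. A minimal sketch, assuming both are occupancy maps keyed by grid cell (a representation the patent does not specify):

```python
def find_foreign_objects(scan, baseline):
    """Compare a fresh scan grid with the stored regional environment
    grid. Cells occupied now but free in the baseline are treated as
    foreign matter. Grids are dicts mapping (x, y) -> 0/1 occupancy."""
    return sorted(cell for cell, occupied in scan.items()
                  if occupied and not baseline.get(cell, 0))
```

Cells already occupied in the baseline (furniture, walls) are ignored, so only newly appeared objects are reported as cleaning targets.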
4. The human-computer interaction system based on long voice recognition, wherein the text information comprises an execution object and instruction content, the instruction content corresponds to a preset instruction execution library and comprises the conventional instruction and the temporary instruction, and the conventional instruction comprises instruction content and an instruction execution area.
5. The human-computer interaction system based on long voice recognition of claim 4, wherein the voice recognition module specifically comprises:
a segmentation unit, which is used for segmenting the sound signal according to a preset pause-duration threshold to generate a plurality of segmentation marks;
a conversion unit, which is used for recognizing and converting the sound signal to obtain text information, the text information comprising the plurality of segmentation marks;
and an identification unit, which is used for identifying the text information through a preset object identification library and an instruction identification library, acquiring instruction keywords, and establishing a behavior guide instruction.
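The segmentation unit's pause-threshold splitting might look like the following energy-based sketch. The frame duration, energy floor, and 0.3 s pause threshold are illustrative values, not figures from the patent:

```python
def segment_by_pauses(frames, pause_threshold=0.3, frame_dur=0.05,
                      energy_floor=0.01):
    """Split a long utterance into segments wherever a low-energy run
    exceeds `pause_threshold` seconds. `frames` is a list of per-frame
    energies; returns (start, end) frame-index pairs."""
    min_silent = round(pause_threshold / frame_dur)
    segments, start, silent = [], None, 0
    for i, energy in enumerate(frames):
        if energy > energy_floor:
            if start is None:
                start = i        # speech begins
            silent = 0
        elif start is not None:
            silent += 1
            if silent >= min_silent:
                segments.append((start, i - silent + 1))
                start, silent = None, 0
    if start is not None:
        segments.append((start, len(frames) - silent))
    return segments
```

Each returned pair would receive one segmentation mark, letting a long utterance carry several instructions through the conversion unit.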
6. A human-computer interaction method based on long voice recognition, characterized by comprising the following steps:
monitoring and acquiring a sound signal in real time through a sound sensor and distinguishing the type of the sound signal: if the type is voice, executing a voice recognition program; if the type is non-voice content, executing a positioning and scanning program; wherein the sound signal comprises a plurality of sound data and azimuth data corresponding to the sound data, and the azimuth data represents the acquisition azimuth of the sound data;
executing the voice recognition program, the voice recognition program comprising the steps of: recognizing the sound signal to obtain text information, identifying instruction keywords from the text information, and establishing a behavior guide instruction according to the instruction keywords, wherein the behavior guide instruction comprises a conventional instruction and a temporary instruction, and the temporary instruction is used for guiding execution of the positioning and scanning program;
executing the positioning and scanning program, the positioning and scanning program comprising the steps of: acquiring the azimuth data in the sound signal, generating a sector scanning area according to the azimuth data, performing coverage scanning on the sector scanning area, and acquiring cleaning object information, wherein the cleaning object information represents the type, size and distribution of foreign matter;
and analyzing the cleaning object information according to a preset cleaning execution scheme, generating a cleaning scheme, and outputting the cleaning scheme, wherein the cleaning scheme is used for controlling operation of the robot.
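Taken together, the method's routing logic (speech goes to recognition, non-speech or a temporary instruction triggers the positioning scan) reduces to a small dispatcher. The dictionary shapes of `signal` and the recognized instruction are assumptions for illustration:

```python
def handle_sound_signal(signal, recognize, scan):
    """Route a monitored sound signal per the claimed method:
    - voice        -> run the recognition program; a 'temporary'
                      instruction then triggers a positioning scan
    - non-voice    -> go straight to the positioning scan
    `recognize` and `scan` stand in for the two claimed programs."""
    if signal["type"] == "voice":
        instruction = recognize(signal["data"])
        if instruction["kind"] == "temporary":
            return scan(signal["azimuth"])
        return instruction            # conventional instruction: execute as-is
    return scan(signal["azimuth"])    # e.g. a crash or falling-object noise
```

Both branches end in the same scanning program, which matches the claim's point: a non-speech noise is treated as evidence of something to clean at that azimuth.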
7. The human-computer interaction method based on long voice recognition according to claim 6, further comprising the step of:
when the sound signal is detected, detecting the direction of the sound signal through a sensing device and acquiring sensing information corresponding to the sound signal, wherein the sensing information is used for assisting generation of the azimuth data and indicates the presence of a biological source and a heat source.
8. The human-computer interaction method based on long voice recognition according to claim 7, wherein the step of performing coverage scanning on the sector scanning area and acquiring the cleaning object information specifically comprises:
updating regional environment information after the robot responds to and executes the conventional instruction or the temporary instruction, wherein the regional environment information represents the object distribution of the robot's cleaning region;
when coverage scanning of the sector scanning area is completed or the conventional instruction is responded to, comparing scanning information with the regional environment information to acquire the cleaning object information;
and acquiring image data of the cleaning object, and outputting the image data over a local area network.
CN202210931371.6A 2022-08-04 2022-08-04 Man-machine interaction method and system based on long voice recognition Pending CN115019799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210931371.6A CN115019799A (en) 2022-08-04 2022-08-04 Man-machine interaction method and system based on long voice recognition


Publications (1)

Publication Number Publication Date
CN115019799A true CN115019799A (en) 2022-09-06

Family

ID=83066207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210931371.6A Pending CN115019799A (en) 2022-08-04 2022-08-04 Man-machine interaction method and system based on long voice recognition

Country Status (1)

Country Link
CN (1) CN115019799A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504546A * 2008-12-12 2009-08-12 University of Science and Technology Beijing Children robot posture tracking apparatus
CN104414590A * 2013-08-23 2015-03-18 LG Electronics Inc. Robot cleaner and method for controlling a robot cleaner
CN105411491A * 2015-11-02 2016-03-23 Sun Yat-sen University Home intelligent cleaning system and method based on environment monitoring
CN106328132A * 2016-08-15 2017-01-11 Goertek Inc. Voice interaction control method and device for intelligent equipment
CN106970906A * 2016-01-14 2017-07-21 Yutou Technology (Hangzhou) Co., Ltd. Sentence-segmentation-based semantic analysis method
CN109998421A * 2018-01-05 2019-07-12 iRobot Corporation Mobile cleaning robot teaming and persistent mapping
CN110123199A * 2018-02-08 2019-08-16 Toshiba Lifestyle Products and Services Corporation Self-propelled electric vacuum cleaner
CN110772177A * 2018-07-27 2020-02-11 Panasonic Intellectual Property Corporation of America Information processing method, information processing apparatus, and recording medium
CN210414558U * 2019-06-24 2020-04-28 Zhongde Intelligent (Guangzhou) Optical Technology Co., Ltd. Robot
CN111643010A * 2020-05-26 2020-09-11 Shenzhen Shanchuan Robot Co., Ltd. Cleaning robot control method and device, cleaning robot and storage medium
CN112890681A * 2019-11-19 2021-06-04 Zhuhai Amicro Semiconductor Co., Ltd. Voice control method, Bluetooth headset, cleaning robot and control system
CN113080768A * 2019-12-23 2021-07-09 Foshan Viomi Electrical Technology Co., Ltd. Sweeper control method, sweeper control equipment and computer readable storage medium
CN113226667A * 2018-12-26 2021-08-06 Samsung Electronics Co., Ltd. Cleaning robot and method for performing task thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Qian Si et al., "Design of a novel intelligent cleaning robot measurement and control system", Machinery & Electronics *

Similar Documents

Publication Publication Date Title
CN107919121B (en) Control method and device of intelligent household equipment, storage medium and computer equipment
CN107908116B (en) Voice control method, intelligent home system, storage medium and computer equipment
CN109559742B (en) Voice control method, system, storage medium and computer equipment
CN107808669B (en) Voice control method, intelligent home system, storage medium and computer equipment
Litman et al. Automatic detection of poor speech recognition at the dialogue level
CN104102181B (en) Intelligent home control method, device and system
CN106845624A (en) The multi-modal exchange method relevant with the application program of intelligent robot and system
CN104123939A (en) Substation inspection robot based voice interaction control method
WO2019148491A1 (en) Human-computer interaction method and device, robot, and computer readable storage medium
JP2017010518A (en) Control system, method, and device for intelligent robot based on artificial intelligence
CN109373518B (en) Air conditioner and voice control device and voice control method thereof
CN111599361A (en) Awakening method and device, computer storage medium and air conditioner
US11393490B2 (en) Method, apparatus, device and computer-readable storage medium for voice interaction
CN110738994A (en) Control method, device, robot and system for smart homes
CN113611305A (en) Voice control method, system, device and medium in autonomous learning home scene
CN106845628A (en) The method and apparatus that robot generates new command by internet autonomous learning
JPWO2018087971A1 (en) MOBILE BODY CONTROL DEVICE AND MOBILE BODY CONTROL PROGRAM
CN112767939A (en) Intelligent device awakening method and device, computer device and storage medium
CN110516568A (en) A kind of more contextual data management methods of colleges and universities based on recognition of face and system
KR20110003811A (en) Interactable robot
CN106681323A (en) Interactive output method used for robot and the robot
CN115019799A (en) Man-machine interaction method and system based on long voice recognition
CN109434827B (en) Companion robot control method, system, mobile terminal and storage medium
CN107247923A (en) A kind of instruction identification method, device, storage device, mobile terminal and electrical equipment
CN112634897B (en) Equipment awakening method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220906