CN113386147A - Voice system based on object recognition - Google Patents

Voice system based on object recognition

Info

Publication number
CN113386147A
CN113386147A (application CN202110437514.3A; granted publication CN113386147B)
Authority
CN
China
Prior art keywords
sound
module
information
voiceprint
expelling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110437514.3A
Other languages
Chinese (zh)
Other versions
CN113386147B (en)
Inventor
周刚
屠楚明
黄杰
周健
周冰
王聃
尹琪
袁均祥
朱奕琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202110437514.3A
Publication of CN113386147A
Application granted
Publication of CN113386147B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01M: CATCHING, TRAPPING OR SCARING OF ANIMALS; APPARATUS FOR THE DESTRUCTION OF NOXIOUS ANIMALS OR NOXIOUS PLANTS
    • A01M29/00: Scaring or repelling devices, e.g. bird-scaring apparatus
    • A01M29/06: Scaring or repelling devices using visual means, e.g. scarecrows, moving elements, specific shapes, patterns or the like
    • A01M29/10: Scaring or repelling devices using visual means with light sources, e.g. lasers or flashing lights
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/22: Interactive procedures; Man-machine interfaces
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Insects & Arthropods (AREA)
  • Pest Control & Pesticides (AREA)
  • Wood Science & Technology (AREA)
  • Zoology (AREA)
  • Environmental Sciences (AREA)
  • Optics & Photonics (AREA)
  • Birds (AREA)
  • General Health & Medical Sciences (AREA)
  • Catching Or Destruction (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The invention discloses a voice system based on object recognition. The system comprises a first sound acquisition module that collects environmental sound information and is connected with a processing module; a marking module that marks the sound information of an eviction object or a voice-interaction object and is connected with the processing module; and a second sound acquisition module that collects the marked sound information and is connected with the marking module. The first sound acquisition module captures environmental sound over a wide range, picking up the voices of workers or the sounds of small animals entering the substation, and the second sound acquisition module then tracks the collected sound, making acquisition and recognition more accurate and data processing faster. The processing module directs light irradiation according to the predicted movement of a small animal, so the animal can be expelled quickly.

Description

Voice system based on object recognition
Technical Field
The invention relates to the technical field of robots, in particular to a voice system based on object recognition.
Background
A robot is a machine that performs work automatically. It can accept human commands, run pre-programmed routines, and act according to rules formulated with artificial-intelligence techniques. As robotics has advanced with society, robots have become increasingly common and come in many designs.
Mindful of the safety of power equipment, the State Grid uses inspection robots to patrol substation equipment, which greatly improves inspection efficiency and safety. During a patrol, however, small animals may enter the substation. They seriously affect the operational safety of the substation equipment and interfere with the robot's inspection route, even causing the route to deviate and some damage to occur.
Current inspection robots have a degree of human-machine interaction capability, but from the standpoint of speech recognition they can hardly react differently to different recognized objects, and they cannot expel small animals that disturb the robot's inspection route.
For example, Chinese patent CN201610107985.7 discloses a combined voice and image interactive execution method and system for a robot. That method combines different recognition technologies to exploit their respective strengths and offset their weaknesses, improving the precision and robustness of user-command recognition, and it realizes user voice-command recognition by combining speech recognition with face detection and recognition. Its shortcoming is that the robot cannot react appropriately both when the object is a human and when the object is an animal.
Disclosure of Invention
The invention mainly solves the problem that prior-art inspection robots support only simple human-machine interaction and cannot expel animals. It provides a voice system based on object recognition that is applied to an inspection robot and performs either animal eviction or human-machine voice interaction according to the recognized object, giving the inspection robot more comprehensive functions and further improving the safety of substation equipment.
The technical problem of the invention is mainly solved by the following technical scheme. A voice system based on object recognition is applied to a substation inspection robot and comprises: a first sound acquisition module, which collects environmental sound information and is connected with the processing module; a marking module, which marks the sound information of an eviction object or a voice-interaction object and is connected with the processing module; a second sound acquisition module, which collects the marked sound information and is connected with the marking module; the processing module, which judges from the environmental sound information collected by the first sound acquisition module whether to perform object eviction or voice interaction, driving the light irradiation module to expel the object at the position of the marked information collected by the second sound acquisition module in the eviction case, and performing human-machine voice interaction through the voice interaction module in the interaction case; a light irradiation module, which expels objects by light irradiation and is connected with the processing module; and a voice interaction module, which performs human-machine voice interaction and is connected with the processing module. The first sound acquisition module collects environmental sound over a wide range, capturing the voices of workers or the sounds of small animals entering the substation; the second sound acquisition module tracks the collected sound, making acquisition and recognition more accurate and data processing faster; and the light irradiation module achieves precise eviction of the small animals.
Preferably, the first sound acquisition module comprises a first sound pickup and a first signal filtering module. The input end of the first sound pickup collects environmental sound information, its output end is connected with the input end of the first signal filtering module, the output end of the first signal filtering module is connected with the processing module, and the first signal filtering module filters out environmental noise. The first sound acquisition module gathers many sound sources, and transmitting them all would slow the processing module's data handling; filtering the environmental noise in advance through the first signal filtering module raises the processing module's data-processing speed and improves working efficiency.
Preferably, the second sound acquisition module comprises a second sound pickup and a second signal filtering module. The input end of the second sound pickup collects the marked sound information, its output end is connected with the input end of the second signal filtering module, and the output end of the second signal filtering module is connected with the marking module. The marking module compares the sound-wave signal transmitted by the second signal filtering module with the marked sound-wave signal: if they are consistent, the signal is forwarded to the processing module; otherwise the second sound pickup is controlled to re-collect the marked sound information. The second signal filtering module filters out all sound-wave signals other than the marked one, and the marking module confirms and compares the signal before passing it to the processing module, which reduces the processing module's data-processing load and makes signal acquisition more accurate.
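The compare-and-forward step of the marking module can be sketched as follows. The patent does not specify a similarity measure, so normalized cross-correlation is assumed here; the function name and threshold are illustrative, not part of the patent.

```python
# Sketch of the marking module's comparison step (assumption: normalized
# cross-correlation as the similarity measure, since the patent names none).
# A filtered signal from the second sound pickup is forwarded only when it
# matches the marked signal closely enough; otherwise the caller should
# trigger re-acquisition.
import math

def matches_mark(signal, marked, threshold=0.9):
    """Return True when the normalized cross-correlation of the two
    equal-length sample sequences reaches the threshold."""
    num = sum(a * b for a, b in zip(signal, marked))
    den = math.sqrt(sum(a * a for a in signal)) * math.sqrt(sum(b * b for b in marked))
    return den > 0 and num / den >= threshold
```

A match forwards the signal to the processing module; a mismatch re-triggers collection by the second sound pickup.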
Preferably, the method run in the processing module for judging between object eviction and voice interaction comprises the following steps:
S1: establishing voiceprint recognition tables for humans and animals;
S2: acquiring the environmental sound information collected by the first sound acquisition module and extracting human voiceprint information and animal voiceprint information from it;
S3: with reference to the human and animal voiceprint recognition tables, performing voice interaction if human voiceprint information is extracted from the environmental sound; performing object eviction if animal voiceprint information is extracted but no human voiceprint information is; and taking no action if neither human nor animal voiceprint information is extracted. The judgment relies on the differences between human and animal voiceprint features, quickly determining whether a human voice is present in the environmental sound and thus whether to evict or interact.
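The decision rule of steps S1 through S3 can be sketched as follows. Voiceprint extraction and the recognition tables are stubbed with simple label sets; all names are illustrative, not part of the patent.

```python
# Minimal sketch of the S1-S3 decision rule: a human voiceprint triggers
# interaction, an animal-only voiceprint triggers eviction, and silence
# (neither table matches) triggers nothing. The tables stand in for the
# voiceprint recognition tables built in S1.

def decide_action(extracted, human_table, animal_table):
    """S3: interact when a human voiceprint is found, evict when only an
    animal voiceprint is found, otherwise take no action."""
    has_human = any(v in human_table for v in extracted)
    has_animal = any(v in animal_table for v in extracted)
    if has_human:
        return "voice_interaction"
    if has_animal:
        return "object_eviction"
    return "no_action"
```

Note that a human voiceprint takes precedence even when an animal sound is present in the same capture, matching the "animal extracted but no human extracted" condition for eviction.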
Preferably, in step S3, before voice interaction is performed once a person's voiceprint information has been extracted from the environmental sound, the method further comprises:
S31: establishing a normal voiceprint recognition table and an abnormal voiceprint recognition table for authorized personnel;
S32: performing voice interaction if the extracted voiceprint information exists in the normal or abnormal voiceprint recognition table of an authorized person, and otherwise taking no action. When a person's voiceprint is detected, checking whether it belongs to an authorized person improves the intelligence and security of the inspection robot and also safeguards the substation equipment.
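The S31/S32 authorization check reduces to a membership test against the two tables, as in this sketch; the table contents are illustrative placeholders.

```python
# Sketch of the S31/S32 check: a detected human voiceprint triggers
# interaction only if it appears in an authorized person's normal table OR
# their abnormal table (cough, runny nose, inflamed tonsils). Labels are
# placeholders, not real voiceprint data.

def interaction_allowed(voiceprint, normal_table, abnormal_table):
    """S32: accept the speaker if either table contains the voiceprint."""
    return voiceprint in normal_table or voiceprint in abnormal_table
```

An unrecognized speaker thus produces no action, even though a human voice was detected in S3.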
Preferably, the abnormal voiceprints in the abnormal voiceprint recognition table include the voiceprint information of an authorized person while coughing, while having a runny nose, and while suffering from inflamed tonsils. A person's voiceprint changes under many influences, and it fluctuates strongly during coughing, a runny nose, or tonsillitis; collecting and recording these as abnormal voiceprint segments to form the abnormal voiceprint recognition table improves the accuracy of voiceprint recognition.
Preferably, the object-eviction method run in the processing module comprises the following steps:
A1: acquiring the coordinates of the first sound acquisition module and of the second sound acquisition module;
A2: acquiring the time T1 at which the first sound acquisition module first collects a sound-wave signal of the eviction object;
A3: acquiring the time T2 at which the second sound acquisition module first collects a sound-wave signal of the eviction object;
A4: locating the eviction object from the times T1 and T2, the propagation speed of sound in air, and the coordinates of the two sound acquisition modules;
A5: judging the species of the eviction object from the voiceprint information collected by the first sound acquisition module, and obtaining the instantaneous moving speed V of that species;
A6: acquiring the eviction object's sound-wave signal a second time after a period t, and calculating the distance the object has moved during t;
A7: predicting the moving distance and direction of the eviction object from the result of step A6 and performing irradiation eviction through the light irradiation module. The destination of the small animal in the next stage is predicted from its current position information, so the light irradiation module can illuminate the animal accurately and expel it.
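Two of the quantities used in steps A2 through A6 can be sketched numerically. The full two-pickup position fix of A4 is left to prior-art methods; shown here are the range difference that such a fix starts from and the A6 distance estimate from a species' tabulated instantaneous speed. The species speed table is an illustrative assumption, not data from the patent.

```python
# Sketch of quantities in steps A2-A6. The range difference c*(T2 - T1)
# constrains the source position (the A4 fix itself follows prior art),
# and the species' instantaneous speed gives the A6 distance estimate.
SPEED_OF_SOUND = 343.0  # m/s in air, approximate

# Illustrative per-species instantaneous speeds (m/s); not from the patent.
SPECIES_SPEED = {"mouse": 2.5, "sparrow": 11.0, "rabbit": 10.0}

def range_difference(t1, t2):
    """Extra path length from the source to the second pickup (A4 input)."""
    return SPEED_OF_SOUND * (t2 - t1)

def distance_moved(species, t):
    """A6: distance the eviction object covers during the period t."""
    return SPECIES_SPEED[species] * t
```

For instance, a 10 ms arrival-time difference corresponds to about 3.43 m of extra path, and a mouse moving at 2.5 m/s covers 5 m in a 2 s period.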
Preferably, the method for predicting the moving direction of the eviction object in step A7 is:

v_pred = γ · v_enemy + θ · v_food

where v_pred is the predicted direction vector; v_enemy is the direction vector of the eviction object away from its natural enemy, and γ is the weight coefficient of the object's fear of that enemy; v_food is the direction vector of the eviction object toward food, and θ is the weight coefficient of the object's temptation by food.
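The direction prediction above can be sketched in two dimensions as a weighted vector sum; the renormalization step and all names are illustrative assumptions layered on the patent's formula.

```python
# Sketch of the A7 direction prediction: weighted sum of the unit vector
# away from the natural enemy (fear coefficient gamma) and the unit vector
# toward food (temptation coefficient theta), renormalized to a unit
# direction. Coefficient values are illustrative.
import math

def predict_direction(away_from_enemy, toward_food, gamma, theta):
    ex, ey = away_from_enemy
    fx, fy = toward_food
    vx = gamma * ex + theta * fx
    vy = gamma * ey + theta * fy
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm) if norm else (0.0, 0.0)
```

With equal coefficients, an animal fleeing east from its natural enemy while food lies to the north is predicted to head northeast.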
Preferably, the voice interaction module comprises a display screen and a loudspeaker, both connected with the processing module. During voice interaction, the worker's voice is input through the second sound acquisition module, speech is output from the loudspeaker, and graphics, text, or video are output through the display screen; during object eviction, the loudspeaker can also play the sound of the eviction object's natural enemy to assist the eviction.
The invention has the following beneficial effects. Based on the speech-recognition result, the inspection robot chooses between voice interaction and object eviction, giving it more comprehensive functions and further improving the safety of substation equipment. The first sound acquisition module collects environmental sound over a wide range, capturing the voices of workers or the sounds of small animals entering the substation; the second sound acquisition module tracks the collected sound, making acquisition and recognition more accurate and data processing faster; and the processing module directs light irradiation according to the predicted movement of the small animal, so the animal can be expelled rapidly.
Drawings
Fig. 1 is a connection block diagram of a speech system of an embodiment of the present invention.
Fig. 2 is a flowchart of an object eviction or voice interaction determination method according to an embodiment of the present invention.
In the figure, 1 is a processing module, 2 is a first sound acquisition module, 3 is a marking module, 4 is a second sound acquisition module, 5 is a light irradiation module, and 6 is a voice interaction module.
Detailed Description
The technical scheme of the invention is further described below with reference to specific embodiments and the accompanying drawings.
Embodiment: a voice system based on object recognition, applied to a substation inspection robot, comprises a first sound acquisition module 2, which collects environmental sound information and is connected with the processing module 1; a marking module 3, which marks the sound information of an eviction object or a voice-interaction object and is connected with the processing module; a second sound acquisition module 4, which collects the marked sound information and is connected with the marking module; the processing module 1, which judges from the environmental sound information collected by the first sound acquisition module whether to perform object eviction or voice interaction, driving the light irradiation module 5 to expel the object at the position of the marked information collected by the second sound acquisition module in the eviction case, and performing human-machine voice interaction through the voice interaction module 6 in the interaction case; the light irradiation module 5, which expels objects by light irradiation and is connected with the processing module; and the voice interaction module 6, which performs human-machine voice interaction and is connected with the processing module.
The first sound acquisition module collects environmental sound over a wide range, capturing the voices of workers or the sounds of small animals entering the substation; the second sound acquisition module tracks the collected sound, making acquisition and recognition more accurate and data processing faster; and the light irradiation module achieves precise eviction of the small animals.
The first sound acquisition module comprises a first sound pickup and a first signal filtering module. The input end of the first sound pickup collects environmental sound information, its output end is connected with the input end of the first signal filtering module, the output end of the first signal filtering module is connected with the processing module, and the first signal filtering module filters out environmental noise.
The second sound acquisition module comprises a second sound pickup and a second signal filtering module. The input end of the second sound pickup collects the marked sound information, its output end is connected with the input end of the second signal filtering module, and the output end of the second signal filtering module is connected with the marking module. The marking module compares the sound-wave signal transmitted by the second signal filtering module with the marked sound-wave signal: if they are consistent, the signal is forwarded to the processing module; otherwise the second sound pickup is controlled to re-collect the marked sound information.
The voice interaction module comprises a display screen and a loudspeaker, both connected with the processing module. During voice interaction, the worker's voice is input through the second sound acquisition module, speech is output from the loudspeaker, and graphics, text, or video are output through the display screen.
As shown in fig. 2, the method run in the processing module for judging between object eviction and voice interaction comprises the following steps:
S1: establishing voiceprint recognition tables for humans and animals;
S2: acquiring the environmental sound information collected by the first sound acquisition module and extracting human voiceprint information and animal voiceprint information from it;
S3: with reference to the human and animal voiceprint recognition tables, performing voice interaction if human voiceprint information is extracted from the environmental sound; performing object eviction if animal voiceprint information is extracted but no human voiceprint information is; and taking no action if neither is extracted. Before voice interaction is performed once a person's voiceprint information has been extracted, the method further comprises:
S31: establishing a normal voiceprint recognition table and an abnormal voiceprint recognition table for authorized personnel;
S32: performing voice interaction if the extracted voiceprint information exists in the normal or abnormal voiceprint recognition table of an authorized person, and otherwise taking no action. The abnormal voiceprints in the abnormal voiceprint recognition table include the voiceprint information of an authorized person while coughing, while having a runny nose, and while suffering from inflamed tonsils.
The object-eviction method run in the processing module comprises the following steps:
A1: acquiring the coordinates of the first sound acquisition module and of the second sound acquisition module;
A2: acquiring the time T1 at which the first sound acquisition module first collects a sound-wave signal of the eviction object;
A3: acquiring the time T2 at which the second sound acquisition module first collects a sound-wave signal of the eviction object;
A4: locating the eviction object from the times T1 and T2, the propagation speed of sound in air, and the coordinates of the two sound acquisition modules; the specific positioning formulas and methods exist in the prior art and are not repeated here;
A5: judging the species of the eviction object from the voiceprint information collected by the first sound acquisition module, and obtaining the instantaneous moving speed V of that species;
A6: acquiring the eviction object's sound-wave signal a second time after a period t, and calculating the distance the object has moved during t;
A7: predicting the moving distance and direction of the eviction object from the result of step A6, performing irradiation eviction through the light irradiation module, and playing the sound of the eviction object's natural enemy through the loudspeaker to assist the eviction.
The method for predicting the moving direction of the eviction object is:

v_pred = γ · v_enemy + θ · v_food

where v_pred is the predicted direction vector; v_enemy is the direction vector of the eviction object away from its natural enemy, and γ is the weight coefficient of the object's fear of that enemy; v_food is the direction vector of the eviction object toward food, and θ is the weight coefficient of the object's temptation by food.
In a specific application, the voice system is installed on an inspection robot that patrols substation equipment along an inspection route. The first sound acquisition module collects environmental sound over a wide range, capturing the voices of workers or the sounds of small animals entering the substation. When the processing module detects such a voice or sound in the collected audio, it runs the eviction-or-interaction judgment method. If the judgment is object eviction, the detected sound wave is sent to the marking module for marking, the second sound acquisition module tracks it, and the processing module runs the object-eviction method. Most animals entering a substation are small ones, such as mice, rabbits, and sparrows, whose movement is fast and brief; if the light were aimed only at the position fixed by the first and second sound acquisition modules, the animal would often have left it already. Predicting the movement according to the movement patterns of the different species and aiming the light at the predicted point therefore makes the animal eviction more accurate. If the judgment is voice interaction, the robot interacts with the worker through the loudspeaker, the display screen, and the second sound acquisition module, making the inspection robot more intelligent and its functions more comprehensive, and further safeguarding the substation equipment.
The above-described embodiments are only preferred embodiments of the present invention and do not limit it in any way; other variations and modifications may be made without departing from the scope of the invention as set forth in the claims.

Claims (9)

1. A voice system based on object recognition, applied to a substation inspection robot, characterized by comprising:
a first sound acquisition module, which collects environmental sound information and is connected with the processing module;
a marking module, which marks the sound information of an eviction object or a voice-interaction object and is connected with the processing module;
a second sound acquisition module, which collects the marked sound information and is connected with the marking module;
the processing module, which judges from the environmental sound information collected by the first sound acquisition module whether to perform object eviction or voice interaction, drives the light irradiation module to expel the object at the position of the marked information collected by the second sound acquisition module in the eviction case, and performs human-machine voice interaction through the voice interaction module in the interaction case;
a light irradiation module, which expels objects by light irradiation and is connected with the processing module; and
a voice interaction module, which performs human-machine voice interaction and is connected with the processing module.
2. The voice system based on object recognition according to claim 1, wherein
the first sound acquisition module comprises a first sound pickup and a first signal filtering module; the input end of the first sound pickup collects environmental sound information, its output end is connected with the input end of the first signal filtering module, the output end of the first signal filtering module is connected with the processing module, and the first signal filtering module filters out environmental noise.
3. The voice system based on object recognition according to claim 1 or 2, wherein
the second sound acquisition module comprises a second sound pickup and a second signal filtering module; the input end of the second sound pickup collects the marked sound information, its output end is connected with the input end of the second signal filtering module, and the output end of the second signal filtering module is connected with the marking module; the marking module compares the sound-wave signal transmitted by the second signal filtering module with the marked sound-wave signal, forwards it to the processing module if they are consistent, and otherwise controls the second sound pickup to re-collect the marked sound information.
4. An object recognition based speech system according to claim 1 or 2,
the method, run by the processing module, for judging whether to perform object expelling or voice interaction specifically comprises the following steps:
s1: establishing voiceprint recognition tables for humans and animals;
s2: acquiring the environmental sound information collected by the first sound collection module, and extracting human voiceprint information and animal voiceprint information from the environmental sound information;
s3: with reference to the human and animal voiceprint recognition tables, performing voice interaction if human voiceprint information is extracted from the environmental sound, performing object expelling if animal voiceprint information but no human voiceprint information is extracted from the environmental sound, and taking no action if neither human voiceprint information nor animal voiceprint information can be extracted from the environmental sound.
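The decision in steps S1-S3 can be sketched as follows; representing the voiceprint tables as sets of voiceprint IDs is a hypothetical simplification, since the claim leaves the voiceprint encoding unspecified:

```python
def decide_action(env_voiceprints, human_table, animal_table):
    """Steps S1-S3: choose between voice interaction, object expelling,
    or no action, based on which known voiceprints appear in the
    environmental sound.

    `env_voiceprints` is the set of voiceprint IDs extracted from the
    environment; `human_table` / `animal_table` are the recognition
    tables of step S1 (hypothetical set-of-IDs representation).
    """
    has_human = bool(env_voiceprints & human_table)
    has_animal = bool(env_voiceprints & animal_table)
    if has_human:
        return "voice_interaction"   # a person is present: interact
    if has_animal:
        return "object_expelling"    # only animals detected: expel
    return "no_action"               # neither detected
```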
5. The speech system based on object recognition according to claim 4,
in step S3, before performing voice interaction upon extracting the voiceprint information of a person from the environmental sound, the method further comprises:
s31: establishing a normal voiceprint recognition table and an abnormal voiceprint recognition table of authorized personnel;
s32: performing voice interaction if the extracted voiceprint information of the person exists in the normal voiceprint recognition table or the abnormal voiceprint recognition table of the authorized personnel, and otherwise taking no action.
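Steps S31-S32 reduce to a membership test against the two tables of authorized personnel; a minimal sketch using a hypothetical set-of-voiceprint-IDs representation (the claim does not specify how voiceprints are stored or matched):

```python
def should_interact(person_voiceprint, normal_table, abnormal_table):
    """Steps S31-S32: interact only when the extracted human voiceprint
    appears in either the normal or the abnormal voiceprint recognition
    table of authorized personnel (the abnormal table covering coughing,
    runny nose, inflamed tonsils, etc., per claim 6).
    """
    return person_voiceprint in normal_table or person_voiceprint in abnormal_table
```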
6. The speech system based on object recognition according to claim 5,
the abnormal voiceprints in the abnormal voiceprint recognition table comprise voiceprint information of an authorized person coughing, voiceprint information of an authorized person with a runny nose, and voiceprint information of an authorized person with inflamed tonsils.
7. The speech system based on object recognition according to claim 1,
the object expelling method run by the processing module specifically comprises the following steps:
a1: acquiring the coordinates of the first sound collection module and the coordinates of the second sound collection module;
a2: acquiring the time T1 at which the first sound collection module collects the first sound wave signal of the expelled object;
a3: acquiring the time T2 at which the second sound collection module collects the first sound wave signal of the expelled object;
a4: positioning the expelled object according to the times T1 and T2, the propagation speed of sound waves in air, the coordinates of the first sound collection module and the coordinates of the second sound collection module;
a5: judging the species of the expelled object according to the voiceprint information of the expelled object collected by the first sound collection module, and acquiring the instantaneous moving speed V of the expelled object according to its species;
a6: collecting the sound wave signal of the expelled object a second time after a time period t, and calculating the distance moved by the expelled object during the time period t;
a7: predicting the moving distance and direction of the expelled object according to the calculation result of step A6, and performing irradiation expelling through the light irradiation module.
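Steps A1-A4 amount to time-difference-of-arrival localization. With only two microphones the arrival-time difference constrains the source to a hyperbola, so the sketch below (illustrative only; function names and the one-dimensional simplification are assumptions, not the patent's method) assumes the object lies on the line between the two modules:

```python
SPEED_OF_SOUND = 343.0  # approximate speed of sound in air, m/s

def locate_on_axis(x1, x2, t1, t2, c=SPEED_OF_SOUND):
    """Steps A1-A4, simplified to one dimension: the object at position
    x between modules at x1 and x2 (x1 < x2) satisfies
    (x - x1) - (x2 - x) = c * (t1 - t2), hence the closed form below.
    t1 / t2 are the arrival times at the first / second module.
    """
    return (x1 + x2 + c * (t1 - t2)) / 2.0

def moved_distance(v, t):
    """Step A6: distance covered during time period t at the species'
    instantaneous moving speed v (looked up by species type in A5)."""
    return v * t
```

Using c=1.0 makes the geometry easy to check by hand: an object at x=3 between modules at 0 and 10 yields arrival times 3 and 7.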
8. The speech system based on object recognition according to claim 7,
the method for predicting the moving direction of the expelled object in the step A7 is as follows:

P = γ·E + θ·F

wherein P is the predicted direction vector, E is the direction vector of the expelled object away from its natural enemy, γ is the bearing coefficient of the expelled object's fear of the natural enemy, F is the direction vector of the expelled object toward food, and θ is the bearing coefficient of the expelled object's temptation by food.
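As an illustration of claim 8's weighted sum of the fear-weighted direction away from the natural enemy and the temptation-weighted direction toward food, a minimal sketch (assuming two-dimensional vectors and renormalizing the result to unit length; the normalization is an assumption, the claim only states the weighted combination):

```python
import math

def predicted_direction(enemy_away, food_toward, gamma, theta):
    """Combine the direction away from the natural enemy (weighted by
    the fear coefficient gamma) with the direction toward food
    (weighted by the temptation coefficient theta); vectors are (x, y)
    tuples. Returns a unit vector, or (0, 0) if the terms cancel.
    """
    px = gamma * enemy_away[0] + theta * food_toward[0]
    py = gamma * enemy_away[1] + theta * food_toward[1]
    norm = math.hypot(px, py)
    if norm == 0:
        return (0.0, 0.0)
    return (px / norm, py / norm)
```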
9. The speech system based on object recognition according to claim 1,
the voice interaction module comprises a display screen and a loudspeaker, and the display screen and the loudspeaker are connected with the processing module.
CN202110437514.3A 2021-04-22 2021-04-22 Voice system based on object recognition Active CN113386147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110437514.3A CN113386147B (en) 2021-04-22 2021-04-22 Voice system based on object recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110437514.3A CN113386147B (en) 2021-04-22 2021-04-22 Voice system based on object recognition

Publications (2)

Publication Number Publication Date
CN113386147A true CN113386147A (en) 2021-09-14
CN113386147B CN113386147B (en) 2022-01-14

Family

ID=77616703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110437514.3A Active CN113386147B (en) 2021-04-22 2021-04-22 Voice system based on object recognition

Country Status (1)

Country Link
CN (1) CN113386147B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202819427U (en) * 2012-10-29 2013-03-27 嘉兴电力局 Intelligent bird-repelling system for a transformer substation inspection robot
KR20150041452A (en) * 2013-10-08 2015-04-16 김병준 Wild animal extermination robot and driving method thereof
CN106297790A (en) * 2016-08-22 2017-01-04 深圳市锐曼智能装备有限公司 The voiceprint service system of robot and service control method thereof
CN205865763U (en) * 2016-06-24 2017-01-11 国家电网公司 Interactive intelligent bird-repelling system based on image detection
JP3219134U (en) * 2018-09-20 2018-11-29 祐介 大平 Animal harm recognition voice recognition and identification device
CN209056279U (en) * 2018-11-01 2019-07-02 四川长虹电子系统有限公司 Video monitoring system based on Application on Voiceprint Recognition
CN209931318U (en) * 2019-04-17 2020-01-14 深圳市安特保电子商务集团有限公司 Driving device
US20200118570A1 (en) * 2017-02-24 2020-04-16 Sony Mobile Communications Inc. Information processing apparatus, information processing method, and computer program
CN210869604U (en) * 2019-10-23 2020-06-30 四川轻化工大学 Intelligent patrol and inspection bird-repelling robot for airports
CN112152129A (en) * 2020-09-25 2020-12-29 国网浙江省电力有限公司湖州供电公司 Intelligent safety management and control method and system for transformer substation
CN212241016U (en) * 2020-03-25 2020-12-29 国网河南省电力公司焦作供电公司 Intelligent supervision and inspection robot for transformer substation
CN112447170A (en) * 2019-08-29 2021-03-05 北京声智科技有限公司 Security method and device based on sound information and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Suyun: "Machine Automation: Research on Industrial Robots and Their Key Technologies", 31 May 2018, China Atomic Energy Press *

Also Published As

Publication number Publication date
CN113386147B (en) 2022-01-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant