CN110632600A - Environment identification method and device - Google Patents
- Publication number
- CN110632600A CN110632600A CN201910918357.0A CN201910918357A CN110632600A CN 110632600 A CN110632600 A CN 110632600A CN 201910918357 A CN201910918357 A CN 201910918357A CN 110632600 A CN110632600 A CN 110632600A
- Authority
- CN
- China
- Prior art keywords
- sound waves
- reflected
- sound wave
- reflected sound
- interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/04—Analysing solids
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/02—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
- G01S15/06—Systems determining the position data of a target
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/02—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
- G01S15/06—Systems determining the position data of a target
- G01S15/08—Systems for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/539—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/02—Indexing codes associated with the analysed material
- G01N2291/028—Material parameters
- G01N2291/0289—Internal structure, e.g. defects, grain size, texture
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/10—Number of transducers
- G01N2291/103—Number of transducers one emitter, two or more receivers
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Computer Networks & Wireless Communication (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Signal Processing (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
Abstract
The disclosure relates to an environment identification method and device for an intelligent sound box (smart speaker). The method comprises the following steps: transmitting a detection sound wave in at least one direction; receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction; and analyzing the reflected sound waves to identify surrounding obstructions. The technical scheme provided by the disclosure is suitable for intelligent devices and realizes automatic identification of the obstructions around the device.
Description
Technical Field
The present disclosure relates to the field of intelligent terminals, and in particular, to an environment recognition method and apparatus for an intelligent speaker.
Background
The intelligent sound box (smart speaker) interacts with the user through voice instructions. It is generally fitted with several built-in microphones: the sound source position is calculated from the change in sound energy received omnidirectionally, and the received sound is then directionally amplified to obtain a clearer signal for recognition.
In a home environment, however, the speaker is likely to be placed against a wall, a bookshelf, or some other item. Taking the scenario where the smart speaker is very close to a wall as an example: when the speaker sits flush against the wall, or even has obstacles on three sides, the echoes caused by obstacles such as the wall affect its judgment of sound source position and energy, which in turn causes errors in the speaker's wake-up and instruction recognition.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an environment recognition method and apparatus.
According to a first aspect of the embodiments of the present disclosure, there is provided an environment recognition method, including:
transmitting a detection sound wave in at least one direction;
receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
Preferably, the step of transmitting a detection sound wave in at least one direction includes:
transmitting the detection sound wave in any one or more of the directions in front of, behind, to the left of, to the right of, and above the target position.
Preferably, the step of receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction includes:
receiving, through a plurality of microphones, the reflected sound waves returned from the different directions.
Preferably, the step of analyzing the reflected sound waves to identify surrounding obstructions includes:
determining the material of the reflection interface from the energy of the reflected sound wave;
calculating the position of the reflection interface;
and judging, according to a preset judgment condition combining the material and the position of the reflection interface, whether the reflection interface is an obstruction.
Preferably, after the step of analyzing the reflected sound waves to identify surrounding obstructions, the method further includes:
masking the sound reception result in the direction of the obstruction during subsequent voice instruction collection.
In a second aspect of the embodiments of the present disclosure, there is provided an environment recognition apparatus including:
a detection sound wave sending module, used for transmitting a detection sound wave in at least one direction;
a reflected sound wave receiving module, used for receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and an obstruction identification module, used for analyzing the reflected sound waves and identifying surrounding obstructions.
Preferably, the detection sound wave sending module is specifically configured to transmit the detection sound wave in any one or more of the directions in front of, behind, to the left of, to the right of, and above the target position.
Preferably, the obstruction identification module includes:
a material identification submodule, used for determining the material of the reflection interface from the energy of the reflected sound wave;
a position calculation submodule, used for calculating the position of the reflection interface;
and an obstruction judgment submodule, used for judging, according to a preset judgment condition combining the material and the position of the reflection interface, whether the reflection interface is an obstruction.
According to a third aspect of embodiments of the present disclosure, there is provided a computer apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
transmitting a detection sound wave in at least one direction;
receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions stored therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform an environment recognition method, the method comprising:
transmitting a detection sound wave in at least one direction;
receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: the terminal transmits a detection sound wave in at least one direction, receives the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction, and then analyzes the reflected sound waves to identify surrounding obstructions. Automatic identification of the obstructions around the device is thus realized, solving the problem of the smart speaker's voice recognition being disturbed by obstructions. Using acoustic distance sensing, the distance and direction between surrounding objects and the speaker are calculated, the device adapts to its environment, and the algorithm is adjusted to achieve the best wake-up and recognition performance.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method of environment identification, according to an example embodiment.
FIG. 2 is a flowchart of an exemplary implementation of step 103 in FIG. 1.
FIG. 3 is a block diagram illustrating an environment recognition device according to an example embodiment.
FIG. 4 is a block diagram of an exemplary structure of the obstruction identification module 303 of FIG. 3.
Fig. 5 is a block diagram illustrating an apparatus (a general structure of a mobile terminal) according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an apparatus (general structure of a server) according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In a home environment, the speaker is likely to be placed against a wall, a bookshelf, or some other item. Taking the scenario where the smart speaker is very close to a wall as an example: when the speaker sits flush against the wall, or even has obstacles on three sides, the echoes caused by obstacles such as the wall affect its judgment of sound source position and energy, which in turn causes errors in the speaker's wake-up and instruction recognition.
In order to solve the above problems, embodiments of the present invention provide an environment recognition method and apparatus that recognize obstructions which may affect a smart speaker, accurately determine the environment the speaker is in, and solve the problem of voice recognition errors caused by obstructions, thereby enabling accurate voice recognition.
An embodiment of the present invention provides an environment recognition method that can be applied to a terminal using voice-command interaction, such as a smart speaker. A flow using the method is shown in fig. 1 and includes the following steps.
Step 101: a detection sound wave is transmitted in at least one direction.
In the embodiment of the invention, a dedicated sounder for transmitting the detection sound wave can be added to the terminal, or the terminal's existing sounder can be used; it can also be an external sound-emitting device.
In this step, the detection sound wave is transmitted in any one or more of the directions in front of, behind, to the left of, to the right of, and above the target position.
The detection sound wave uses a specific frequency or frequency band. Preferably it is a frequency imperceptible to humans, for example at the upper end of the 0-24 kHz range, above the roughly 20 kHz limit of human hearing.
The operation of transmitting the detection sound wave can be triggered when the smart speaker is powered on, or triggered periodically.
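As one illustration, the detection burst could be synthesized as below. The 21 kHz tone, 50 ms duration, and 48 kHz sample rate are hypothetical choices, not values from the patent, which only requires a specific frequency or band.

```python
import math

def make_probe_burst(freq_hz=21_000, duration_s=0.05, sample_rate=48_000):
    """Synthesize one burst of the detection sound wave.

    freq_hz, duration_s and sample_rate are illustrative assumptions.
    A Hann window fades the burst in and out so it produces no audible
    click at the loudspeaker edges.
    """
    n = int(duration_s * sample_rate)
    burst = []
    for i in range(n):
        hann = 0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1))
        burst.append(hann * math.sin(2 * math.pi * freq_hz * i / sample_rate))
    return burst
```

The resulting sample list would be handed to the audio output path of the sounder.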
Step 102: the reflected sound waves returned after the detection sound waves reach the reflection interface in each direction are received.
In this step, the reflected sound waves returned from the different directions can be received through a plurality of microphones. These can be the speaker's existing microphones for receiving voice instructions, or dedicated microphones added for obstruction detection.
At least one microphone is arranged on each of the top, left, right, front, and rear faces of the speaker. Each microphone has a specific sound reception angle, and the sound source or reflection source can be located by combining the reception results of the several microphones.
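A minimal sketch of combining the reception results of several microphones, assuming one channel per face of the speaker; the direction labels and the energy-based comparison are illustrative assumptions, not the patent's specified algorithm.

```python
def channel_energy(samples):
    """Mean squared amplitude of one microphone channel."""
    return sum(s * s for s in samples) / len(samples)

def strongest_reflection(channels):
    """Pick the direction whose microphone hears the strongest echo.

    channels maps a direction label ('front', 'left', ...) to the list
    of samples captured by the microphone facing that way.
    """
    return max(channels, key=lambda d: channel_energy(channels[d]))
```

A real front end would also use inter-microphone timing, but a per-face energy comparison already gives a coarse bearing on the reflection source.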
Step 103: the reflected sound waves are analyzed and the surrounding obstructions are identified.
As shown in fig. 2, this step includes the following sub-steps.
Step 1031: the material of the reflection interface is determined from the energy of the reflected sound wave.
In this sub-step, the energy of the reflected sound wave is calculated from characteristics of the signal received by the microphone, such as its wavelength, and the material of the reflection interface is determined accordingly.
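The patent does not give the mapping from echo energy to material, so the classifier below is purely illustrative: the 0.6 and 0.2 cut-offs and the material labels are invented placeholders, resting only on the general fact that hard, dense surfaces reflect more of the incident energy than soft ones.

```python
def classify_material(emitted_energy, reflected_energy):
    """Rough material class from the fraction of energy reflected.

    The thresholds and labels are assumptions for illustration only.
    """
    ratio = reflected_energy / emitted_energy
    if ratio > 0.6:
        return "hard"    # e.g. wall, glass, bookshelf
    if ratio > 0.2:
        return "medium"  # e.g. wood panelling
    return "soft"        # e.g. curtain, sofa: weak echo
```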
Step 1032: the position of the reflection interface is calculated.
It should be noted that steps 1031 and 1032 have no strict ordering: once the reflected sound wave has been received and the data required for the calculations has been extracted, the material and the position of the reflection interface can be determined either sequentially or in parallel, depending on the available processing power.
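The position calculation can be sketched as a standard round-trip time-of-flight estimate. Time of flight is a common echo-ranging technique and an assumption here, since the patent does not spell out the formula.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at about 20 degrees C

def interface_distance(round_trip_delay_s, speed=SPEED_OF_SOUND_M_S):
    """One-way distance to the reflection interface.

    The detection wave travels out and back, so the distance is half
    the product of the measured delay and the propagation speed.
    """
    return speed * round_trip_delay_s / 2.0
```

Combined with the known facing of the microphone that received the echo, this yields both distance and direction of the interface.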
Step 1033: whether the reflection interface is an obstruction is judged according to a preset judgment condition combining the material and the position of the reflection interface.
In the embodiment of the present invention, a judgment condition can be preset. For example, when a reflection interface of a certain material (or of a certain density) lies within the shielding range (say, less than 20 cm from the smart speaker), an obstruction is judged to exist in that direction.
After a reflected sound wave is detected, the material and position calculated from it are compared against the judgment condition. When the reflection interface characteristics calculated from the sound wave received in a given direction satisfy the condition, an obstruction is judged to exist in that direction.
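Putting the example judgment condition into code: the 20 cm shielding range follows the description above, while the material labels are the hypothetical ones used earlier in these sketches.

```python
def is_obstruction(material, distance_m,
                   shield_range_m=0.20,
                   blocking_materials=("hard", "medium")):
    """Preset judgment condition: reflective material within range.

    shield_range_m follows the 20 cm example in the description;
    material labels and blocking_materials are assumptions.
    """
    return material in blocking_materials and distance_m < shield_range_m
```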
Step 104: the sound reception result in the direction of the obstruction is masked during subsequent voice instruction collection.
In this step, once an obstruction has been determined to exist in a certain direction, the algorithm of the front-end microphones can be adjusted (for example, by masking the reception result from that direction) to reduce the influence of the obstruction's echo on the speaker's voice recognition.
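Masking the reception result of an obstructed direction could be as simple as zeroing that channel before localization. This is a sketch; a real front end would more likely down-weight the channel inside its beamforming algorithm.

```python
def mask_obstructed(channels, obstructed):
    """Silence the channels facing known obstructions.

    channels: direction label -> list of samples from that microphone.
    obstructed: set of direction labels judged to be blocked.
    """
    return {d: ([0.0] * len(s) if d in obstructed else s)
            for d, s in channels.items()}
```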
An embodiment of the present invention further provides an environment recognition apparatus, the apparatus having a structure as shown in fig. 3, including:
a detection sound wave sending module 301, used for transmitting a detection sound wave in at least one direction;
a reflected sound wave receiving module 302, used for receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and an obstruction identification module 303, used for analyzing the reflected sound waves and identifying the surrounding obstructions.
Preferably, the detection sound wave sending module 301 is specifically configured to transmit the detection sound wave in any one or more of the directions in front of, behind, to the left of, to the right of, and above the target position.
Preferably, the structure of the obstruction identification module 303 is as shown in fig. 4 and includes:
a material identification submodule 3031, used for determining the material of the reflection interface from the energy of the reflected sound wave;
a position calculation submodule 3032, used for calculating the position of the reflection interface;
and an obstruction judgment submodule 3033, used for judging, according to a preset judgment condition combining the material and the position of the reflection interface, whether the reflection interface is an obstruction.
Preferably, the apparatus further comprises:
a voice recognition module 304, used for masking the sound reception result in the direction of the obstruction during subsequent voice instruction collection.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here. The environment recognition device shown in fig. 3 and 4 can be integrated into a terminal interacting through voice commands, and the terminal can implement corresponding functions.
An embodiment of the present invention further provides a computer apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
transmitting a detection sound wave in at least one direction;
receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
Fig. 5 is a block diagram illustrating an apparatus 500 for environment identification, according to an example embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, audio component 510 includes a Microphone (MIC) configured to receive external audio signals when apparatus 500 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect the open/closed state of the device 500 and the relative positioning of components such as its display and keypad; it may also detect a change in the position of the apparatus 500 or of one of its components, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and changes in its temperature. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the apparatus 500 and other devices in a wired or wireless manner. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the apparatus 500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium has instructions stored therein which, when executed by a processor of a mobile terminal, enable the mobile terminal to perform an environment recognition method, the method comprising:
transmitting a detection sound wave in at least one direction;
receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
FIG. 6 is a block diagram illustrating an apparatus 600 for an environment, according to an example embodiment. For example, the apparatus 600 may be provided as a server. Referring to fig. 6, the apparatus 600 includes a processing component 622 that further includes one or more processors and memory resources, represented by memory 632, for storing instructions, such as applications, that are executable by the processing component 622. The application programs stored in memory 632 may include one or more modules that each correspond to a set of instructions. Further, the processing component 622 is configured to execute instructions to perform the above-described methods.
The apparatus 600 may also include a power component 626 configured to perform power management of the apparatus 600, a wired or wireless network interface 650 configured to connect the apparatus 600 to a network, and an input/output (I/O) interface 658. The apparatus 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The embodiments of the invention provide an environment identification method and device in which a terminal transmits a detection sound wave in at least one direction, receives the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction, and analyzes the reflected sound waves to identify surrounding obstructions. Automatic identification of the obstructions around the device is realized, solving the problem of the smart speaker's voice recognition being disturbed by obstructions.
Using acoustic distance sensing, the distance and direction between surrounding objects and the speaker are calculated, the device adapts to its environment, and the algorithm is adjusted to achieve the best wake-up and recognition performance.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (10)
1. An environment recognition method, comprising:
transmitting a detection sound wave in at least one direction;
receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
2. The environment recognition method of claim 1, wherein the step of transmitting a detection sound wave in at least one direction comprises:
transmitting the detection sound wave in any one or more of the directions in front of, behind, to the left of, to the right of, and above the target position.
3. The environment recognition method of claim 1, wherein the step of receiving the reflected sound waves returned after the detection sound waves reach a reflection interface in each direction comprises:
receiving, through a plurality of microphones, the reflected sound waves returned from the different directions.
4. The environment recognition method of claim 1, wherein analyzing the reflected sound waves to identify surrounding obstructions comprises:
determining the material of the reflecting interface according to the energy of the reflected sound waves;
calculating the position of the reflecting interface;
and determining, according to a preset judgment condition, whether the reflecting interface is an obstruction, based on both the material and the position of the reflecting interface.
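The analysis steps in claim 4 can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the energy thresholds, material categories, and the "close and non-soft means obstruction" preset condition are all assumed values chosen for the example.

```python
def classify_material(emitted_energy: float, reflected_energy: float) -> str:
    """Guess the interface material from the fraction of energy the echo retains.

    Hard surfaces (walls, glass) reflect more acoustic energy than soft,
    absorbent ones (fabric, foam). Thresholds here are illustrative only.
    """
    coeff = reflected_energy / emitted_energy
    if coeff > 0.7:
        return "hard"      # e.g. wall, glass
    if coeff > 0.3:
        return "medium"    # e.g. wood, plastic
    return "soft"          # e.g. fabric, foam

def is_obstruction(emitted_energy: float, reflected_energy: float,
                   distance_m: float, max_distance_m: float = 0.5) -> bool:
    """Hypothetical preset judgment condition: a non-soft reflecting
    interface close to the device is treated as an obstruction."""
    material = classify_material(emitted_energy, reflected_energy)
    return material != "soft" and distance_m < max_distance_m
```

For example, a strongly reflecting interface 0.2 m away would be flagged as an obstruction, while the same interface 1 m away would not.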
5. The environment recognition method of claim 1, further comprising, after the step of analyzing the reflected sound waves to identify surrounding obstructions:
masking the sound pickup result in the direction of the obstruction during subsequent voice command acquisition.
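The masking step of claim 5 can be sketched as a simple per-direction filter over a multi-microphone array. This is an assumed illustration (the direction labels, data layout, and plain averaging are not from the patent, which does not specify a combining method):

```python
def mask_pickup(mic_frames, mic_directions, blocked):
    """Combine only the microphones whose direction is not blocked.

    mic_frames:     list of per-microphone sample lists (equal length)
    mic_directions: direction label for each microphone
    blocked:        set of directions identified as facing an obstruction
    """
    kept = [frame for frame, d in zip(mic_frames, mic_directions)
            if d not in blocked]
    if not kept:
        return []
    # Average the remaining microphones sample by sample.
    return [sum(samples) / len(kept) for samples in zip(*kept)]

mixed = mask_pickup(
    [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]],
    ["front", "left", "right"],
    blocked={"right"},   # e.g. a wall detected to the right
)
# mixed averages only the front and left microphones
```

In practice a beamformer would weight rather than drop channels, but the claim only requires that reflections from the obstruction's direction not corrupt the pickup result.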
6. An environment recognition apparatus, comprising:
a detection sound wave transmitting module configured to transmit detection sound waves in at least one direction;
a reflected sound wave receiving module configured to receive the reflected sound waves reflected back after the detection sound waves reach a reflecting interface in each direction;
and an obstruction identification module configured to analyze the reflected sound waves and identify surrounding obstructions.
7. The environment recognition apparatus of claim 6,
wherein the detection sound wave transmitting module is specifically configured to transmit the detection sound wave in any one or more of the directions in front of, behind, to the left of, to the right of, and above a target position.
8. The environment recognition apparatus of claim 6, wherein the obstruction identification module comprises:
a material identification submodule configured to determine the material of the reflecting interface according to the energy of the reflected sound waves;
a position calculation submodule configured to calculate the position of the reflecting interface;
and an obstruction determination submodule configured to determine, according to a preset judgment condition, whether the reflecting interface is an obstruction, based on both the material and the position of the reflecting interface.
9. A computer device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
transmitting a detection sound wave in at least one direction;
receiving reflected sound waves reflected back after the detection sound waves reach a reflecting interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
10. A non-transitory computer-readable storage medium having stored therein instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform an environment identification method, the method comprising:
transmitting a detection sound wave in at least one direction;
receiving reflected sound waves reflected back after the detection sound waves reach a reflecting interface in each direction;
and analyzing the reflected sound waves to identify surrounding obstructions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910918357.0A CN110632600B (en) | 2019-09-26 | 2019-09-26 | Environment identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110632600A true CN110632600A (en) | 2019-12-31 |
CN110632600B CN110632600B (en) | 2021-11-23 |
Family
ID=68973156
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910918357.0A Active CN110632600B (en) | 2019-09-26 | 2019-09-26 | Environment identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110632600B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150078134A1 (en) * | 2013-09-13 | 2015-03-19 | Hsu-Yung YU | Sonar type object detection system and its implementing method |
CN105190351A (en) * | 2013-03-13 | 2015-12-23 | Intel Corporation | Sonic-assisted localization of wireless devices |
CN105683777A (en) * | 2013-11-14 | 2016-06-15 | Volkswagen AG | Motor vehicle with occlusion detection for ultrasonic sensors |
CN106775572A (en) * | 2017-03-30 | 2017-05-31 | Lenovo (Beijing) Co., Ltd. | Electronic device with microphone array and control method thereof |
CN107533033A (en) * | 2015-05-07 | 2018-01-02 | SZ DJI Technology Co., Ltd. | System and method for detecting objects |
CN110068851A (en) * | 2019-03-27 | 2019-07-30 | Zhendao Information Technology (Shanghai) Co., Ltd. | Method and system for securely obtaining the location of a mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN110632600B (en) | 2021-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10498873B2 (en) | Screen control method, apparatus, and non-transitory tangible computer readable storage medium | |
US10205817B2 (en) | Method, device and storage medium for controlling screen state | |
US10027785B2 (en) | Method for switching screen state of terminal, terminal thereof, and computer-readable medium thereof | |
US10798483B2 (en) | Audio signal processing method and device, electronic equipment and storage medium | |
US11337173B2 (en) | Method and device for selecting from a plurality of beams | |
CN111314597A (en) | Terminal, focusing method and device | |
US11178501B2 (en) | Methods, devices, and computer-readable medium for microphone selection | |
CN111007462A (en) | Positioning method, positioning device, positioning equipment and electronic equipment | |
CN108989494A (en) | An electronic device | |
CN107392160B (en) | Optical fingerprint identification method and device and computer readable storage medium | |
CN113138557B (en) | Household equipment control method and device and storage medium | |
CN111009239A (en) | Echo cancellation method, echo cancellation device and electronic equipment | |
CN110632600B (en) | Environment identification method and device | |
CN107682101B (en) | Noise detection method and device and electronic equipment | |
CN112702514B (en) | Image acquisition method, device, equipment and storage medium | |
CN115407272A (en) | Ultrasonic signal positioning method and device, terminal and computer readable storage medium | |
CN111246009B (en) | Sliding cover type terminal, distance detection method and device and storage medium | |
CN107589861B (en) | Method and device for communication | |
CN112752191A (en) | Audio acquisition method, device and storage medium | |
CN107068031B (en) | Method for controlling screen lightening of intelligent terminal and intelligent terminal | |
CN113391713A (en) | Electronic device, control method for electronic device, and storage medium | |
CN111124175A (en) | Terminal, display processing method, device and storage medium | |
CN113138384B (en) | Image acquisition method and device and storage medium | |
CN113450521B (en) | Method and device for monitoring intruder, electronic equipment and storage medium | |
CN112019677B (en) | Electronic equipment control method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||