CN112868061A - Environment detection method, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
CN112868061A
CN112868061A (application CN201980059247.1A)
Authority
CN
China
Prior art keywords
sound
environment
sound field
target detection
frequency domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980059247.1A
Other languages
Chinese (zh)
Inventor
薛政
吴晟
赵文泉
边云锋
莫品西
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN112868061A publication Critical patent/CN112868061A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H17/00 - Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 - Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 - Sonar systems specially adapted for specific applications
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use, for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

An environment detection method, an electronic device and a computer-readable storage medium, wherein the method comprises: sending out an environment detection sound, and collecting the environment detection sound to obtain a target detection sound (S101); performing feature extraction on the target detection sound to obtain sound field features (S102); the current environment where the electronic device is located is determined according to the sound field characteristics (S103). The method improves the detection accuracy of the environment where the electronic equipment is located.

Description

Environment detection method, electronic device and computer-readable storage medium
Technical Field
The present application relates to the field of electronic devices, and in particular, to an environment detection method, an electronic device, and a computer-readable storage medium.
Background
With the wide application of electronic devices, users place ever higher requirements on their performance, especially in harsh scenarios, of which the underwater scenario is a typical case. In an underwater scenario, a user generally has both safety and performance requirements: on one hand, the user wants the electronic device to detect that it is in water and to provide safety prompts and protection; on the other hand, the user wants the device to switch its relevant configuration automatically under water and to work normally there, for example taking pictures or recording video.
At present, existing devices mainly detect the underwater scenario by exploiting its physical regularities, for example by detecting whether Bluetooth or other signals can be transmitted normally, or by emitting a sound through the loudspeaker of the electronic device and receiving it with a microphone so as to obtain the sound path difference and time difference, on which the underwater detection is then based. However, signals such as Bluetooth can still be transmitted normally in shallow water, so the detection result may deviate. Moreover, the propagation path from the loudspeaker to the microphone is not only the air conduction path or the water conduction path outside the housing: the sound waves of the loudspeaker often also reach the microphone through the inside of the device, which affects the analysis of the sound path difference and time difference, interferes with the underwater scenario detection, and makes the detection result unreliable. Therefore, how to improve the detection accuracy of the environment where the electronic device is located is an urgent problem to be solved at present.
Disclosure of Invention
Based on this, the application provides an environment detection method, an electronic device and a computer-readable storage medium, aiming at improving the detection accuracy of the environment where the electronic device is located and improving the user experience.
In a first aspect, the present application provides an environment detection method, including:
sending an environment detection sound through a sound generator of electronic equipment, and collecting the environment detection sound through a sound receiver of the electronic equipment to obtain a collected target detection sound;
performing feature extraction on the target detection sound to acquire sound field features of the target detection sound;
determining the current environment of the electronic equipment according to the sound field characteristics, wherein the current environment comprises an above-water environment and an underwater environment.
In a second aspect, the present application further provides an electronic device comprising a sound generator, a sound receiver, and a processor;
the sounder is used for emitting environment detection sound;
the sound receiver is used for collecting the environment detection sound to obtain a collected target detection sound;
the processor is configured to implement the steps of:
acquiring the target detection sound collected by the sound receiver;
performing feature extraction on the target detection sound to acquire sound field features of the target detection sound;
determining the current environment of the electronic equipment according to the sound field characteristics, wherein the current environment comprises an above-water environment and an underwater environment.
In a third aspect, the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the environment detection method as described above.
The embodiments of the present application provide an environment detection method, an electronic device and a computer-readable storage medium. The target detection sound is obtained by collecting the environment detection sound, feature extraction is performed on the target detection sound to obtain its sound field features, and the current environment where the electronic device is located is determined based on the sound field features. Because the whole detection process involves neither the transmission of Bluetooth or other signals nor the sound path difference and time difference of sound, the current environment where the electronic device is located can be detected accurately, which effectively improves the detection accuracy of the environment where the electronic device is located and greatly improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart illustrating steps of a method for environmental detection according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an acoustic propagation path between a sound generator and a sound receiver in an embodiment of the present application;
FIG. 3 is a schematic flow diagram of sub-steps of the environment detection method of FIG. 1;
FIG. 4 is a flow chart illustrating steps of another method for detecting an environment according to an embodiment of the present application;
fig. 5 is a block diagram schematically illustrating a structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of the steps of an environment detection method according to an embodiment of the present application. The environment detection method can be applied to an electronic device and is used for detecting the environment where the electronic device is located. The electronic device includes a camera, a mobile phone, a tablet computer, a handheld gimbal and the like.
Specifically, as shown in fig. 1, the environment detection method includes steps S101 to S103.
S101, emitting an environment detection sound through a sound generator of the electronic equipment, and collecting the environment detection sound through a sound receiver of the electronic equipment to obtain a collected target detection sound.
The sound generator of the electronic device includes but is not limited to a loudspeaker and a buzzer; the sound receiver includes but is not limited to a single microphone and a dual microphone; the environment detection sound includes but is not limited to motor excitation sound, buzzing sound, prompt sound, voice and music, where the motor excitation sound is the sound generated when a motor runs and the buzzing sound is the sound generated when a buzzer runs. The environment detection sound emitted by the sound generator is received by the sound receiver through the internal propagation path and the external propagation path simultaneously, with the internal propagation path accounting for the major portion. Referring to fig. 2, fig. 2 is a schematic diagram of the acoustic propagation path between the sound generator and the sound receiver in an embodiment of the present application. As shown in fig. 2, the sound generator 10 propagates the emitted environment detection sound to the sound receiver 20 through internal propagation path 1 and, at the same time, through external propagation path 2.
After the electronic equipment starts the environment detection function, the electronic equipment sends out environment detection sound at intervals of preset time or in real time through the sounder, and the environment detection sound is collected through a sound receiver of the electronic equipment to obtain target detection sound. It should be noted that the preset time may be set based on actual situations, and the present application is not limited to this.
In an embodiment, the sound receiver of the electronic device is turned on while the sounder of the electronic device emits the environmental detection sound, so that the sound receiver is controlled to collect the environmental detection sound emitted by the sounder to obtain the target detection sound, and the sound receiver is turned off while the sounder stops emitting the environmental detection sound, thereby ensuring that the obtained target detection sound is strictly synchronized with the emitted environmental detection sound in time, and facilitating subsequent accurate extraction of acoustic features from the target detection sound.
In an embodiment, when the sound receiver is always in a working state (for example, recording or uninterrupted listening), the generation time of the environment detection sound is recorded when the sound generator starts emitting it, and the stop time is recorded when the sound generator stops emitting it. The corresponding sound segment is then extracted from the sound data received by the sound receiver according to the generation time and the stop time, and the sound segment is filtered to obtain the target detection sound. This ensures that the obtained target detection sound is strictly synchronized in time with the emitted environment detection sound, which facilitates subsequent accurate extraction of acoustic features from the target detection sound.
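The timestamp-based segment extraction described above can be sketched as follows. This is a minimal illustration only: the function name, the representation of the recording as a sample array, and the moving-average smoothing standing in for the unspecified filtering step are all assumptions, not details given in this application.

```python
import numpy as np

def extract_target_detection_sound(recording, fs, start_time, stop_time):
    """Slice the sound receiver's continuous recording down to the
    interval in which the sound generator was emitting the environment
    detection sound, then apply a simple filter.

    recording:  1-D array of received samples
    fs:         sampling frequency in Hz
    start_time: generation time of the detection sound, in seconds
    stop_time:  stop time of the detection sound, in seconds
    """
    start = int(round(start_time * fs))
    stop = int(round(stop_time * fs))
    segment = recording[start:stop]
    # Moving-average smoothing as a stand-in for the filtering step;
    # a real device would use a filter matched to the detection
    # sound's frequency content.
    kernel = np.ones(5) / 5.0
    return np.convolve(segment, kernel, mode="same")
```

In practice the timestamps would come from the same clock that drives the sound generator, so the extracted segment stays aligned with the emitted sound.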
S102, extracting the characteristics of the target detection sound to obtain the sound field characteristics of the target detection sound.
After the target detection sound is obtained, feature extraction is performed on it to obtain the sound field features of the target detection sound. In an underwater rigid-wall environment, the sound waves emitted by the sound source are continuously reflected and superposed at the wall surface, so the whole sound field is enhanced; in an above-water non-rigid-wall environment, the sound waves emitted by the sound source are attenuated at, or even leak through, the non-rigid wall, and this attenuation is frequency dependent, so the sound field of the above-water environment is weaker than that of the underwater rigid-wall environment. The sound field features include at least one of a time domain amplitude and a frequency domain amplitude.
In one embodiment, the target detection sound is corrected so that the corrected target detection sound is time-synchronized with the environment detection sound, and feature extraction is then performed on the corrected target detection sound to obtain its sound field features. Ensuring that the target detection sound is strictly synchronized in time with the emitted environment detection sound facilitates subsequent accurate extraction of acoustic features from it, which in turn improves the detection accuracy of the environment where the electronic device is located. When the on and off times of the sound generator and the sound receiver are strictly synchronized during collection, the target detection sound need not be corrected, although it still may be; the present application does not specifically limit this.
In one embodiment, as shown in fig. 3, step S102 includes steps S1021 to S1023.
And S1021, sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound.
After the target detection sound is acquired, it is sampled to obtain the multi-frame time domain signal of the target detection sound. Each frame of the time domain signal can be represented as:

x_l(n) = [x(Ml-N+1), …, x(Ml-1), x(Ml)], l = 0, 1, …, L,

where N is the frame length, M is the frame shift, l is the frame index, and L is the total number of frames of the target detection sound. The value range of N is 0.001·f_s < N < f_s, and the value range of M is 0.01·N < M < 100·N, where f_s is the sampling frequency used for the target detection sound; it may be set based on actual conditions, and the present application does not specifically limit this.
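The framing above can be sketched as follows, a minimal illustration under the stated definitions (the choice to skip frames that would begin before the first sample is a boundary assumption the text does not specify):

```python
import numpy as np

def frame_signal(x, frame_len, frame_shift):
    """Split the sampled target detection sound x into frames.

    Frame l covers samples [l*frame_shift - frame_len + 1, ..., l*frame_shift],
    mirroring x_l(n) = [x(Ml-N+1), ..., x(Ml)] with N = frame_len and
    M = frame_shift.  Frames that would start before sample 0 are skipped.
    """
    frames = []
    l = 0
    while l * frame_shift <= len(x) - 1:
        end = l * frame_shift          # index of x(Ml), inclusive
        start = end - frame_len + 1    # index of x(Ml-N+1)
        if start >= 0:
            frames.append(x[start:end + 1])
        l += 1
    return np.array(frames)
```

With N = M the frames tile the signal without overlap; N > M gives overlapping frames, as the constraint 0.01·N < M < 100·N permits either.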
And S1022, performing frequency domain transformation on the multi-frame time domain signal to acquire a multi-frame frequency domain signal.
And performing frequency domain transformation on each frame of time domain signal in the multi-frame time domain signals to obtain a frequency domain signal of each frame of time domain signal, thereby obtaining the multi-frame frequency domain signal. The frequency domain transformation includes, but is not limited to, fourier transforming the time domain signal and wavelet transforming the time domain signal.
In one embodiment, windowing is performed on a multi-frame time domain signal according to a preset window function; and carrying out frequency domain transformation on the multi-frame time domain signal subjected to windowing processing to obtain a multi-frame frequency domain signal. Wherein the preset window function includes at least one of: rectangular windows, sinusoidal windows, hanning windows, hamming windows, and gaussian windows. By windowing the time domain signal, spectral energy leakage may be reduced.
For example, the windowed frame of the time domain signal can be represented as x'_l(n) = x_l(n)·w(n), where w is a window function of N points. Performing a frequency domain transformation on x'_l(n) yields the corresponding frequency domain signal X_l(n).
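Steps S1022 can be sketched as below: window each frame with a preset window function and apply a Fourier transform (the Fourier option named in the text; the wavelet alternative is omitted here). The function name and the restriction to three window types are illustrative assumptions.

```python
import numpy as np

def frames_to_spectra(frames, window="hann"):
    """Apply a window w to each time-domain frame x_l(n), forming
    x'_l(n) = x_l(n) * w(n), then take the DFT of each windowed frame
    to obtain the frequency domain signals X_l(n).  Windowing before
    the transform reduces spectral energy leakage."""
    n = frames.shape[1]
    if window == "hann":
        w = np.hanning(n)
    elif window == "hamming":
        w = np.hamming(n)
    else:
        w = np.ones(n)  # rectangular window: no tapering
    # rfft keeps only the non-negative-frequency bins of a real signal
    return np.fft.rfft(frames * w, axis=1)
```

For a real-valued frame of length N, `rfft` returns N//2 + 1 frequency bins covering 0 to f_s/2.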
And S1023, determining the sound field characteristics of the target detection sound according to the multi-frame frequency domain signals.
Specifically, a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal are determined according to each frame frequency domain signal in the multi-frame frequency domain signals; fusing a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal in the multi-frame frequency domain signals to obtain a plurality of fused characteristic frequency domain amplitudes; and determining the plurality of fused characteristic frequency domain amplitude values as the sound field characteristic of the target detection sound. And analyzing the frequency domain signal to obtain a plurality of characteristic frequency domain amplitudes corresponding to the frequency domain signal.
The plurality of characteristic frequency domain amplitudes of the frequency domain signal are determined as follows: in the frequency domain signal X_l(n), compute the amplitudes P_1,l(n), P_2,l(n), …, P_m,l(n) of the m line spectra at the corresponding frequencies f_1, f_2, …, f_m, each amplitude being a weighted combination of the spectral magnitudes over the bins from L_m' to H_m',

P_m',l(n) = Σ_{j=L_m'}^{H_m'} a_j·|X_l(j)|,

where 0 ≤ f_1 < f_2 < … < f_m < f_s/2, 1 ≤ m < N/2, a_j is a weighting coefficient, and the bounds H_m' and L_m' satisfy f_L_m' ≥ max{0, (f_m' + f_m'-1)/2} and f_H_m' ≤ min{f_s/2, (f_m' + f_m'+1)/2}. This yields the m characteristic frequency domain amplitudes of the frequency domain signal, S_l(n) = [P_1,l(n), P_2,l(n), …, P_m,l(n)].
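The computation can be sketched as follows. This is a hedged interpretation: the text only states that a_j is a weighting coefficient and bounds the bands by the midpoints between neighbouring characteristic frequencies, so the uniform weights (a_j = 1) and the exact band edges used here are assumptions.

```python
import numpy as np

def line_spectrum_amplitudes(spectrum, fs, feature_freqs):
    """Estimate the amplitude P_{m',l} at each characteristic frequency
    f_{m'} of one frequency-domain frame by summing bin magnitudes
    between the midpoints to the neighbouring characteristic
    frequencies (uniform weights a_j = 1 assumed)."""
    n_bins = len(spectrum)
    nyquist = fs / 2.0
    bin_freqs = np.linspace(0.0, nyquist, n_bins)  # frequency of each bin
    amps = []
    for i, f in enumerate(feature_freqs):
        # Band edges: f_Lm' >= max{0, (f_m' + f_m'-1)/2},
        #             f_Hm' <= min{fs/2, (f_m' + f_m'+1)/2}
        lo = 0.0 if i == 0 else (f + feature_freqs[i - 1]) / 2.0
        hi = nyquist if i == len(feature_freqs) - 1 else (f + feature_freqs[i + 1]) / 2.0
        mask = (bin_freqs >= lo) & (bin_freqs <= hi)
        amps.append(np.sum(np.abs(spectrum[mask])))
    return np.array(amps)
```

Choosing the characteristic frequencies f_1 … f_m near the energy of the emitted detection sound keeps the features sensitive to the sound field rather than to background noise.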
In an embodiment, according to the plurality of characteristic frequency domain amplitudes corresponding to each frame of the frequency domain signal, the amplitudes are fused across the frames to obtain one fused characteristic frequency domain amplitude per characteristic frequency, yielding the plurality of fused characteristic frequency domain amplitudes. The fusion of the characteristic frequency domain amplitudes proceeds as follows: the arithmetic mean, the geometric mean or the median of the characteristic frequency domain amplitudes is calculated over the frames, and the result is taken as the fused characteristic frequency domain amplitude. Illustratively, fusing the characteristic frequency domain amplitudes S_l(n) corresponding to each frame frequency domain signal X_l(n) yields the plurality of fused characteristic frequency domain amplitudes S_a(n) = [P_1(n), P_2(n), …, P_m(n)].
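The fusion step can be sketched as follows, using the three statistics named in the text (the function name and the default choice of median are illustrative assumptions):

```python
import numpy as np

def fuse_features(per_frame_amps, method="median"):
    """Fuse the per-frame characteristic amplitudes S_l(n) (one row per
    frame, one column per characteristic frequency) into a single
    vector S_a(n) using the arithmetic mean, geometric mean or median
    taken across frames."""
    a = np.asarray(per_frame_amps, dtype=float)
    if method == "mean":
        return a.mean(axis=0)
    if method == "geomean":
        # geometric mean via the log domain; requires positive amplitudes
        return np.exp(np.log(a).mean(axis=0))
    return np.median(a, axis=0)
```

The median is the most robust of the three to occasional frames corrupted by transient noise, while the arithmetic mean is the cheapest to compute.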
In one embodiment, the target detection sound is sampled to obtain a multi-frame time domain signal of the target detection sound, and the sound field features of the target detection sound are determined according to the time domain amplitudes of the multi-frame time domain signal. Each frame of the time domain signal can be represented as:

x_l(n) = [x(Ml-N+1), …, x(Ml-1), x(Ml)], l = 0, 1, …, L,

where N is the frame length, M is the frame shift, l is the frame index, L is the total number of frames of the target detection sound, the value range of N is 0.001·f_s < N < f_s, the value range of M is 0.01·N < M < 100·N, and f_s is the sampling frequency. It should be noted that the sampling frequency may be set based on actual conditions, and the present application does not limit this.
Specifically, an average value of a time domain amplitude of each frame of time domain signal in a plurality of frames of time domain signals is obtained; and fusing the average values of the multi-frame time domain signals, and determining the fused result as the sound field characteristic of the target detection sound. The fusion mode of the average value of the multi-frame time domain signals is specifically as follows: and acquiring a weighted fusion coefficient of each frame of time domain signal, and fusing the average values of the multiple frames of time domain signals according to the weighted fusion coefficient of each frame of time domain signal and the average value of the time domain amplitude of each frame of time domain signal to obtain the sound field characteristics of the target detection sound. The sound field characteristics of the target detection sound are determined by extracting the time domain amplitude of the time domain signal, the requirement on the calculation performance of the electronic equipment is low, and the sound field characteristics of the target detection sound can be extracted without a lot of calculation resources.
Illustratively, the average value of the time domain amplitude of one frame of the time domain signal x_l(n) is calculated as Q_l(n) = mean(abs(x_l(n))), where abs denotes the absolute value operation and mean denotes the averaging operation. After the average value Q_l(n), l = 1, 2, …, L, of the time domain amplitude of each frame is obtained, the averages of the frames are fused to obtain the sound field feature Q_a(n) of the target detection sound.
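The time-domain variant can be sketched as follows (the uniform default weights are an assumption; the text only says a weighted fusion coefficient is obtained per frame):

```python
import numpy as np

def time_domain_feature(frames, weights=None):
    """Per-frame mean absolute amplitude Q_l(n) = mean(abs(x_l(n))),
    fused across frames by a weighted average into the scalar sound
    field feature Q_a(n).  Uniform weights are used when none are
    given."""
    q = np.mean(np.abs(frames), axis=1)  # Q_l(n) for each frame l
    if weights is None:
        weights = np.ones(len(q)) / len(q)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * q))
```

As the text notes, this path avoids any frequency transform, so it suits devices with little spare compute.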
S103, determining the current environment of the electronic equipment according to the sound field characteristics.
After determining the sound field characteristics of the target detection sound, the current environment in which the electronic apparatus is located is determined based on the sound field characteristics. Wherein the current environment comprises an above-water environment and an underwater environment.
In one embodiment, a sound field feature threshold is obtained, and the current environment where the electronic device is located is determined according to the sound field feature threshold and the sound field feature: if the sound field feature is greater than the sound field feature threshold, the current environment where the electronic device is located is determined to be the underwater environment; if the sound field feature is less than or equal to the sound field feature threshold, the current environment is determined to be the above-water environment. The sound field feature threshold is determined from at least one of a first sound field feature set of the underwater environment and a second sound field feature set of the above-water environment, where the first set comprises a plurality of sound field features of target detection sounds collected in the underwater environment and the second set comprises a plurality of sound field features of target detection sounds collected in the above-water environment.
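The decision rule itself is a single comparison; a sketch (the string labels are illustrative):

```python
def classify_environment(sound_field_feature, threshold):
    """Underwater, reflections off the rigid wall reinforce the sound
    field, so a feature above the threshold indicates an underwater
    environment; otherwise the device is above water."""
    return "underwater" if sound_field_feature > threshold else "above-water"
```

When the feature is a vector rather than a scalar (e.g. S_a(n)), the comparison would be applied to a scalar summary of it, such as its mean.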
Illustratively, the first sound field feature set of the underwater environment is A = {S_water,1, S_water,2, …, S_water,l1}, and the second sound field feature set of the above-water environment is B = {S_air,1, S_air,2, …, S_air,l2}, where l1 and l2 may be set based on actual conditions, which is not particularly limited; optionally, l1 and l2 are greater than or equal to 10. From A and/or B, a sound field feature threshold may be determined.
In one embodiment, a reference sound field feature is obtained, the difference between the sound field feature of the target detection sound and the reference sound field feature is calculated, and the current environment where the electronic device is located is determined according to that difference. The reference sound field feature is determined from the first sound field feature set of the underwater environment or from the second sound field feature set of the above-water environment.
Specifically, if the reference sound field feature is determined according to the first sound field feature set of the underwater environment, then when the difference between the sound field feature of the target detection sound and the reference sound field feature is smaller than a preset difference threshold, it may be determined that the current environment where the electronic device is located is the underwater environment; when the difference is greater than or equal to the preset difference threshold, it may be determined that the current environment is the above-water environment.
Specifically, if the reference sound field feature is determined according to the second sound field feature set of the above-water environment, then when the difference between the sound field feature of the target detection sound and the reference sound field feature is smaller than the preset difference threshold, it may be determined that the current environment where the electronic device is located is the above-water environment; when the difference is greater than or equal to the preset difference threshold, it may be determined that the current environment is the underwater environment.
Specifically, a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment are obtained; calculating a first mean value and a first standard deviation of the first sound field feature set, and calculating a second mean value and a second standard deviation of the second sound field feature set; and determining a sound field characteristic threshold according to the first mean value, the first standard deviation, the second mean value and the second standard deviation.
Illustratively, a first difference between the first mean and the second mean is calculated, and a second difference between the first standard deviation and the second standard deviation is calculated; determining a candidate threshold coefficient meeting a preset condition according to the first difference and the second difference, and taking the maximum candidate threshold coefficient as a target threshold coefficient; calculating a first product of the first standard deviation and the target threshold coefficient, and calculating the sum of the first product and the first mean value; calculating a second product of the second standard deviation and the target threshold coefficient, and calculating the sum of the second product and the second mean value; and determining a sound field characteristic threshold value according to the sum of the first product and the first mean value and the sum of the second product and the second mean value. Wherein the preset condition is that the target threshold coefficient is smaller than a ratio of the first difference to the second difference.
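The threshold derivation can be sketched as below. This is an interpretation, not the exact procedure: the text constrains the coefficient via the mean difference and the standard-deviation difference, whereas the sketch picks a coefficient below the ratio of the mean difference to the *sum* of the standard deviations, which guarantees that the two cluster edges mean2 + k·std2 and mean1 - k·std1 stay separated, and then places the threshold between them.

```python
import numpy as np

def sound_field_threshold(underwater_feats, abovewater_feats, margin=0.9):
    """Derive a decision threshold from labelled sound field feature
    sets.  underwater_feats plays the role of the first set (mean1,
    std1), abovewater_feats the second set (mean2, std2); margin < 1
    keeps the coefficient strictly below the separation ratio."""
    u = np.asarray(underwater_feats, dtype=float)
    a = np.asarray(abovewater_feats, dtype=float)
    mu1, sd1 = u.mean(), u.std()
    mu2, sd2 = a.mean(), a.std()
    k = margin * (mu1 - mu2) / (sd1 + sd2)  # coefficient below the ratio
    upper = mu1 - k * sd1   # lower edge of the underwater cluster
    lower = mu2 + k * sd2   # upper edge of the above-water cluster
    return (upper + lower) / 2.0
```

With well-separated training sets, any threshold between the two edges classifies both sets correctly; the midpoint is one reasonable choice.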
For example, the multiframe frequency domain amplitudes of the underwater environment form a matrix of per-frame amplitudes (shown only as an image in the source), and the sound field feature of the underwater environment obtained after fusion is [9458104]; the multiframe frequency domain amplitudes of the above-water environment likewise form a matrix (shown only as an image in the source), and the sound field feature of the above-water environment obtained after fusion is [16711]. The first sound field feature set of the underwater environment and the second sound field feature set of the above-water environment are obtained through multiple acquisitions, and the sound field feature threshold is then determined from the first sound field feature set and/or the second sound field feature set of the above-water environment.
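The fusion step can be sketched as follows; the exact fusion operation is not named in the text, so a per-bin arithmetic mean across frames is assumed:

```python
def fuse_frequency_features(frame_amplitudes):
    """Fuse per-frame characteristic frequency-domain amplitudes.

    frame_amplitudes: list of frames, each a list of amplitudes at the
    characteristic frequencies (one entry per characteristic frequency).
    Fusion across frames is assumed to be an arithmetic mean; the source
    only says the per-frame amplitudes are "fused".
    """
    n_frames = len(frame_amplitudes)
    n_bins = len(frame_amplitudes[0])
    return [sum(frame[b] for frame in frame_amplitudes) / n_frames
            for b in range(n_bins)]
```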
In the environment detection method provided by this embodiment, the target detection sound is obtained by collecting the environment detection sound; feature extraction is performed on the target detection sound to obtain the sound field feature of the target detection sound; and the current environment in which the electronic device is located is determined based on the sound field feature.
Referring to fig. 4, fig. 4 is a flowchart illustrating steps of another environment detection method according to an embodiment of the present application.
Specifically, as shown in fig. 4, the environment detection method includes steps S201 to S204.
S201, an environment detection sound is emitted through a sound generator of the electronic device, and the environment detection sound is collected through a sound receiver of the electronic device so as to obtain a collected target detection sound.
After the electronic device starts the environment detection function, it emits the environment detection sound through the sounder at preset time intervals or in real time, and collects the environment detection sound through the sound receiver of the electronic device to obtain the target detection sound. It should be noted that the preset time may be set based on actual situations, and the present application does not limit this.
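A minimal sketch of this emit-and-collect loop; `emit_probe`, `record`, and `classify` are hypothetical callables standing in for the sounder, the sound receiver, and the downstream detection logic, and the interval value is an assumption:

```python
import time

PROBE_INTERVAL_S = 5.0  # the "preset time"; this value is an assumption

def detection_loop(emit_probe, record, classify, once=False):
    """Periodically emit the environment detection sound and collect it.

    emit_probe(), record() and classify() are hypothetical placeholders
    for the device's sounder, receiver and detector.
    """
    while True:
        emit_probe()             # sounder emits the environment detection sound
        target = record()        # receiver collects the target detection sound
        env = classify(target)   # feature extraction + environment decision
        if once:
            return env
        # env could be acted on here (e.g. adjust imaging parameters)
        time.sleep(PROBE_INTERVAL_S)
```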
S202, extracting the characteristics of the target detection sound to obtain the sound field characteristics of the target detection sound.
After the target detection sound is obtained, feature extraction is performed on it to obtain the sound field feature of the target detection sound. In the underwater rigid-wall environment, the sound waves emitted by the sound source are continuously reflected and superposed at the wall surfaces, so that the whole sound field is enhanced; in the above-water non-rigid-wall environment, the sound waves emitted by the sound source are attenuated or even leak through the non-rigid walls, the attenuation being frequency-dependent, so the sound field of the above-water non-rigid-wall environment is weaker than that of the underwater rigid-wall environment. The sound field feature comprises at least one of a time domain amplitude and a frequency domain amplitude.
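A stdlib-only sketch of the frequency-domain path (framing, windowing, magnitude spectrum). The frame length, hop, and Hann window are assumptions, since the text only specifies "a preset window function" and a frequency domain transformation; a naive O(n²) DFT stands in for an FFT:

```python
import cmath
import math

def frame_signal(samples, frame_len, hop):
    """Split the sampled target detection sound into overlapping frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def dft_amplitudes(frame):
    """Magnitude spectrum of one windowed frame (naive DFT for clarity)."""
    n = len(frame)
    # Hann window assumed as the "preset window function"
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]
    x = [s * w for s, w in zip(frame, hann)]
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(n // 2 + 1)]
```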
S203, inputting the sound field features into a preset environment discrimination model to obtain an environment type label output by the preset environment discrimination model.
The preset environment discrimination model is obtained by optimization over a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment. The environment discrimination model may be a binary classification model or a neural network model; the environment discrimination model is obtained by performing parameter optimization on the binary classification model or the neural network model with the first sound field feature set of the underwater environment and the second sound field feature set of the above-water environment. The neural network models include, but are not limited to, convolutional neural networks, recurrent neural networks, and recurrent convolutional neural networks.
After the target detection sound is obtained, the sound field feature is input into the preset environment discrimination model to obtain the environment type label output by the preset environment discrimination model. The environment type label comprises a first preset label corresponding to the underwater environment and a second preset label corresponding to the above-water environment. The following explains the construction process of the environment discrimination model by taking a binary classification model as an example.
Sound field features of the target detection sound are calculated multiple times in advance for both the above-water environment and the underwater environment, yielding a first sound field feature set of the underwater environment $A = \{S_{\mathrm{water},1}, S_{\mathrm{water},2}, \ldots, S_{\mathrm{water},l_1}\}$ and a second sound field feature set of the above-water environment $B = \{S_{\mathrm{air},1}, S_{\mathrm{air},2}, \ldots, S_{\mathrm{air},l_2}\}$. The samples in set $A$ are labeled $y_i = +1$ and the samples in set $B$ are labeled $y_j = -1$. When $l_1$ and $l_2$ are both larger than 10, all the data are randomly mixed to obtain the sample data $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $N = l_1 + l_2$.
A kernel function $K(x, z)$ and a penalty parameter $C$ are selected, where the kernel function can be chosen according to actual conditions; this specification takes the Gaussian kernel function

$$K(x, z) = \exp\!\left(-\frac{\|x - z\|^2}{2\sigma^2}\right)$$

as an example. Using the Gaussian kernel function and the parameter $C$, the following convex quadratic programming problem is constructed over the sample data $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$:

$$\min_{\alpha}\; \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^{N} \alpha_i \qquad \text{s.t.}\; \sum_{i=1}^{N} \alpha_i y_i = 0,\; 0 \le \alpha_i \le C,\; i = 1, \ldots, N.$$

Solving it over the sample data $T$ gives the optimal solution

$$\alpha^* = (\alpha_1^*, \alpha_2^*, \ldots, \alpha_N^*)^{\mathrm{T}}.$$

A positive component $\alpha_j^*$ of $\alpha^*$ with $0 < \alpha_j^* < C$ is then selected, and

$$b^* = y_j - \sum_{i=1}^{N} \alpha_i^* y_i K(x_i, x_j)$$

is calculated. Finally, the decision function is constructed:

$$f(x) = \operatorname{sign}\!\left(\sum_{i=1}^{N} \alpha_i^* y_i K(x, x_i) + b^*\right).$$
The constructed decision function is the environment discrimination model. It can be understood that, when the sound field feature of the target detection sound is input into the decision function, if the output result is +1 the current environment of the electronic device can be determined to be the underwater environment, and if the output result is -1 the current environment can be determined to be the above-water environment.
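Once the coefficients α*, the bias b*, and the support set are known, evaluating the decision function is straightforward. The sketch below uses the Gaussian kernel from the text; the coefficients in the usage are toy values purely for illustration:

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2.0 * sigma ** 2))

def decision(x, alphas, labels, support_vectors, b, sigma=1.0):
    """f(x) = sign(sum_i alpha_i * y_i * K(x, x_i) + b).

    Returns +1 (underwater environment) or -1 (above-water environment).
    """
    s = sum(a * y * gaussian_kernel(x, sv, sigma)
            for a, y, sv in zip(alphas, labels, support_vectors)) + b
    return 1 if s >= 0 else -1
```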
S204, determining the current environment of the electronic equipment according to the environment type label.
Specifically, it is determined whether the environment type label output by the environment discrimination model is the first preset label or the second preset label; if the environment type label output by the environment discrimination model is the first preset label, it is determined that the current environment in which the electronic device is located is the underwater environment; and if the environment type label output by the environment discrimination model is the second preset label, it is determined that the current environment is the above-water environment. Optionally, the first preset label is +1 and the second preset label is -1.
In one embodiment, when a camera of the electronic device is determined to be in an underwater environment, imaging parameters of the camera are adjusted, wherein the imaging parameters include at least one of: exposure duration, aperture value, sensitivity value, and exposure gain. Adjusting the imaging parameters of the camera when it is in an underwater environment can improve the quality of the images it shoots, greatly improving the user experience.
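As an illustration only, the adjustment could be a simple profile switch; the parameter values below are placeholders, not values from the source, which only requires that at least one of the listed imaging parameters be adjusted:

```python
# Placeholder parameter sets; the actual values are device-specific and
# are assumptions here. The method only requires that at least one of
# exposure duration, aperture value, sensitivity or exposure gain be
# adjusted when the camera is detected to be underwater.
IMAGING_PROFILES = {
    "above_water": {"exposure_ms": 10, "aperture_f": 2.8, "iso": 100, "gain_db": 0},
    "underwater":  {"exposure_ms": 20, "aperture_f": 2.0, "iso": 400, "gain_db": 6},
}

def imaging_parameters(environment):
    """Select an imaging parameter profile for the detected environment."""
    return IMAGING_PROFILES[environment]
```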
In the environment detection method provided by this embodiment, the target detection sound is obtained by collecting the environment detection sound; feature extraction is performed on the target detection sound to obtain its sound field feature; and based on the sound field feature and the environment discrimination model, the current environment in which the electronic device is located can be determined quickly and accurately, greatly improving the user experience.
Referring to fig. 5, fig. 5 is a schematic block diagram of an electronic device according to an embodiment of the present application. In one embodiment, the electronic device includes, but is not limited to, a camera, a mobile phone, a tablet computer, a handheld gimbal, and the like. Further, the electronic device 300 comprises a processor 301, a sounder 302 and a sound receiver 303, the processor 301, the sounder 302 and the sound receiver 303 being connected by a bus 304, the bus 304 being, for example, an I2C (Inter-Integrated Circuit) bus.
Specifically, the Processor 301 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the sounder 302 includes, but is not limited to, a speaker and a buzzer, and the sound receiver 303 includes, but is not limited to, a single microphone and dual microphones.
Wherein, the sounder 302 is used for emitting environment detection sound;
the sound receiver 303 is configured to collect the environmental detection sound to obtain a collected target detection sound;
the processor 301 is configured to implement the following steps:
acquiring the target detection sound collected by the sound receiver;
performing feature extraction on the target detection sound to acquire sound field features of the target detection sound;
determining the current environment of the electronic equipment according to the sound field characteristics, wherein the current environment comprises an above-water environment and an underwater environment.
Optionally, when the processor performs feature extraction on the target detection sound to obtain the sound field feature of the target detection sound, the processor is configured to:
correcting the target detection sound to enable the target detection sound subjected to correction processing to be time-synchronous with the environment detection sound;
and performing feature extraction on the corrected target detection sound to obtain the sound field features of the target detection sound.
Optionally, when the processor performs feature extraction on the target detection sound to obtain the sound field feature of the target detection sound, the processor is configured to:
sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound;
performing frequency domain transformation on the multi-frame time domain signal to obtain a multi-frame frequency domain signal;
and determining the sound field characteristics of the target detection sound according to the multi-frame frequency domain signals.
Optionally, when the processor implements frequency domain transformation on the multiple frames of time domain signals to obtain multiple frames of frequency domain signals, the processor is configured to implement:
windowing the multi-frame time domain signal according to a preset window function;
and carrying out frequency domain transformation on the multi-frame time domain signal after the windowing processing so as to obtain a multi-frame frequency domain signal.
Optionally, the processor, when implementing determining the sound field characteristic of the target detection sound according to the multiple frames of frequency domain signals, is configured to implement:
determining a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal according to each frame frequency domain signal in the multi-frame frequency domain signals;
fusing a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal in the multi-frame frequency domain signals to obtain a plurality of fused characteristic frequency domain amplitudes;
and determining the plurality of fused characteristic frequency domain amplitude values as the sound field characteristic of the target detection sound.
Optionally, when the processor performs feature extraction on the target detection sound to obtain the sound field feature of the target detection sound, the processor is configured to:
sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound;
and determining the sound field characteristics of the target detection sound according to the time domain amplitude of the multi-frame time domain signal.
Optionally, when the processor is implemented to determine the sound field characteristic of the target detection sound according to the time-domain amplitude of the multiple frames of time-domain signals, the processor is configured to implement:
acquiring the average value of the time domain amplitude of each frame of time domain signal in the multi-frame time domain signal;
and fusing the average values of the multi-frame time domain signals, and determining the fused result as the sound field characteristic of the target detection sound.
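The time-domain path described above (per-frame mean of the time domain amplitudes, then fusion across frames) can be sketched as follows; the fusion operation is again assumed to be an arithmetic mean:

```python
def time_domain_feature(frames):
    """Sound field feature from time-domain amplitudes.

    For each frame, take the mean absolute amplitude; then fuse the
    per-frame means across frames (an arithmetic mean is assumed as
    the fusion operation, which the source leaves unspecified).
    """
    per_frame = [sum(abs(s) for s in frame) / len(frame) for frame in frames]
    return sum(per_frame) / len(per_frame)
```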
Optionally, the processor is configured to, when determining the current environment of the electronic device according to the sound field features, implement:
acquiring a sound field feature threshold, wherein the sound field feature threshold is determined according to at least one of a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment;
determining the current environment of the electronic equipment according to the sound field characteristic threshold and the sound field characteristic.
Optionally, the processor, when implementing to obtain the sound field feature threshold, is configured to implement:
acquiring a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment;
calculating a first mean value and a first standard deviation of the first sound field feature set, and calculating a second mean value and a second standard deviation of the second sound field feature set;
and determining a sound field characteristic threshold according to the first mean value, the first standard deviation, the second mean value and the second standard deviation.
Optionally, the processor is configured to, when determining the current environment of the electronic device according to the sound field feature threshold and the sound field feature, implement:
if the sound field characteristic is larger than the sound field characteristic threshold value, determining that the current environment where the electronic equipment is located is an underwater environment;
and if the sound field characteristic is smaller than or equal to the sound field characteristic threshold value, determining that the current environment where the electronic equipment is located is an above-water environment.
Optionally, the processor is configured to, when determining the current environment of the electronic device according to the sound field features, implement:
inputting the sound field characteristics to a preset environment discrimination model to obtain an environment type label output by the preset environment discrimination model;
and determining the current environment of the electronic equipment according to the environment type label.
Optionally, the preset environment discrimination model is obtained by optimizing according to a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment.
Optionally, the electronic device comprises a camera; the processor is further configured to implement:
adjusting imaging parameters of the camera when it is determined that the camera is in an underwater environment.
Optionally, the sounder comprises a speaker, buzzer or motor and the sound receiver comprises a microphone.
Optionally, the sound field features comprise at least one of time domain amplitude values and frequency domain amplitude values.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working process of the electronic device described above may refer to the corresponding process in the foregoing embodiment of the environment detection method, and is not described herein again.
In an embodiment of the present application, a computer-readable storage medium is further provided, where a computer program is stored in the computer-readable storage medium, where the computer program includes program instructions, and the processor executes the program instructions to implement the steps of the environment detection method provided in the foregoing embodiment.
The computer-readable storage medium may be an internal storage unit of the electronic device according to any of the foregoing embodiments, for example, a hard disk or a memory of the electronic device. The computer readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the electronic device.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (31)

1. An environment detection method, comprising:
sending an environment detection sound through a sound generator of electronic equipment, and collecting the environment detection sound through a sound receiver of the electronic equipment to obtain a collected target detection sound;
performing feature extraction on the target detection sound to acquire sound field features of the target detection sound;
determining the current environment of the electronic equipment according to the sound field characteristics, wherein the current environment comprises an above-water environment and an underwater environment.
2. The environment detection method according to claim 1, wherein the performing feature extraction on the target detection sound to obtain the sound field feature of the target detection sound comprises:
correcting the target detection sound to enable the target detection sound subjected to correction processing to be time-synchronous with the environment detection sound;
and performing feature extraction on the corrected target detection sound to obtain the sound field features of the target detection sound.
3. The environment detection method according to claim 1 or 2, wherein the performing feature extraction on the target detection sound to obtain the sound field feature of the target detection sound comprises:
sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound;
performing frequency domain transformation on the multi-frame time domain signal to obtain a multi-frame frequency domain signal;
and determining the sound field characteristics of the target detection sound according to the multi-frame frequency domain signals.
4. The environment detection method according to claim 3, wherein the frequency-domain transforming the plurality of frames of time-domain signals to obtain a plurality of frames of frequency-domain signals comprises:
windowing the multi-frame time domain signal according to a preset window function;
and carrying out frequency domain transformation on the multi-frame time domain signal after the windowing processing so as to obtain a multi-frame frequency domain signal.
5. The environment detection method according to claim 3 or 4, wherein the determining of the sound field characteristic of the target detection sound from the plurality of frames of frequency domain signals comprises:
determining a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal according to each frame frequency domain signal in the multi-frame frequency domain signals;
fusing a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal in the multi-frame frequency domain signals to obtain a plurality of fused characteristic frequency domain amplitudes;
and determining the plurality of fused characteristic frequency domain amplitude values as the sound field characteristic of the target detection sound.
6. The environment detection method according to any one of claims 1 to 5, wherein the performing feature extraction on the target detection sound to obtain the sound field feature of the target detection sound comprises:
sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound;
and determining the sound field characteristics of the target detection sound according to the time domain amplitude of the multi-frame time domain signal.
7. The environment detection method according to claim 6, wherein the determining the sound field characteristic of the target detection sound according to the time-domain amplitude of the multiple frames of time-domain signals comprises:
acquiring the average value of the time domain amplitude of each frame of time domain signal in the multi-frame time domain signal;
and fusing the average values of the multi-frame time domain signals, and determining the fused result as the sound field characteristic of the target detection sound.
8. The environment detection method according to any one of claims 1 to 7, wherein the determining the current environment in which the electronic device is located according to the sound field feature comprises:
acquiring a sound field feature threshold, wherein the sound field feature threshold is determined according to at least one of a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment;
determining the current environment of the electronic equipment according to the sound field characteristic threshold and the sound field characteristic.
9. The environment detection method according to claim 8, wherein the obtaining of the sound field feature threshold comprises:
acquiring a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment;
calculating a first mean value and a first standard deviation of the first sound field feature set, and calculating a second mean value and a second standard deviation of the second sound field feature set;
and determining a sound field characteristic threshold according to the first mean value, the first standard deviation, the second mean value and the second standard deviation.
10. The environment detection method according to claim 8 or 9, wherein the determining the current environment of the electronic device according to the sound field feature threshold and the sound field feature comprises:
if the sound field characteristic is larger than the sound field characteristic threshold value, determining that the current environment where the electronic equipment is located is an underwater environment;
and if the sound field characteristic is smaller than or equal to the sound field characteristic threshold value, determining that the current environment where the electronic equipment is located is an above-water environment.
11. The environment detection method according to any one of claims 1 to 10, wherein the determining a current environment in which the electronic device is located according to the sound field feature comprises:
inputting the sound field characteristics to a preset environment discrimination model to obtain an environment type label output by the preset environment discrimination model;
and determining the current environment of the electronic equipment according to the environment type label.
12. The environment detection method according to claim 11, wherein the preset environment discrimination model is optimized according to a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment.
13. The environment detection method according to any one of claims 1 to 12, characterized in that the electronic apparatus includes a camera; the method further comprises the following steps:
adjusting imaging parameters of the camera when it is determined that the camera is in an underwater environment.
14. The environment detection method according to claim 1 or 13, wherein the sound generator comprises a speaker, a buzzer or a motor, and the sound receiver comprises a microphone.
15. The environment detection method according to claim 1 or 13, wherein the sound field characteristics include at least one of time domain amplitude and frequency domain amplitude.
16. An electronic device, comprising a sound generator, a sound receiver, and a processor;
the sounder is used for emitting environment detection sound;
the sound receiver is used for collecting the environment detection sound to obtain a collected target detection sound;
the processor is configured to implement the steps of:
performing feature extraction on the target detection sound to acquire sound field features of the target detection sound;
determining the current environment of the electronic equipment according to the sound field characteristics, wherein the current environment comprises an above-water environment and an underwater environment.
17. The electronic device according to claim 16, wherein the processor, when implementing feature extraction on the target detection sound to obtain the sound field feature of the target detection sound, is configured to implement:
correcting the target detection sound to enable the target detection sound subjected to correction processing to be time-synchronous with the environment detection sound;
and performing feature extraction on the corrected target detection sound to obtain the sound field features of the target detection sound.
18. The electronic device according to claim 16 or 17, wherein the processor, when implementing feature extraction on the target detection sound to obtain the sound field feature of the target detection sound, is configured to implement:
sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound;
performing frequency domain transformation on the multi-frame time domain signal to obtain a multi-frame frequency domain signal;
and determining the sound field characteristics of the target detection sound according to the multi-frame frequency domain signals.
19. The electronic device of claim 18, wherein the processor, when implementing the frequency domain transform on the plurality of frames of time domain signals to obtain a plurality of frames of frequency domain signals, is configured to implement:
windowing the multi-frame time domain signal according to a preset window function;
and carrying out frequency domain transformation on the multi-frame time domain signal after the windowing processing so as to obtain a multi-frame frequency domain signal.
20. The electronic device according to claim 18 or 19, wherein the processor, when implementing determining the sound field characteristic of the target detection sound from the plurality of frames of frequency domain signals, is configured to implement:
determining a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal according to each frame frequency domain signal in the multi-frame frequency domain signals;
fusing a plurality of characteristic frequency domain amplitudes corresponding to each frame frequency domain signal in the multi-frame frequency domain signals to obtain a plurality of fused characteristic frequency domain amplitudes;
and determining the plurality of fused characteristic frequency domain amplitude values as the sound field characteristic of the target detection sound.
21. The electronic device according to any one of claims 16 to 20, wherein the processor, when implementing feature extraction on the target detection sound to obtain the sound field feature of the target detection sound, is configured to implement:
sampling the target detection sound to acquire a multi-frame time domain signal of the target detection sound;
and determining the sound field characteristics of the target detection sound according to the time domain amplitude of the multi-frame time domain signal.
22. The electronic device of claim 21, wherein the processor, when enabled to determine the sound field characteristic of the target detection sound according to the time-domain amplitude values of the plurality of frames of time-domain signals, is configured to enable:
acquiring the average value of the time domain amplitude of each frame of time domain signal in the multi-frame time domain signal;
and fusing the average values of the multi-frame time domain signals, and determining the fused result as the sound field characteristic of the target detection sound.
23. The electronic device according to any of the claims 16 to 22, wherein the processor is configured to, when determining the current environment of the electronic device according to the sound field characteristics, implement:
acquiring a sound field feature threshold, wherein the sound field feature threshold is determined according to at least one of a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment;
determining the current environment of the electronic equipment according to the sound field characteristic threshold and the sound field characteristic.
24. The electronic device of claim 23, wherein the processor, when implemented to obtain the sound field feature threshold, is configured to implement:
acquiring a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment;
calculating a first mean value and a first standard deviation of the first sound field feature set, and calculating a second mean value and a second standard deviation of the second sound field feature set;
and determining a sound field characteristic threshold according to the first mean value, the first standard deviation, the second mean value and the second standard deviation.
25. The electronic device of claim 23 or 24, wherein the processor is configured to, when determining the current environment of the electronic device according to the sound field feature threshold and the sound field feature, implement:
if the sound field characteristic is larger than the sound field characteristic threshold value, determining that the current environment where the electronic equipment is located is an underwater environment;
and if the sound field characteristic is smaller than or equal to the sound field characteristic threshold value, determining that the current environment where the electronic equipment is located is an above-water environment.
26. The electronic device according to any of the claims 16 to 25, wherein the processor is configured to, when determining the current environment of the electronic device according to the sound field characteristics, perform:
inputting the sound field characteristics to a preset environment discrimination model to obtain an environment type label output by the preset environment discrimination model;
and determining the current environment of the electronic equipment according to the environment type label.
27. The electronic device according to claim 26, wherein the preset environment discrimination model is optimized according to a first sound field feature set of an underwater environment and a second sound field feature set of an above-water environment.
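Claims 26–27 leave the model family of the "preset environment discrimination model" unspecified; they only require that it be optimized from the two feature sets and output an environment type label. One toy stand-in consistent with that structure is a two-class Gaussian classifier; the class, method, and label names below are assumptions, not the patented model.

```python
from math import log, pi
from statistics import mean, stdev

class EnvironmentModel:
    """Illustrative discrimination model for claims 26-27: fit one
    Gaussian per environment from its feature set, then label a new
    sound field feature by maximum log-likelihood."""

    def fit(self, underwater_feats, abovewater_feats):
        # "Optimized according to" the two sets: store per-class stats.
        self.params = {
            "underwater": (mean(underwater_feats), stdev(underwater_feats)),
            "above-water": (mean(abovewater_feats), stdev(abovewater_feats)),
        }
        return self

    def predict(self, feature):
        # Environment type label with the highest Gaussian log-likelihood.
        def loglik(m, s):
            return -log(s) - 0.5 * log(2 * pi) - (feature - m) ** 2 / (2 * s ** 2)
        return max(self.params, key=lambda k: loglik(*self.params[k]))
```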
28. The electronic device of any one of claims 16 to 27, wherein the electronic device comprises a camera, and the processor is further configured to implement:
adjusting imaging parameters of the camera when it is determined that the electronic device is in an underwater environment.
29. The electronic device of claim 16 or 28, wherein the sound generator comprises a speaker, a buzzer, or a motor, and the sound receiver comprises a microphone.
30. The electronic device of claim 16 or 28, wherein the sound field feature comprises at least one of a time domain amplitude and a frequency domain amplitude.
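Claim 30 names the two candidate sound field features: a time domain amplitude and a frequency domain amplitude. A sketch of extracting both from one received frame — the peak sample amplitude, and the magnitude of a single DFT bin via the Goertzel algorithm. Evaluating the bin at the emitted probe-tone frequency, and every name below, are assumptions for illustration.

```python
from math import cos, pi, sqrt

def time_domain_amplitude(samples):
    # Peak absolute amplitude of the received frame.
    return max(abs(x) for x in samples)

def frequency_domain_amplitude(samples, freq_hz, sample_rate):
    """Magnitude of one DFT bin via the Goertzel algorithm, here
    assumed to be evaluated at the probe-tone frequency."""
    n = len(samples)
    k = round(n * freq_hz / sample_rate)   # nearest DFT bin index
    coeff = 2 * cos(2 * pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return sqrt(max(power, 0.0))
```

For a pure sine exactly on bin k, the frequency-domain amplitude equals N/2 times the sine's amplitude.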
31. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the environment detection method according to any one of claims 1 to 15.
CN201980059247.1A 2019-11-29 2019-11-29 Environment detection method, electronic device and computer-readable storage medium Pending CN112868061A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/122170 WO2021102993A1 (en) 2019-11-29 2019-11-29 Environment detection method, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112868061A true CN112868061A (en) 2021-05-28

Family

ID=75995574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980059247.1A Pending CN112868061A (en) 2019-11-29 2019-11-29 Environment detection method, electronic device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112868061A (en)
WO (1) WO2021102993A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140207460A1 (en) * 2013-01-24 2014-07-24 Huawei Device Co., Ltd. Voice identification method and apparatus
CN107144341A (en) * 2017-04-28 2017-09-08 珠海格力电器股份有限公司 Environment monitoring method and device and air conditioner with device
CN107820037A (en) * 2016-09-14 2018-03-20 南京中兴新软件有限责任公司 The methods, devices and systems of audio signal, image procossing
CN108919277A (en) * 2018-07-02 2018-11-30 深圳米唐科技有限公司 Indoor and outdoor surroundings recognition methods, system and storage medium based on sub- ultrasonic wave
CN109655835A (en) * 2018-10-15 2019-04-19 浙江天地人科技有限公司 A kind of detection method and device of channel environment variation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135619A (en) * 2010-12-06 2011-07-27 王茂森 Biosonar sounding device and method
CN104808209A (en) * 2015-05-13 2015-07-29 集怡嘉数码科技(深圳)有限公司 Method and device for detecting obstacle
CN105738908B (en) * 2016-01-29 2018-08-24 宇龙计算机通信科技(深圳)有限公司 A kind of anticollision method for early warning, device and earphone


Also Published As

Publication number Publication date
WO2021102993A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
CN108899044B (en) Voice signal processing method and device
CN110223680B (en) Voice processing method, voice recognition device, voice recognition system and electronic equipment
US10602267B2 (en) Sound signal processing apparatus and method for enhancing a sound signal
CN110970057B (en) Sound processing method, device and equipment
CN106664486B (en) Method and apparatus for wind noise detection
US9269367B2 (en) Processing audio signals during a communication event
US11941968B2 (en) Systems and methods for identifying an acoustic source based on observed sound
JP6703525B2 (en) Method and device for enhancing sound source
CN110515085B (en) Ultrasonic processing method, ultrasonic processing device, electronic device, and computer-readable medium
US9131295B2 (en) Multi-microphone audio source separation based on combined statistical angle distributions
EP3360137B1 (en) Identifying sound from a source of interest based on multiple audio feeds
EP2552131A2 (en) Reverberation suppression device, method, and program for a mobile terminal device
CN112309417B (en) Method, device, system and readable medium for processing audio signal with wind noise suppression
CN113192527A (en) Method, apparatus, electronic device and storage medium for cancelling echo
CN109756818B (en) Dual-microphone noise reduction method and device, storage medium and electronic equipment
JP6265903B2 (en) Signal noise attenuation
WO2022121182A1 (en) Voice activity detection method and apparatus, and device and computer-readable storage medium
CN110169082A (en) Combining audio signals output
JP2014532891A (en) Audio signal noise attenuation
CN112969130A (en) Audio signal processing method and device and electronic equipment
CN116959471A (en) Voice enhancement method, training method of voice enhancement network and electronic equipment
CN112868061A (en) Environment detection method, electronic device and computer-readable storage medium
US10748554B2 (en) Audio source identification
CN112201267A (en) Audio processing method and device, electronic equipment and storage medium
US20230186943A1 (en) Voice activity detection method and apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination