CN117129998A - Near-ultrasonic-based non-line-of-sight signal identification method - Google Patents

Near-ultrasonic-based non-line-of-sight signal identification method

Info

Publication number
CN117129998A
CN117129998A (application CN202311101412.XA)
Authority
CN
China
Prior art keywords
signal
line
sight
expression
intermediate frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311101412.XA
Other languages
Chinese (zh)
Inventor
王智 (Wang Zhi)
贾乃征 (Jia Naizheng)
崔维蒙 (Cui Weimeng)
王宇威 (Wang Yuwei)
刘光耀 (Liu Guangyao)
薛灿 (Xue Can)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202311101412.XA priority Critical patent/CN117129998A/en
Publication of CN117129998A publication Critical patent/CN117129998A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/02 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems, using reflection of acoustic waves
    • G01S 15/06 Systems determining the position data of a target
    • G01S 15/08 Systems for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S 7/52004 Means for monitoring or calibrating
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S 7/539 Details of systems according to group G01S15/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The application discloses a near-ultrasonic-based non-line-of-sight signal identification method, which comprises the following steps: collecting audio positioning data through an audio collection device; preprocessing the audio positioning data to obtain an intermediate frequency signal spectrum; extracting features from the intermediate frequency signal spectrum to obtain a feature set; and inputting the feature set into a machine learning Xgboost model, which outputs the non-line-of-sight recognition result. Because acoustic non-line-of-sight signals typically exhibit distinctive channel characteristics, the application selects channel features as the model input and uses Xgboost to improve the fitting of these nonlinear features, thereby exploiting the spatio-temporal correlation of non-line-of-sight signals while keeping recognition complexity low. With this method, very high accuracy can be obtained for cross-room acoustic non-line-of-sight identification.

Description

Near-ultrasonic-based non-line-of-sight signal identification method
Technical Field
The application relates to the fields of indoor positioning, signal processing and artificial intelligence, in particular to a near-ultrasonic-based non-line-of-sight signal identification method.
Background
Currently, GNSS positioning systems are widely used. In indoor environments, however, GNSS signals are blocked by the surrounding structures and cannot be used for positioning, which drives the need for indoor positioning technology. In recent years, near-ultrasonic positioning has become a popular indoor positioning technology owing to its wide coverage, strong robustness, high positioning precision and high device compatibility.
However, the propagation of acoustic signals is strongly affected in non-line-of-sight (NLOS) environments, resulting in signal attenuation, distortion or complete loss. To address this problem, near-ultrasonic non-line-of-sight signal recognition techniques have been developed in recent years. These techniques acquire signals with a smartphone, process them, and apply machine-learning classification to identify non-line-of-sight signals.
In academia, the following decision schemes are common:
Most algorithms compare the previous ranging value with the current ranging value to detect outliers; if an outlier is detected, the NLOS signal is discarded. Some algorithms additionally use an optimized positioning algorithm to mitigate the positioning error caused by non-line-of-sight signals. However, this approach requires algorithms of considerable complexity as well as high-precision ranging results, making it unsuitable for large-scale use.
Another approach uses channel quality, channel spectrum models and similar quantities as samples for machine learning and deep learning, achieving high-accuracy non-line-of-sight sample identification. Computing the generalized cross-correlation spectrum over a large amount of data enables one-dimensional identification from the channel impulse response, while extracting spectral or wavelet-transform features enables identification from two-dimensional information. With the wide adoption of deep-learning frameworks, this has also become one of the popular methods.
Although near-ultrasonic non-line-of-sight signal recognition technology has made some progress, the prior art still suffers from a low signal recognition rate; in particular, signals crossing rooms cannot be recognized reliably. The application therefore provides a near-ultrasonic non-line-of-sight signal recognition method that aims to improve the recognition rate of non-line-of-sight near-ultrasonic positioning signals in the cross-room case and thereby improve the performance and stability of the technology.
Disclosure of Invention
The application aims at overcoming the defects of the prior art and provides a near-ultrasonic-based non-line-of-sight signal identification method.
In order to achieve the above purpose, the application provides a near-ultrasonic-based non-line-of-sight signal identification method, which comprises the following steps:
s1, acquiring audio positioning data;
s2, preprocessing the audio positioning data to obtain an intermediate frequency signal spectrum;
s3, carrying out characteristic acquisition on the intermediate frequency signal spectrum to obtain a characteristic set;
s4, inputting the feature set into an Xgboost identifier, and outputting a non-line-of-sight identification result.
Further, the step S1 specifically includes: acquiring modulation signals through an audio acquisition device to obtain mixed audio data of direct line-of-sight signals, non-line-of-sight diffraction signals and reflected signals, wherein the mixed audio data is audio positioning data; the audio acquisition device comprises a microphone and a smart phone microphone.
Further, the step S2 includes the following sub-steps:
S21, cross-correlating the audio positioning data with an original reference signal to obtain a theoretical arrival time; opening an extraction window slightly before the theoretical arrival time and closing it slightly after, so that the complete signal is extracted; upsampling the complete signal by spline interpolation, raising the sampling rate to twice the original rate, to obtain an upsampled signal;
S22, computing the product of the up-sampled signal and the original reference signal (itself up-sampled by spline interpolation) to obtain an intermediate frequency signal, wherein the intermediate frequency signal expression is:

s_mix(t) = s(t)·r(t), with s(t) = cos(2π(f_min·t + (B/(2T))·t²)), r(t) = A·s(t − τ_d), τ_d = d/c,

whose low-frequency component is (A/2)·cos(2π((B·τ_d/T)·t + f_min·τ_d − (B·τ_d²)/(2T)));

wherein s_mix(t) is the intermediate frequency signal, s(t) is the original signal sent by the acoustic positioning base station, r is the signal received at the receiving end (with amplitude A), τ_d is the propagation arrival time of the acoustic signal, B is the bandwidth, d is the acoustic signal transmission distance, c is the speed of sound, T is the signal duration, and f_min is the lowest frequency value of the positioning acoustic signal;
s23, performing FFT operation on the intermediate frequency signal obtained in the step S22 to obtain an intermediate frequency spectrum;
the operation window used by the FFT is a Nuttall window, and the expression of the Nuttall window is:

w(n) = 0.355768 − 0.487396·cos(2πn/(N−1)) + 0.144232·cos(4πn/(N−1)) − 0.012604·cos(6πn/(N−1)), 0 ≤ n ≤ N−1,

where n is the signal sampling point index, and N is the total signal length.
Further, in the step S3, the feature set includes the rise time, first-arrival-path arrival time, number of peaks, average additional delay, average root-mean-square delay, kurtosis coefficient, signal energy and strongest-path energy.
Further, the rise time t_rise is expressed as:

t_rise = t_H − t_L

wherein t_H = argmax_i |h(i)| is the time of the maximum of the spectrum h, t_L = min{ i : |h(i)| > λ·max_i|h(i)| } is the arrival time starting point, λ is the signal effective-segment extraction threshold, i denotes the current signal index, and h denotes the obtained intermediate frequency spectrum;
the first arrival path arrival time t first The expression of (2) is:
wherein, extremum (·) is a first Extremum operator, peaks (·) is a peak finding operator;
the number of peaks num_peaks is expressed as:

num_peaks = peaks[|h(i)|]
said average additional delay τ_m is expressed as:

τ_m = Σ_τ τ·|h(τ)|² / Σ_τ |h(τ)|²

where τ denotes the time index of the signal;
the average root-mean-square delay τ_rms is expressed as:

τ_rms = √( Σ_τ (τ − τ_m)²·|h(τ)|² / Σ_τ |h(τ)|² );
the kurtosis coefficient κ is expressed as:

κ = E[(|h| − μ)⁴] / σ⁴

wherein E denotes the expectation, μ is the mean (first-order moment) of |h|, and σ is its standard deviation;
the signal energy ε_r is expressed as:

ε_r = Σ_i |h(i)|²;
the expression of the strongest path energy SPE is:

SPE = max_i |h(i)|².
Further, in the step S4, the expression of the Xgboost identifier is:

Obj(θ) = L(θ) + Ω(θ)

wherein L(θ) is the loss function, the regularization term is Ω(θ) = γ·J + (λ/2)·Σ_{j=1..J} b_j², γ and λ are regularization parameters, θ denotes the model parameters, J is the number of leaf nodes, j indexes the j-th node, and b_j is the predicted value of the j-th node.
Compared with the prior art, the application has the following beneficial effects. The application adopts a frequency-modulated continuous wave method to convert the up-sampled signal into an intermediate frequency signal spectrum, rather than feeding the raw signal or its spectrum directly into a machine learning model. The features extracted from the intermediate frequency spectrum are then input into an Xgboost model, and non-line-of-sight identification of the acoustic signal is finally realized through the tree model. Converting the channel time delay into a distance-information distribution at the intermediate frequency solves the prior-art problem of low cross-room recognition rates caused by signal delays that differ between environments. Because acoustic non-line-of-sight signals typically exhibit distinctive channel characteristics, channel features are selected as the model input and Xgboost is used to improve the fitting of these nonlinear features, exploiting the spatio-temporal correlation of non-line-of-sight signals while keeping recognition complexity low. With this method, very high accuracy can be obtained for cross-room acoustic non-line-of-sight identification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart illustrating a near-ultrasound based non-line-of-sight signal identification method, according to an exemplary embodiment;
FIG. 2 is a block diagram illustrating intermediate frequency signal feature extraction according to an exemplary embodiment;
FIG. 3 is an xgboost identification block diagram shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
As shown in fig. 1, the method for identifying non-line-of-sight signals based on near ultrasound provided by the application comprises the following steps:
s1, acquiring an original sound positioning signal sample set, namely audio positioning data.
S2, preprocessing the audio positioning data to obtain an intermediate frequency signal spectrum;
s3, carrying out characteristic acquisition on the intermediate frequency signal spectrum to obtain a characteristic set;
s4, inputting the feature set into a machine learning Xgboost model (namely an Xgboost recognizer) and outputting a non-line-of-sight recognition result.
As can be seen from the foregoing description, the present application extracts the up-sampled signal into an intermediate frequency signal spectrum by a frequency-modulated continuous wave method, rather than feeding the raw signal or its spectrum directly into a machine learning model. The features extracted from the intermediate frequency spectrum are then input into an Xgboost model, and non-line-of-sight identification of the acoustic signal is finally realized through the tree model. Converting the channel time delay into a distance-information distribution at the intermediate frequency solves the prior-art problem of low cross-room recognition rates caused by signal delays that differ between environments. Because acoustic non-line-of-sight signals typically exhibit distinctive channel characteristics, channel features are selected as the model input and Xgboost is used to improve the fitting of these nonlinear features, exploiting the spatio-temporal correlation of non-line-of-sight signals while keeping recognition complexity low. With this method, very high accuracy of cross-room acoustic non-line-of-sight identification is obtained.
In a specific embodiment of S1: and acquiring audio positioning data through an audio acquisition device.
Specifically, the audio collection device may be a microphone or a smartphone microphone; a phone microphone is preferred to reduce cost. The modulated signals collected by the phone microphone yield mixed audio data containing the direct line-of-sight signal, non-line-of-sight diffraction signals and reflected signals; this mixed audio data is the audio positioning data. The modulated signal is a continuous acoustic signal with a period of 0.5 seconds, using a commonly employed Chirp acoustic signal that sweeps 17 kHz-21 kHz over 40 ms; this guarantees the refresh rate while accommodating the universality and frequency-response range of smartphone microphones.
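As an illustration, the 17 kHz-21 kHz, 40 ms positioning chirp described above can be generated as follows (a minimal sketch; the 48 kHz sampling rate and the function name are assumptions consistent with the text, not part of the application):

```python
import numpy as np

FS = 48_000                    # assumed smartphone sampling rate (Hz)
F0, F1 = 17_000.0, 21_000.0    # sweep band from the description (Hz)
DUR = 0.040                    # 40 ms chirp

def make_chirp(fs=FS, f0=F0, f1=F1, dur=DUR):
    """Linear up-chirp s(t) = cos(2*pi*(f0*t + (B/(2T))*t^2))."""
    t = np.arange(int(fs * dur)) / fs
    sweep_rate = (f1 - f0) / dur           # B / T
    return np.cos(2 * np.pi * (f0 * t + 0.5 * sweep_rate * t ** 2))

sig = make_chirp()             # one 40 ms positioning chirp (1920 samples)
```

In a deployment, this chirp would be repeated every 0.5 s by the acoustic positioning base station.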
In the S2 embodiment, the preprocessing is performed on the audio positioning data to obtain an intermediate frequency signal spectrum, as shown in fig. 2, which may include the following sub-steps:
s21, performing cross-correlation on the audio positioning data and an original reference signal (namely an original signal sent by an acoustic positioning base station) to obtain theoretical arrival time.
An extraction window is then computed: it opens 0.015-0.02 seconds before the theoretical arrival time and closes 0.06 seconds after it, and the complete signal is extracted from this window;
and up-sampling the complete signal by a spline interpolation method, and increasing the signal sampling rate to be twice as high as the original signal sampling rate (48 kHz), namely 96kHz, so as to obtain an up-sampled signal.
Specifically, the upsampling method is spline interpolation, so that important information of the signal is not lost.
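The coarse arrival-time estimate and the 2x spline upsampling of S21 can be sketched as follows (illustrative only; the function names and the cubic-spline choice are assumptions, since the application only specifies "spline interpolation"):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import correlate

def coarse_toa(rx, ref, fs):
    """Cross-correlate the recording with the reference chirp; return the
    lag of the correlation peak in seconds (theoretical arrival time)."""
    xc = correlate(rx, ref, mode="full")
    lag = int(np.argmax(np.abs(xc))) - (len(ref) - 1)
    return lag / fs

def upsample2(x, fs):
    """Spline-interpolate x to twice the sampling rate (e.g. 48 kHz -> 96 kHz)."""
    t = np.arange(len(x)) / fs
    t2 = np.arange(2 * len(x)) / (2 * fs)
    # clip query times to the last knot to avoid extrapolation
    return CubicSpline(t, x)(np.minimum(t2, t[-1]))
```

Spline interpolation is chosen over simple zero-insertion so that the important fine structure of the signal is preserved, as the text requires.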
S22, multiplying the up-sampled signal by the original reference signal (itself up-sampled by spline interpolation) to obtain the intermediate frequency signal. The intermediate frequency signal expression is:

s_mix(t) = s(t)·r(t), with s(t) = cos(2π(f_min·t + (B/(2T))·t²)), r(t) = A·s(t − τ_d), τ_d = d/c,

whose low-frequency component is (A/2)·cos(2π((B·τ_d/T)·t + f_min·τ_d − (B·τ_d²)/(2T)));

wherein s_mix(t) is the intermediate frequency signal, s(t) is the original signal sent by the acoustic positioning base station, r is the signal received at the receiving end (with amplitude A), τ_d is the propagation arrival time of the acoustic signal, B is the bandwidth, d is the acoustic signal transmission distance, c is the speed of sound, T is the signal duration, and f_min is the lowest frequency value of the positioning acoustic signal.
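The dechirping step can be checked numerically: multiplying the received chirp by the reference concentrates the propagation delay τ_d into a beat frequency f_IF = B·τ_d/T, which maps back to distance d = c·f_IF·T/B. A minimal sketch (the 96 kHz up-sampled rate and 4 kHz bandwidth follow from the text; the 343 m/s sound speed and function names are assumptions):

```python
import numpy as np

FS2 = 96_000            # up-sampled rate (2 x 48 kHz)
F_MIN = 17_000.0        # lowest chirp frequency (Hz)
B, T = 4_000.0, 0.040   # bandwidth (Hz) and chirp duration (s)
C = 343.0               # assumed speed of sound (m/s)

def if_spectrum(rx, ref):
    """Dechirp: s_mix(t) = ref(t) * rx(t); return its magnitude spectrum."""
    s_mix = rx * ref
    spec = np.abs(np.fft.rfft(s_mix))
    freqs = np.fft.rfftfreq(len(s_mix), 1.0 / FS2)
    return freqs, spec

def if_freq_to_distance(f_if):
    """Beat frequency f_if = B * tau_d / T  =>  d = c * f_if * T / B."""
    return C * f_if * T / B

# Simulated check: an echo delayed by tau_d = 5 ms beats at B*tau_d/T = 500 Hz
t = np.arange(int(FS2 * T)) / FS2
tau_d = 0.005
ref = np.cos(2 * np.pi * (F_MIN * t + 0.5 * (B / T) * t ** 2))
rx = np.cos(2 * np.pi * (F_MIN * (t - tau_d) + 0.5 * (B / T) * (t - tau_d) ** 2))
freqs, spec = if_spectrum(rx, ref)
f_peak = freqs[np.argmax(spec)]        # beat frequency of the echo
```

This is why the method is robust across rooms: each propagation path appears as a spectral peak whose position encodes its travelled distance.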
S23, performing FFT operation (fast Fourier operation) on the intermediate frequency signal obtained in the step S22 to obtain an intermediate frequency spectrum.
Specifically, the operation window of the FFT should be a Nuttall window, whose expression is:

w(n) = 0.355768 − 0.487396·cos(2πn/(N−1)) + 0.144232·cos(4πn/(N−1)) − 0.012604·cos(6πn/(N−1)), 0 ≤ n ≤ N−1,

wherein w(n) is the Nuttall window, n is the signal sampling point index, and N is the total signal length.
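The named Nuttall window can be computed directly from its four cosine terms; a sketch (the coefficient values are the standard 4-term Nuttall coefficients, and the helper names are assumptions):

```python
import numpy as np

def nuttall(N):
    """Symmetric 4-term Nuttall window:
    w(n) = a0 - a1*cos(2*pi*n/(N-1)) + a2*cos(4*pi*n/(N-1)) - a3*cos(6*pi*n/(N-1))."""
    a0, a1, a2, a3 = 0.355768, 0.487396, 0.144232, 0.012604
    x = 2.0 * np.pi * np.arange(N) / (N - 1)
    return a0 - a1 * np.cos(x) + a2 * np.cos(2 * x) - a3 * np.cos(3 * x)

def windowed_if_spectrum(s_mix):
    """Apply the Nuttall window before the FFT of the intermediate frequency signal."""
    return np.abs(np.fft.rfft(s_mix * nuttall(len(s_mix))))
```

The low side-lobe level of the Nuttall window keeps a strong direct path from masking weak nearby NLOS peaks in the intermediate frequency spectrum.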
In a specific embodiment of S3, the feature set consists of: rise time, first-path arrival time, number of peaks, average additional delay, average root-mean-square delay, kurtosis coefficient, signal energy, and strongest-path energy.
Specifically, the rise time t_rise is constructed from the features as:

t_rise = t_H − t_L

wherein t_H = argmax_i |h(i)| is the time of the maximum of the spectrum h, t_L = min{ i : |h(i)| > λ·max_i|h(i)| } is the arrival time starting point, λ is the signal effective-segment extraction threshold (typically set to 0.1-0.2), i denotes the current signal index, and h denotes the resulting intermediate frequency spectrum.
The first-arrival-path arrival time t_first is:

t_first = Extremum(peaks[|h(i)|])

wherein Extremum(·) is the first-extremum operator and peaks(·) is the peak-finding operator.
The number of peaks num_peaks is:

num_peaks = peaks[|h(i)|]
Said average additional delay τ_m is:

τ_m = Σ_{τ=1..N} τ·|h(τ)|² / Σ_{τ=1..N} |h(τ)|²

where τ denotes the time index of the signal and N is the total number of signal points.
The average root-mean-square delay τ_rms is:

τ_rms = √( Σ_{τ=1..N} (τ − τ_m)²·|h(τ)|² / Σ_{τ=1..N} |h(τ)|² ).
The kurtosis coefficient κ is:

κ = E[(|h| − μ)⁴] / σ⁴

wherein E denotes the expectation, μ is the mean (first-order moment) of |h|, and σ is its standard deviation.
The strongest path energy SPE is:

SPE = max_i |h(i)|².
the signal energy epsilon r The method comprises the following steps:
In the specific embodiment of S4: the intermediate frequency signal spectrum is processed by the signal feature extraction method above, and the resulting feature set is input into a trained Xgboost identifier. As shown in fig. 3, the process is as follows: the input feature set is first preprocessed; the Xgboost identifier uses decision trees as the basic weak classifiers; the gradient and second derivative of the loss function are then computed to optimize the objective function; and finally the decision trees are updated, so that the non-line-of-sight identification result is output.
Specifically, the expression of the Xgboost identifier is:
Obj(θ)=L(θ)+Ω(θ)
where L(θ) is the loss function of the system, and the regularization term is Ω(θ) = γ·J + (λ/2)·Σ_{j=1..J} b_j², where γ and λ are regularization parameters, θ denotes the model parameters, J is the number of leaf nodes, j indexes the j-th node, and b_j is the predicted value of the j-th node.
Specifically, the parameters of the Xgboost identifier are as follows: total number of trees 2800-3000, learning rate 0.01-0.02, maximum depth 15, and minimum child weight 0.05-0.07. The purpose of tuning the Xgboost identifier parameters is to optimize the performance and generalization ability of the model.
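The objective Obj(θ) = L(θ) + Ω(θ) with Ω(θ) = γ·J + (λ/2)·Σ b_j² can be evaluated for a candidate tree as follows (a toy numeric sketch of the complexity penalty, not the full Xgboost training loop; function names are assumptions):

```python
import numpy as np

def regularization(leaf_values, gamma=1.0, lam=1.0):
    """Omega(theta) = gamma * J + (lam / 2) * sum_j b_j^2,
    where J is the number of leaf nodes and b_j the leaf predictions."""
    b = np.asarray(leaf_values, dtype=float)
    return gamma * b.size + 0.5 * lam * np.sum(b ** 2)

def objective(loss, leaf_values, gamma=1.0, lam=1.0):
    """Obj(theta) = L(theta) + Omega(theta): trees that need many leaves
    or large leaf values pay a complexity penalty on top of the loss."""
    return loss + regularization(leaf_values, gamma, lam)
```

Larger γ and λ push the booster toward smaller, smoother trees, which is how the identifier trades fitting power against generalization.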
The method is mainly aimed at smart mobile terminal users and targets non-line-of-sight discrimination of acoustic indoor positioning signals; in particular, it identifies cross-room acoustic non-line-of-sight signals with extremely high accuracy. Because acoustic non-line-of-sight signals tend to appear continuously, the method extracts channel features from the intermediate frequency signal and fits them with Xgboost, which better adapts to acoustic non-line-of-sight signals. With a suitable training framework, the training accuracy on data collected in a single room can be raised to 99.71%. If data collected in room 1 is used as the training set and data collected in room 2 as the test set, 86% accuracy can be achieved.

Claims (6)

1. The near-ultrasonic-based non-line-of-sight signal identification method is characterized by comprising the following steps of:
s1, acquiring audio positioning data;
s2, preprocessing the audio positioning data to obtain an intermediate frequency signal spectrum;
s3, carrying out characteristic acquisition on the intermediate frequency signal spectrum to obtain a characteristic set;
s4, inputting the feature set into an Xgboost identifier, and outputting a non-line-of-sight identification result.
2. The near-ultrasonic-based non-line-of-sight signal identification method according to claim 1, wherein the step S1 is specifically: acquiring modulation signals through an audio acquisition device to obtain mixed audio data of direct line-of-sight signals, non-line-of-sight diffraction signals and reflected signals, wherein the mixed audio data is audio positioning data; the audio acquisition device comprises a microphone and a smart phone microphone.
3. The near-ultrasonic based non-line-of-sight signal recognition method according to claim 1, wherein the step S2 comprises the sub-steps of:
S21, cross-correlating the audio positioning data with an original reference signal to obtain a theoretical arrival time; opening an extraction window slightly before the theoretical arrival time and closing it slightly after, so that the complete signal is extracted; upsampling the complete signal by spline interpolation, raising the sampling rate to twice the original rate, to obtain an upsampled signal;
S22, computing the product of the up-sampled signal and the original reference signal up-sampled by spline interpolation to obtain an intermediate frequency signal, wherein the intermediate frequency signal expression is:

s_mix(t) = s(t)·r(t), with s(t) = cos(2π(f_min·t + (B/(2T))·t²)), r(t) = A·s(t − τ_d), τ_d = d/c,

whose low-frequency component is (A/2)·cos(2π((B·τ_d/T)·t + f_min·τ_d − (B·τ_d²)/(2T)));

wherein s(t) is the original signal sent by the acoustic positioning base station, r is the signal received at the receiving end (with amplitude A), τ_d is the propagation arrival time of the acoustic signal, B is the bandwidth, d is the acoustic signal transmission distance, c is the speed of sound, T is the signal duration, and f_min is the lowest frequency value of the positioning acoustic signal;
s23, performing FFT operation on the intermediate frequency signal obtained in the step S22 to obtain an intermediate frequency spectrum;
the operation window used by the FFT is a Nuttall window, and the expression of the Nuttall window is:

w(n) = 0.355768 − 0.487396·cos(2πn/(N−1)) + 0.144232·cos(4πn/(N−1)) − 0.012604·cos(6πn/(N−1)), 0 ≤ n ≤ N−1,

where n is the signal sampling point index, and N is the total signal length.
4. The near-ultrasonic based non-line-of-sight signal recognition method according to claim 1, wherein in the step S3, the feature set includes the rise time, first-arrival-path arrival time, number of peaks, average additional delay, average root-mean-square delay, kurtosis coefficient, signal energy and strongest-path energy.
5. The near-ultrasonic based non-line-of-sight signal recognition method of claim 4, wherein the rise time t_rise is expressed as:

t_rise = t_H − t_L

wherein t_H = argmax_i |h(i)| is the time of the maximum of the spectrum h, t_L = min{ i : |h(i)| > λ·max_i|h(i)| } is the arrival time starting point, λ is the signal effective-segment extraction threshold, i denotes the current signal index, and h denotes the obtained intermediate frequency spectrum;
the first-arrival-path arrival time t_first is expressed as:

t_first = Extremum(peaks[|h(i)|])

wherein Extremum(·) is the first-extremum operator and peaks(·) is the peak-finding operator;
the number of peaks num_peaks is expressed as:

num_peaks = peaks[|h(i)|]
said average additional delay τ_m is expressed as:

τ_m = Σ_τ τ·|h(τ)|² / Σ_τ |h(τ)|²

where τ denotes the time index of the signal;
the average root-mean-square delay τ_rms is expressed as:

τ_rms = √( Σ_τ (τ − τ_m)²·|h(τ)|² / Σ_τ |h(τ)|² );
the kurtosis coefficient κ is expressed as:

κ = E[(|h| − μ)⁴] / σ⁴

wherein E denotes the expectation, μ is the mean (first-order moment) of |h|, and σ is its standard deviation;
the signal energy ε_r is expressed as:

ε_r = Σ_i |h(i)|²;
the expression of the strongest path energy SPE is:

SPE = max_i |h(i)|².
6. The near-ultrasonic-based non-line-of-sight signal recognition method according to claim 1, wherein in the step S4, the expression of the Xgboost recognizer is:

Obj(θ) = L(θ) + Ω(θ)

wherein L(θ) is the loss function, the regularization term is Ω(θ) = γ·J + (λ/2)·Σ_{j=1..J} b_j², γ and λ are regularization parameters, θ denotes the model parameters, J is the number of leaf nodes, j indexes the j-th node, and b_j is the predicted value of the j-th node.
Application CN202311101412.XA, filed 2023-08-29 (priority date 2023-08-29): Near-ultrasonic-based non-line-of-sight signal identification method. Status: Pending. Publication: CN117129998A.

Priority Applications (1)

Application Number: CN202311101412.XA; Publication: CN117129998A (en); Title: Near-ultrasonic-based non-line-of-sight signal identification method


Publications (1)

Publication Number Publication Date
Publication Number: CN117129998A; Publication Date: 2023-11-28

Family

ID=88859540




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination