CN103345921B - Based on the nighttime sleep acoustic signal analysis method of multiple features - Google Patents

Based on the nighttime sleep acoustic signal analysis method of multiple features

Info

Publication number
CN103345921B
CN103345921B (application CN201310295535.1A)
Authority
CN
China
Prior art keywords
nighttime sleep
frame
acoustical signal
sigma
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310295535.1A
Other languages
Chinese (zh)
Other versions
CN103345921A (en)
Inventor
Xu Zhiyong (许志勇)
Qian Kun (钱昆)
Wu Yaqi (吴亚琦)
Zhao Zhao (赵兆)
Han Dongxu (韩东旭)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201310295535.1A priority Critical patent/CN103345921B/en
Publication of CN103345921A publication Critical patent/CN103345921A/en
Application granted granted Critical
Publication of CN103345921B publication Critical patent/CN103345921B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a multi-feature-based method for analyzing nighttime sleep acoustic signals. The method first reads the nighttime sleep acoustic signal data of the person under test recorded by a microphone and performs endpoint detection on the signal using a short-time energy gate method; it then extracts multiple acoustic features from the signal with a series of signal-processing algorithms; finally, basic statistical methods are applied to every nighttime sleep acoustic signal feature of the person under test to build a database that supports subsequent analysis and research. The method is simple and easy to implement, has low hardware cost once developed into a product, and has broad application prospects.

Description

Based on the nighttime sleep acoustic signal analysis method of multiple features
Technical field
The invention belongs to the technical field of building nighttime sleep acoustic signal databases by means of acoustic signal processing, and in particular relates to a multi-feature-based nighttime sleep acoustic signal analysis method.
Background technology
Analysis of human nighttime sleep acoustic signals is very important for the health-monitoring components of smart-home systems. The nighttime sleep acoustic signals of interest generally include non-speech sounds such as snoring, coughing, and groaning. These signals are significant for the early detection of abnormal snoring, the assessment of health status, and the monitoring of sleep-talking and sleep-walking.
Extracting features from the nighttime sleep acoustic signals of a person under test by acoustic signal analysis is the basis for subsequent large-scale data analysis, mathematical modeling, and pattern recognition. Scholars in Taiwan have proposed a method for analyzing all-night sleep acoustic signal data (1. Liao Wenhung, Su Yisyuan. Classification of audio signals in all-night sleep studies [C]. Proc. 18th Int. Conf. on Pattern Recognition, Hong Kong, China, August 2006, 302-305), but it does not build a database of the subject's nighttime sleep acoustic signals; moreover, the selected feature set is limited and its physical meaning is unclear. It therefore cannot reflect the discriminative characteristics of sound sources with specific physical meanings and is unsuitable for subsequent analysis and modeling.
In summary, the prior art suffers from defects such as incomplete methods, limited functionality, and unsuitability for follow-up research.
Summary of the invention
The object of the present invention is to provide a multi-feature-based nighttime sleep acoustic signal analysis method.
The technical solution that realizes the object of the invention is a multi-feature-based nighttime sleep acoustic signal analysis method whose steps are as follows:
Step 1: Perform endpoint detection on the nighttime sleep acoustic signal. First read the nighttime sleep acoustic signal data recorded by the microphone, then segment the data into samples and perform endpoint detection on the signal to filter out the "silent segment" data. Endpoint detection specifically comprises the following steps:
Step 1-1: Divide the nighttime sleep acoustic signal data into short-time frames, with a frame length of 64 ms and a frame shift of 32 ms; one frame of data is regarded as one sample.
Step 1-2: Set the starting point to the currently read frame number, n = k, and reset the pointer j.
Step 1-3: Determine the short-time energy of the currently read frame by the formula E_k = Σ_{m=1}^{N} S_k(m)^2, where k is the number of the currently read frame, N is the number of digital signal samples contained in one sample, E_k is the short-time energy of the sample, and S_k(m) is the amplitude of the m-th digital signal sample of frame k.
Step 1-4: Judge whether E_k is greater than the threshold E_th; if so, perform step 1-5; otherwise increment the currently read frame number k by 1, increment the pointer j by 1, and perform step 1-3. The threshold E_th is twice the microphone noise floor E_b.
Step 1-5: Judge whether the pointer j is greater than the interval d; if so, perform step 1-6; otherwise perform step 1-2. The interval d is 3 to 4.
Step 1-6: Determine the "stop frame" and the "start frame", where the stop frame number is n_stop = n - 1 and the start frame number is n_initial = n - j - 1.
Step 2: Perform multi-feature extraction on the nighttime sleep acoustic signal that has passed endpoint detection, specifically:
Step 2-1: Multiply each frame of the endpoint-detected nighttime sleep acoustic signal segment by the Hamming window function
w(m) = 0.54 - 0.46·cos(2πm/M), 0 ≤ m ≤ M;
where M is the number of sampling points of one frame sample.
Step 2-2: Extract "frequency features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formulas:
Σ_{f_i=0}^{f_center} X_{f_i} = Σ_{f_i=f_center}^{f_c} X_{f_i};
X_{f_peak} = max{ X_{f_i}, f_i = 0, ..., f_c };
f_mean = ( Σ_{f_i=0}^{f_c} f_i·X_{f_i} ) / ( Σ_{f_i=0}^{f_c} X_{f_i} );
f_mean(j) = ( Σ_{f_i=1000(j-1)}^{1000j} f_i·X_{f_i} ) / ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i} ), j = 1, 2, ..., [f_s/1000];
In these formulas, X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the Fast Fourier Transform (FFT); X_{f_peak} is the absolute value of the power spectral density at the peak frequency f_peak; f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s; [·] is the floor function; f_center is the center frequency; f_peak is the peak frequency; f_mean is the spectral centroid; and f_mean(j) (j = 1, 2, ..., [f_s/1000]) is the spectral centroid of each 1000 Hz sub-band.
Step 2-3: Extract an "energy feature" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formula:
BER(j) = ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i}^2 ) / ( Σ_{f_i=0}^{f_c} X_{f_i}^2 ), j = 1, 2, ..., [f_s/1000];
where X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT, f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s, and BER(j) (j = 1, 2, ..., [f_s/1000]) is the band energy ratio of each 1000 Hz sub-band.
Step 2-4: Extract "empirical mode decomposition features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formulas:
E_i = ∫_{-∞}^{+∞} |c_i(t)|^2 dt, i = 1, 2, ..., l;
T = [E_1/E, E_2/E, ..., E_l/E], E = Σ_{i=1}^{l} E_i;
H_E = -Σ_{i=1}^{l} p_i·log_2 p_i, p_i = E_i/E;
where c_i(t) (i = 1, 2, ..., l) are the l intrinsic mode functions obtained from the nighttime sleep acoustic signal by empirical mode decomposition, E_i is the energy of the i-th intrinsic mode function, T is the intrinsic-mode-function energy-ratio feature vector, E is the total energy of all intrinsic mode functions, and H_E is the intrinsic-mode-function energy entropy.
Step 3: Build a database from the multiple features extracted in step 2 and compute statistics of the feature distributions. The statistics comprise the maximum, minimum, mean, variance, and dynamic range of each feature over the overall sample set.
Compared with the prior art, the present invention has the following remarkable advantages: 1) the method extracts a relatively comprehensive set of nighttime sleep acoustic signal features; 2) feature extraction on the subject's nighttime sleep acoustic signal data yields statistical properties that can reflect the dynamic changes of relevant physiological or behavioral states during the subject's all-night sleep; 3) the method is simple to implement, inexpensive once developed into a hardware product, and easy to popularize.
The present invention is described in further detail below in conjunction with the accompanying drawings.
Description of the drawings
Fig. 1 is the flow chart of the multi-feature-based nighttime sleep acoustic signal analysis method of the present invention.
Fig. 2 is the flow chart of endpoint detection on the nighttime sleep acoustic signal in the present invention.
Fig. 3 shows the histogram of one feature extracted from the nighttime sleep acoustic signal data in the example, together with the fitted kernel density curve.
Embodiment
With reference to Fig. 1, the multi-feature-based nighttime sleep acoustic signal analysis method of the present invention has the following steps:
Step 1: Perform endpoint detection on the nighttime sleep acoustic signal. First read the nighttime sleep acoustic signal data recorded by the microphone, then segment the data into samples and perform endpoint detection on the signal to filter out the "silent segment" data. Endpoint detection specifically comprises the following steps:
Step 1-1: Divide the nighttime sleep acoustic signal data into short-time frames, with a frame length of 64 ms and a frame shift of 32 ms; one frame of data is regarded as one sample.
Step 1-2: Set the starting point to the currently read frame number, n = k, and reset the pointer j.
Step 1-3: Determine the short-time energy of the currently read frame by the formula E_k = Σ_{m=1}^{N} S_k(m)^2, where k is the number of the currently read frame, N is the number of digital signal samples contained in one sample, E_k is the short-time energy of the sample, and S_k(m) is the amplitude of the m-th digital signal sample of frame k.
Step 1-4: Judge whether E_k is greater than the threshold E_th; if so, perform step 1-5; otherwise increment the currently read frame number k by 1, increment the pointer j by 1, and perform step 1-3. The threshold E_th is twice the microphone noise floor E_b.
Step 1-5: Judge whether the pointer j is greater than the interval d; if so, perform step 1-6; otherwise perform step 1-2. The interval d is 3 to 4.
Step 1-6: Determine the "stop frame" and the "start frame", where the stop frame number is n_stop = n - 1 and the start frame number is n_initial = n - j - 1.
Step 2: Perform multi-feature extraction on the nighttime sleep acoustic signal that has passed endpoint detection, specifically:
Step 2-1: Multiply each frame of the endpoint-detected nighttime sleep acoustic signal segment by the Hamming window function
w(m) = 0.54 - 0.46·cos(2πm/M), 0 ≤ m ≤ M;
where M is the number of sampling points of one frame sample.
Step 2-2: Extract "frequency features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formulas:
Σ_{f_i=0}^{f_center} X_{f_i} = Σ_{f_i=f_center}^{f_c} X_{f_i};
X_{f_peak} = max{ X_{f_i}, f_i = 0, ..., f_c };
f_mean = ( Σ_{f_i=0}^{f_c} f_i·X_{f_i} ) / ( Σ_{f_i=0}^{f_c} X_{f_i} );
f_mean(j) = ( Σ_{f_i=1000(j-1)}^{1000j} f_i·X_{f_i} ) / ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i} ), j = 1, 2, ..., [f_s/1000];
In these formulas, X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the Fast Fourier Transform (FFT); X_{f_peak} is the absolute value of the power spectral density at the peak frequency f_peak; f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s; [·] is the floor function; f_center is the center frequency; f_peak is the peak frequency; f_mean is the spectral centroid; and f_mean(j) (j = 1, 2, ..., [f_s/1000]) is the spectral centroid of each 1000 Hz sub-band.
Step 2-3: Extract an "energy feature" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formula:
BER(j) = ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i}^2 ) / ( Σ_{f_i=0}^{f_c} X_{f_i}^2 ), j = 1, 2, ..., [f_s/1000];
where X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT, f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s, and BER(j) (j = 1, 2, ..., [f_s/1000]) is the band energy ratio of each 1000 Hz sub-band.
Step 2-4: Extract "empirical mode decomposition features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formulas:
E_i = ∫_{-∞}^{+∞} |c_i(t)|^2 dt, i = 1, 2, ..., l;
T = [E_1/E, E_2/E, ..., E_l/E], E = Σ_{i=1}^{l} E_i;
H_E = -Σ_{i=1}^{l} p_i·log_2 p_i, p_i = E_i/E;
where c_i(t) (i = 1, 2, ..., l) are the l intrinsic mode functions obtained from the nighttime sleep acoustic signal by empirical mode decomposition, E_i is the energy of the i-th intrinsic mode function, T is the intrinsic-mode-function energy-ratio feature vector, E is the total energy of all intrinsic mode functions, and H_E is the intrinsic-mode-function energy entropy.
Step 3: Build a database from the multiple features extracted in step 2 and compute statistics of the feature distributions. The statistics comprise the maximum, minimum, mean, variance, and dynamic range of each feature over the overall sample set.
The present invention is described in further detail below in conjunction with an example.
With reference to Fig. 1, the multi-feature-based nighttime sleep acoustic signal analysis method of the present invention proceeds as follows:
First step: Apply short-time framing to the nighttime sleep acoustic signal data of the person under test recorded by the microphone, calculate the short-time energy of every frame (i.e., every sample), and then perform endpoint detection on the nighttime sleep acoustic signal as shown in Fig. 2.
(1) Read the nighttime sleep sound data recorded by the microphone and apply short-time framing with 50% overlap to the raw data; in the example the frame length is 64 ms and the frame shift is 32 ms, and one frame of nighttime sleep sound data is regarded as one sample.
(2) Set the starting point to the currently read frame number, n = k, and reset the pointer j.
(3) Calculate the short-time energy of the current frame sample by the formula E_k = Σ_{m=1}^{N} S_k(m)^2 (k denotes the k-th sample; N is the number of digital signal samples contained in each sample).
(4) Judge whether E_k is greater than the threshold E_th; if so, perform (5); otherwise increment the currently read frame number k by 1, increment the pointer j by 1, and perform (3). In the example E_th is 0.015.
(5) Judge whether the pointer j is greater than the interval d (d is 3 in the example); if so, perform (6); otherwise perform (2).
(6) The stop frame number is n_stop = n - 1 and the start frame number is n_initial = n - j - 1.
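The framing and energy-gate endpoint detection described above can be sketched in Python. This is a simplified reading, not the patented implementation itself: frames whose short-time energy exceeds the threshold are kept, and above-threshold runs no longer than d frames are discarded as part of the "silent segment" filtering. All function and variable names are illustrative.

```python
import numpy as np

def frame_signal(x, fs, frame_ms=64, shift_ms=32):
    """Split the signal into 64 ms frames with a 32 ms shift (50% overlap)."""
    n_len = int(fs * frame_ms / 1000)
    n_shift = int(fs * shift_ms / 1000)
    starts = range(0, len(x) - n_len + 1, n_shift)
    return np.stack([x[s:s + n_len] for s in starts])

def short_time_energy(frames):
    # E_k = sum over m of S_k(m)^2 for each frame k
    return (frames.astype(np.float64) ** 2).sum(axis=1)

def voiced_frame_runs(energies, e_th, d=3):
    """Simplified reading of steps (2)-(6): return (start, stop) frame-index
    pairs of runs whose short-time energy exceeds e_th, ignoring runs no
    longer than d frames."""
    runs, start = [], None
    for k, e in enumerate(energies):
        if e > e_th and start is None:
            start = k
        elif e <= e_th and start is not None:
            if k - start > d:
                runs.append((start, k - 1))
            start = None
    if start is not None and len(energies) - start > d:
        runs.append((start, len(energies) - 1))
    return runs
```

With fs = 16000 and a 64 ms frame, N = 1024 samples per frame, matching the M = 1024 window length used later in the example.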
Second step, multi-feature extraction: as shown in Fig. 1, perform multi-feature extraction on the nighttime sleep acoustic signal data that has passed endpoint detection.
(1) Apply windowing to the endpoint-detected nighttime sleep acoustic signal data (i.e., multiply each frame of the original signal segment by a window function); a Hamming window is generally chosen, with window function
w(m) = 0.54 - 0.46·cos(2πm/M), 0 ≤ m ≤ M
where M is the number of sampling points of one frame sample, 1024 in the example.
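The window function above can be written out directly. A minimal sketch; note that for M + 1 points it coincides with NumPy's built-in np.hamming(M + 1), which uses the same 0.54/0.46 coefficients.

```python
import numpy as np

def hamming_window(M):
    # w(m) = 0.54 - 0.46*cos(2*pi*m/M), m = 0..M  -> M + 1 points
    m = np.arange(M + 1)
    return 0.54 - 0.46 * np.cos(2 * np.pi * m / M)

def window_frame(frame):
    """Multiply one frame (length M + 1) by the Hamming window."""
    return frame * hamming_window(len(frame) - 1)
```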
(2) Extract the "frequency features":
Σ_{f_i=0}^{f_center} X_{f_i} = Σ_{f_i=f_center}^{f_c} X_{f_i};
X_{f_peak} = max{ X_{f_i}, f_i = 0, ..., f_c };
f_mean = ( Σ_{f_i=0}^{f_c} f_i·X_{f_i} ) / ( Σ_{f_i=0}^{f_c} X_{f_i} );
f_mean(j) = ( Σ_{f_i=1000(j-1)}^{1000j} f_i·X_{f_i} ) / ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i} ), j = 1, 2, ..., [f_s/1000];
In these formulas, X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the Fast Fourier Transform (FFT); X_{f_peak} is the absolute value of the power spectral density at the peak frequency f_peak; f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s; [·] is the floor function; f_center is the center frequency; f_peak is the peak frequency; f_mean is the spectral centroid; and f_mean(j) (j = 1, 2, ..., [f_s/1000]) is the spectral centroid of each 1000 Hz sub-band.
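These frequency features can be sketched as follows, under the assumption that X_{f_i} is approximated by the magnitude of the one-sided FFT spectrum (the patent does not pin down the exact PSD estimator here); function names are illustrative.

```python
import numpy as np

def frequency_features(frame, fs):
    """Per-frame 'frequency features': center frequency, peak frequency
    and peak magnitude, spectral centroid, and 1000 Hz sub-band centroids.
    X_fi is taken as the one-sided FFT magnitude; f_c = fs/2."""
    X = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    # f_center: frequency splitting the summed spectrum into equal halves
    cum = np.cumsum(X)
    f_center = freqs[np.searchsorted(cum, cum[-1] / 2.0)]
    # f_peak and X_fpeak: location and height of the spectral maximum
    X_fpeak = X.max()
    f_peak = freqs[np.argmax(X)]
    # f_mean: spectral centroid over 0..f_c
    f_mean = np.sum(freqs * X) / np.sum(X)
    # f_mean(j): centroid of each 1000 Hz sub-band, j = 1..floor(fs/1000)
    sub_centroids = []
    for j in range(1, int(fs // 1000) + 1):
        band = (freqs >= 1000 * (j - 1)) & (freqs <= 1000 * j)
        sub_centroids.append(np.sum(freqs[band] * X[band]) / np.sum(X[band]))
    return f_center, f_peak, X_fpeak, f_mean, sub_centroids
```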
(3) Extract the "energy feature":
BER(j) = ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i}^2 ) / ( Σ_{f_i=0}^{f_c} X_{f_i}^2 ), j = 1, 2, ..., [f_s/1000];
where X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT, f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s, and BER(j) (j = 1, 2, ..., [f_s/1000]) is the band energy ratio of each 1000 Hz sub-band.
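A sketch of the band energy ratio BER(j), under the same assumption as above that X_{f_i} is approximated by the one-sided FFT magnitude; the function name is illustrative.

```python
import numpy as np

def band_energy_ratios(frame, fs):
    """BER(j): energy of each 1000 Hz sub-band divided by the total
    energy up to the cutoff f_c = fs/2."""
    X = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    total = np.sum(X ** 2)
    ratios = []
    for j in range(1, int(fs // 1000) + 1):
        band = (freqs >= 1000 * (j - 1)) & (freqs <= 1000 * j)
        ratios.append(np.sum(X[band] ** 2) / total)
    return ratios
```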
(4) Extract the "empirical mode decomposition features":
E_i = ∫_{-∞}^{+∞} |c_i(t)|^2 dt, i = 1, 2, ..., l;
T = [E_1/E, E_2/E, ..., E_l/E], E = Σ_{i=1}^{l} E_i;
H_E = -Σ_{i=1}^{l} p_i·log_2 p_i, p_i = E_i/E;
In these formulas, c_i(t) (i = 1, 2, ..., l) are the l intrinsic mode functions obtained from the nighttime sleep acoustic signal by empirical mode decomposition (l is taken as 10 in the example); they can be obtained by the empirical mode decomposition method (2. Huang N.E., Shen Zheng, Long R.S., et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis [J]. Proc. R. Soc. Lond. A, 1998, 454(1971): 903-995.). E_i is the energy of the i-th intrinsic mode function, T is the intrinsic-mode-function energy-ratio feature vector, E is the total energy of all intrinsic mode functions, and H_E is the intrinsic-mode-function energy entropy.
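Given the intrinsic mode functions c_i(t) produced by an EMD implementation (the decomposition itself is not reproduced here; a separate EMD library would supply the IMFs), the energy-ratio vector T and the energy entropy H_E defined above can be computed as follows. A sketch with illustrative names; the discrete sum stands in for the integral.

```python
import numpy as np

def emd_energy_features(imfs, dt=1.0):
    """Compute E_i, T = [E_1/E, ..., E_l/E], and H_E = -sum p_i*log2(p_i)
    from an (l, n_samples) array of intrinsic mode functions."""
    imfs = np.asarray(imfs, dtype=np.float64)
    E_i = np.sum(imfs ** 2, axis=1) * dt      # discrete form of integral |c_i(t)|^2 dt
    E = E_i.sum()
    T = E_i / E                               # energy-ratio feature vector
    p = T[T > 0]                              # skip zero-energy IMFs in the entropy
    H_E = -np.sum(p * np.log2(p))
    return E_i, T, H_E
```

Two IMFs of equal energy give T = [0.5, 0.5] and the maximum entropy of 1 bit, which is a quick sanity check on the definitions.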
Third step, building the database from the extracted features: obtain the maximum, minimum, mean, variance, and dynamic range (the difference between maximum and minimum) of each feature over the overall sample set. In the example, the statistical histogram of one feature is drawn and the kernel density curve of its distribution is fitted; the kernel density estimate is
f̂_h(x) = (1/Ns) Σ_{i=1}^{Ns} K_h(x - x_i);
where K(·) is the kernel function (a normal distribution function is selected in the example), h is the kernel bandwidth, taken as 1.06·σ·Ns^{-0.2} in the example (σ is the positive square root of the variance of the feature), x_i (i = 1, 2, 3, ..., Ns) are all the values of the fitted feature of the nighttime sleep acoustic signal, and Ns is the number of nighttime sleep acoustic signal samples. Fig. 3 shows the histogram of one feature of the nighttime sleep acoustic signal data and the fitted kernel density curve.
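The third-step statistics and the Gaussian kernel density estimate with bandwidth h = 1.06·σ·Ns^(-0.2) can be sketched as follows (illustrative names; σ is taken as the sample standard deviation, the positive square root of the variance as stated above).

```python
import numpy as np

def feature_statistics(values):
    """Step-3 statistics for one feature: maximum, minimum, mean,
    variance, and dynamic range (max - min)."""
    v = np.asarray(values, dtype=np.float64)
    return {"max": v.max(), "min": v.min(), "mean": v.mean(),
            "var": v.var(), "range": v.max() - v.min()}

def kde(values, x):
    """Gaussian-kernel density estimate with bandwidth
    h = 1.06 * sigma * Ns**(-0.2), evaluated at the points x."""
    v = np.asarray(values, dtype=np.float64)
    ns = len(v)
    h = 1.06 * v.std() * ns ** (-0.2)
    u = (np.asarray(x)[:, None] - v[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # standard normal kernel
    return k.sum(axis=1) / (ns * h)
```

Evaluating the estimate on a grid and overlaying it on a histogram of the feature values reproduces the kind of plot described for Fig. 3.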
As can be seen from the above, the method of the present invention can extract a relatively comprehensive set of acoustic features from nighttime sleep acoustic signals and analyze their distributions with statistics-based analysis methods. The multi-feature database of the subject's nighttime sleep acoustic signals established by the method of the present invention provides a basis for subsequent analysis and research.

Claims (3)

1. A multi-feature-based nighttime sleep acoustic signal analysis method, characterized in that the steps are as follows:
Step 1: Perform endpoint detection on the nighttime sleep acoustic signal. First read the nighttime sleep acoustic signal data recorded by the microphone, then segment the data into samples and perform endpoint detection on the signal to filter out the "silent segment" data. Endpoint detection specifically comprises the following steps:
Step 1-1: Divide the nighttime sleep acoustic signal data into short-time frames, with a frame length of 64 ms and a frame shift of 32 ms; one frame of data is regarded as one sample.
Step 1-2: Set the starting point to the currently read frame number, n = k, and reset the pointer j.
Step 1-3: Determine the short-time energy of the currently read frame by the formula E_k = Σ_{m=1}^{N} S_k(m)^2, where k is the number of the currently read frame, N is the number of digital signal samples contained in one sample, E_k is the short-time energy of the sample, and S_k(m) is the amplitude of the m-th digital signal sample of frame k.
Step 1-4: Judge whether E_k is greater than the threshold E_th; if so, perform step 1-5; otherwise increment the currently read frame number k by 1, increment the pointer j by 1, and perform step 1-3.
Step 1-5: Judge whether the pointer j is greater than the interval d; if so, perform step 1-6; otherwise perform step 1-2.
Step 1-6: Determine the "stop frame" and the "start frame", where the stop frame number is n_stop = n - 1 and the start frame number is n_initial = n - j - 1.
Step 2: Perform multi-feature extraction on the nighttime sleep acoustic signal that has passed endpoint detection, specifically:
Step 2-1: Multiply each frame of the endpoint-detected nighttime sleep acoustic signal segment by the Hamming window function
w(m) = 0.54 - 0.46·cos(2πm/M), 0 ≤ m ≤ M;
where M is the number of sampling points of one frame sample.
Step 2-2: Extract "frequency features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formulas:
Σ_{f_i=0}^{f_center} X_{f_i} = Σ_{f_i=f_center}^{f_c} X_{f_i};
X_{f_peak} = max{ X_{f_i}, f_i = 0, ..., f_c };
f_mean = ( Σ_{f_i=0}^{f_c} f_i·X_{f_i} ) / ( Σ_{f_i=0}^{f_c} X_{f_i} );
f_mean(j) = ( Σ_{f_i=1000(j-1)}^{1000j} f_i·X_{f_i} ) / ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i} ), j = 1, 2, ..., [f_s/1000];
In these formulas, X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the Fast Fourier Transform (FFT); X_{f_peak} is the absolute value of the power spectral density at the peak frequency f_peak; f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s; [·] is the floor function; f_center is the center frequency; f_peak is the peak frequency; f_mean is the spectral centroid; and f_mean(j) (j = 1, 2, ..., [f_s/1000]) is the spectral centroid of each 1000 Hz sub-band.
Step 2-3: Extract an "energy feature" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formula:
BER(j) = ( Σ_{f_i=1000(j-1)}^{1000j} X_{f_i}^2 ) / ( Σ_{f_i=0}^{f_c} X_{f_i}^2 ), j = 1, 2, ..., [f_s/1000];
where X_{f_i} is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT, f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s, and BER(j) (j = 1, 2, ..., [f_s/1000]) is the band energy ratio of each 1000 Hz sub-band.
Step 2-4: Extract "empirical mode decomposition features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, using the formulas:
E_i = ∫_{-∞}^{+∞} |c_i(t)|^2 dt, i = 1, 2, ..., l;
T = [E_1/E, E_2/E, ..., E_l/E], E = Σ_{i=1}^{l} E_i;
H_E = -Σ_{i=1}^{l} p_i·log_2 p_i, p_i = E_i/E;
where c_i(t) (i = 1, 2, ..., l) are the l intrinsic mode functions obtained from the nighttime sleep acoustic signal by empirical mode decomposition, E_i is the energy of the i-th intrinsic mode function, T is the intrinsic-mode-function energy-ratio feature vector, E is the total energy of all intrinsic mode functions, and H_E is the intrinsic-mode-function energy entropy;
Step 3: Build a database from the multiple features extracted in step 2 and compute statistics of the feature distributions.
2. The multi-feature-based nighttime sleep acoustic signal analysis method according to claim 1, characterized in that the statistics of the feature distributions in step 3 comprise the maximum, minimum, mean, variance, and dynamic range of each feature over the overall sample set.
3. The multi-feature-based nighttime sleep acoustic signal analysis method according to claim 1, characterized in that the threshold E_th in step 1-4 is twice the microphone noise floor E_b, and the interval d in step 1-5 is 3 to 4.
CN201310295535.1A 2013-07-15 2013-07-15 Based on the nighttime sleep acoustic signal analysis method of multiple features Expired - Fee Related CN103345921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310295535.1A CN103345921B (en) 2013-07-15 2013-07-15 Based on the nighttime sleep acoustic signal analysis method of multiple features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310295535.1A CN103345921B (en) 2013-07-15 2013-07-15 Based on the nighttime sleep acoustic signal analysis method of multiple features

Publications (2)

Publication Number Publication Date
CN103345921A CN103345921A (en) 2013-10-09
CN103345921B true CN103345921B (en) 2015-08-26

Family

ID=49280712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310295535.1A Expired - Fee Related CN103345921B (en) 2013-07-15 2013-07-15 Based on the nighttime sleep acoustic signal analysis method of multiple features

Country Status (1)

Country Link
CN (1) CN103345921B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109427345B (en) * 2017-08-29 2022-12-02 杭州海康威视数字技术股份有限公司 Wind noise detection method, device and system
CN109645957B (en) * 2018-12-21 2021-06-08 南京理工大学 Snore source classification method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102429662A (en) * 2011-11-10 2012-05-02 大连理工大学 Screening system for sleep apnea syndrome in family environment
CN102499637A (en) * 2011-09-26 2012-06-20 大连理工大学 Obstructive sleep apnea-hypopnea syndrome screening method and device thereof
CN102579010A (en) * 2012-03-01 2012-07-18 上海大学 Method for diagnosing obstructive sleep apnea hypopnea syndrome according to snore

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2010066008A1 (en) * 2008-12-10 2010-06-17 The University Of Queensland Multi-parametric analysis of snore sounds for the community screening of sleep apnea with non-gaussianity index

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN102499637A (en) * 2011-09-26 2012-06-20 大连理工大学 Obstructive sleep apnea-hypopnea syndrome screening method and device thereof
CN102429662A (en) * 2011-11-10 2012-05-02 大连理工大学 Screening system for sleep apnea syndrome in family environment
CN102579010A (en) * 2012-03-01 2012-07-18 上海大学 Method for diagnosing obstructive sleep apnea hypopnea syndrome according to snore

Also Published As

Publication number Publication date
CN103345921A (en) 2013-10-09

Similar Documents

Publication Publication Date Title
CN109065030B (en) Convolutional neural network-based environmental sound identification method and system
CN102721545B (en) Rolling bearing failure diagnostic method based on multi-characteristic parameter
CN108896878B (en) Partial discharge detection method based on ultrasonic waves
CN109767785A (en) Ambient noise method for identifying and classifying based on convolutional neural networks
CN107393555B (en) Detection system and detection method for abnormal sound signal with low signal-to-noise ratio
CN102799892B (en) Mel frequency cepstrum coefficient (MFCC) underwater target feature extraction and recognition method
CN103325381B (en) A kind of speech separating method based on fuzzy membership functions
CN102163427A (en) Method for detecting audio exceptional event based on environmental model
CN107274911A (en) A kind of similarity analysis method based on sound characteristic
CN108169639A (en) Method based on the parallel long identification switch cabinet failure of Memory Neural Networks in short-term
CN104409073A (en) Substation equipment sound and voice identification method
CN108196164B (en) Method for extracting cable fault point discharge sound signal under strong background noise
CN105424366A (en) Bearing fault diagnosis method based on EEMD adaptive denoising
CN104887263A (en) Identity recognition algorithm based on heart sound multi-dimension feature extraction and system thereof
CN103487513A (en) Method for identifying types of acoustic emission signals of space debris impact damage
CN104122486B (en) Method and device for detecting early failure of cable
CN104391336A (en) Time-frequency spectrum analyzing method for processing earthly natural pulse electromagnetic field data
CN105137297A (en) Method and device for separating multi-source partial discharge signals of power transmission device
CN106548786A (en) A kind of detection method and system of voice data
CN103345921B (en) Based on the nighttime sleep acoustic signal analysis method of multiple features
CN109377982B (en) Effective voice obtaining method
CN103559893B (en) One is target gammachirp cepstrum coefficient aural signature extracting method under water
CN111862978A (en) Voice awakening method and system based on improved MFCC (Mel frequency cepstrum coefficient)
CN101849823A (en) Neuronal action potential feature extraction method based on permutation entropy
CN104102834A (en) Method for identifying sound recording locations

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xu Zhiyong

Inventor after: Qian Kun

Inventor after: Wu Yaqi

Inventor after: Zhao Zhao

Inventor after: Han Dongxu

Inventor after: Wang Bei

Inventor before: Xu Zhiyong

Inventor before: Qian Kun

Inventor before: Wu Yaqi

Inventor before: Zhao Zhao

Inventor before: Han Dongxu

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150826

Termination date: 20200715

CF01 Termination of patent right due to non-payment of annual fee