Summary of the invention
The object of the present invention is to provide a multi-feature-based nighttime sleep acoustic signal analysis method.
The technical solution that achieves the object of the invention is a multi-feature-based nighttime sleep acoustic signal analysis method comprising the following steps:
Step 1: perform endpoint detection on the nighttime sleep acoustic signal. First read the nighttime sleep acoustic signal data recorded by the microphone, then segment the data into samples and perform endpoint detection, filtering out the "silent segment" data. Endpoint detection on the nighttime sleep acoustic signal specifically comprises the following steps:
Step 1-1: divide the nighttime sleep acoustic signal data into short-time frames, with a frame length of 64 ms and a frame shift of 32 ms; one frame of data is treated as one sample.
Step 1-2: set the starting point to the sequence number n = k of the currently read frame, and reset the pointer j to zero.
Step 1-3: determine the short-time energy of the currently read frame using the formula
E_k = Σ_{i=1}^{N} S_k(i)^2
where k is the sequence number of the currently read frame, N is the number of digital signal samples contained in one sample, E_k is the short-time energy of the sample, and S_k(i) is the amplitude of the i-th digital signal sample.
Step 1-4: judge whether E_k is greater than the threshold E_th; if so, perform step 1-5, otherwise increment the currently read frame sequence number k by 1, increment the pointer j by 1, and perform step 1-3. The threshold E_th is twice the microphone noise floor E_b.
Step 1-5: judge whether the pointer j is greater than the interval d; if so, perform step 1-6, otherwise perform step 1-2. The interval d is 3 to 4.
Step 1-6: determine the "stop frame" and the "start frame", where the stop-frame sequence number is n_stop = n - 1 and the start-frame sequence number is n_start = n - j - 1.
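The endpoint-detection loop of steps 1-1 to 1-6 can be sketched as follows. This is a minimal Python illustration, not the patent's exact bookkeeping (the patent advances j on the sub-threshold branch, while this sketch counts consecutive supra-threshold frames); all function and parameter names are ours:

```python
import numpy as np

def detect_endpoints(signal, fs=16000, frame_ms=64, shift_ms=32,
                     e_th=0.015, d=3):
    """Locate the start/stop frame indices of the first active segment.

    Frames are frame_ms long with a shift_ms shift (50% overlap at the
    values above). A frame is "active" when its short-time energy
    E_k = sum_i S_k(i)^2 exceeds the threshold e_th; a segment is
    accepted once a run of more than d active frames ends.
    """
    n_len = int(fs * frame_ms / 1000)    # samples per frame
    n_shift = int(fs * shift_ms / 1000)  # samples per frame shift
    j = 0        # length of the current run of active frames
    start = None
    k = 0
    while k * n_shift + n_len <= len(signal):
        frame = signal[k * n_shift : k * n_shift + n_len]
        energy = float(np.sum(np.asarray(frame, dtype=float) ** 2))
        if energy > e_th:
            if j == 0:
                start = k    # candidate start frame
            j += 1
        else:
            if j > d:        # run long enough: stop frame is k - 1
                return start, k - 1
            j = 0
            start = None
        k += 1
    return (start, k - 1) if j > d else (None, None)
```

Applied to a silence-activity-silence recording, this returns the indices of the first and last frames of the active stretch.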
Step 2: perform multi-feature extraction on the nighttime sleep acoustic signal that has passed endpoint detection, specifically:
Step 2-1: multiply each frame of the endpoint-detected nighttime sleep acoustic signal segment by the Hamming window function
w(n) = 0.54 - 0.46 cos(2πn/(M - 1)), 0 ≤ n ≤ M - 1
where M is the number of sampling points in one frame sample.
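The Hamming window of step 2-1 has the standard form w(n) = 0.54 - 0.46·cos(2πn/(M - 1)), which NumPy also implements; a quick sketch with M = 1024 (the 64 ms frame at an assumed 16 kHz sampling rate, names ours):

```python
import numpy as np

M = 1024  # sampling points per frame (64 ms at 16 kHz, assumed)
n = np.arange(M)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (M - 1))  # Hamming window

# Windowing a frame is an element-wise multiplication:
frame = np.ones(M)
windowed = frame * w

# The explicit formula agrees with NumPy's built-in window:
assert np.allclose(w, np.hamming(M))
```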
Step 2-2: extract the "frequency features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, namely the center frequency f_center, the peak frequency f_peak, the spectral centroid f_mean, and the spectral centroid f_mean(j) (j = 1, 2, ..., [f_s/1000]) of each 1000 Hz sub-band. In the formulas used, X(f_i) is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the fast Fourier transform (FFT); X(f_peak) is the absolute value of the power spectral density at the peak frequency f_peak after the FFT; f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s; and [·] is the rounding function.
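Two of these frequency features can be computed as below, assuming the usual textbook definitions (peak frequency = frequency of the largest spectral magnitude, spectral centroid = magnitude-weighted mean frequency); the function name and signature are illustrative:

```python
import numpy as np

def frequency_features(frame, fs):
    """Peak frequency and spectral centroid of a windowed frame.

    X(f_i) is taken as the magnitude spectrum up to the cutoff
    f_c = fs/2; f_peak is the frequency of the largest X(f_i), and
    f_mean is the X-weighted mean frequency (spectral centroid).
    """
    spec = np.abs(np.fft.rfft(frame))                   # X(f_i)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)     # f_i in [0, fs/2]
    f_peak = freqs[np.argmax(spec)]
    f_mean = np.sum(freqs * spec) / np.sum(spec)
    return f_peak, f_mean
```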
Step 2-3: extract the "energy features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1, namely the energy ratio BER(j) (j = 1, 2, ..., [f_s/1000]) of each 1000 Hz sub-band. In the formulas used, X(f_i) is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT, and f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s.
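A sketch of the band energy ratio, under the assumed definition that BER(j) is the spectral energy of the j-th 1000 Hz sub-band divided by the total spectral energy; the patent indexes j up to [f_s/1000], while this sketch keeps the sub-bands below the cutoff f_c = f_s/2 (a choice of ours), and the names are illustrative:

```python
import numpy as np

def band_energy_ratios(frame, fs):
    """Energy ratio BER(j) of each 1000 Hz sub-band of a frame.

    BER(j) = spectral energy in [(j-1)*1000, j*1000) Hz divided by
    the total spectral energy up to the cutoff f_c = fs/2.
    """
    spec = np.abs(np.fft.rfft(frame)) ** 2              # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    total = float(np.sum(spec))
    n_bands = int(fs // 2 // 1000)                      # bands below fs/2
    ber = []
    for j in range(1, n_bands + 1):
        mask = (freqs >= (j - 1) * 1000) & (freqs < j * 1000)
        ber.append(float(np.sum(spec[mask])) / total)
    return ber
```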
Step 2-4: extract the "empirical mode decomposition features" from each frame of the nighttime sleep acoustic signal segment processed in step 2-1. In the formulas used, c_i(t) (i = 1, 2, ..., l) are the l intrinsic mode functions obtained from the nighttime sleep acoustic signal by empirical mode decomposition; E_i is the energy of the i-th intrinsic mode function; E is the total energy of all intrinsic mode functions; T = [E_1/E, E_2/E, ..., E_l/E] is the intrinsic mode function energy-ratio feature vector; and H_e = -Σ_{i=1}^{l} (E_i/E) log(E_i/E) is the intrinsic mode function energy entropy.
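Given the intrinsic mode functions c_i(t), e.g. from an EMD routine (the decomposition itself is not shown here), the step 2-4 quantities follow directly; a sketch with illustrative names:

```python
import numpy as np

def imf_energy_features(imfs):
    """Energy-ratio vector T and energy entropy H_e of a set of IMFs.

    imfs is an (l, n) array of intrinsic mode functions c_i(t), as
    produced by an EMD implementation.
    E_i = sum_t c_i(t)^2, E = sum_i E_i, T = [E_1/E, ..., E_l/E],
    H_e = -sum_i (E_i/E) * log(E_i/E).
    """
    energies = np.sum(np.asarray(imfs, dtype=float) ** 2, axis=1)  # E_i
    total = energies.sum()                                         # E
    ratios = energies / total                                      # T
    h_e = float(-np.sum(ratios * np.log(ratios)))                  # H_e
    return ratios, h_e
```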
Step 3: build a database from the multiple features extracted in step 2, and compile statistics on the distribution of the features. Compiling statistics on the distribution of the features comprises obtaining the maximum, minimum, mean, variance, and dynamic range of each feature over all samples.
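The per-feature statistics of step 3 can be sketched as follows (function and key names are ours):

```python
import numpy as np

def feature_statistics(feature_matrix):
    """Per-feature statistics over all samples.

    feature_matrix is (n_samples, n_features); returns the maximum,
    minimum, mean, variance, and dynamic range (maximum - minimum)
    of each feature column, as described in step 3.
    """
    x = np.asarray(feature_matrix, dtype=float)
    return {
        "max": x.max(axis=0),
        "min": x.min(axis=0),
        "mean": x.mean(axis=0),
        "var": x.var(axis=0),
        "range": x.max(axis=0) - x.min(axis=0),
    }
```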
Compared with the prior art, the remarkable advantages of the present invention are: 1) the method of the present invention extracts a relatively comprehensive set of nighttime sleep acoustic signal features; 2) the present invention performs feature extraction on the nighttime sleep acoustic signal data of the subject under test, and the resulting statistical properties can reflect the dynamic course of the relevant physiological or behavioral states during the subject's nighttime sleep; 3) the method of the present invention is simple to implement, low in cost once developed into a hardware product, and easy to popularize.
The present invention is described in further detail below in conjunction with the accompanying drawings.
Embodiment
With reference to Fig. 1, the multi-feature-based nighttime sleep acoustic signal analysis method of the present invention proceeds according to steps 1 to 3 set out above.
The present invention is further described in detail below in conjunction with an example.
With reference to Fig. 1, the multi-feature-based nighttime sleep acoustic signal analysis method of the present invention proceeds as follows:
The first step: apply short-time framing to the subject's nighttime sleep acoustic signal data recorded by the microphone, calculate the short-time energy of each frame (i.e. each sample), and then perform endpoint detection on the nighttime sleep acoustic signal as shown in Fig. 2.
(1) Read the nighttime sleep sound data recorded by the microphone and apply short-time framing, dividing the raw data into frames with 50% overlap; in the example the frame length is 64 ms and the frame shift is 32 ms, and one frame of nighttime sleep sound data is treated as one sample.
(2) Set the starting point to the sequence number n = k of the currently read frame, and reset the pointer j to zero.
(3) Calculate the short-time energy of the current frame sample according to the formula
E_k = Σ_{i=1}^{N} S_k(i)^2
where k denotes the k-th sample and N is the number of digital signal samples contained in each sample.
(4) Judge whether E_k is greater than the threshold E_th; if so, perform (5), otherwise increment the currently read frame sequence number k by 1, increment the pointer j by 1, and perform (3). In the example E_th is 0.015.
(5) Judge whether the pointer j is greater than the interval d (d is taken as 3 in the example); if so, perform (6), otherwise perform (2).
(6) The stop-frame sequence number is n_stop = n - 1 and the start-frame sequence number is n_start = n - j - 1.
The second step, multi-feature extraction: as shown in Fig. 1, perform multi-feature extraction on the nighttime sleep acoustic signal data that have passed endpoint detection.
(1) Apply windowing to the endpoint-detected nighttime sleep acoustic signal data (i.e. multiply each frame of the original nighttime sleep acoustic signal segment by a window function). A Hamming window is generally chosen, with the window function
w(n) = 0.54 - 0.46 cos(2πn/(M - 1)), 0 ≤ n ≤ M - 1
where M is the number of sampling points in one frame sample, 1024 in the example.
(2) Extract the "frequency features": the center frequency f_center, the peak frequency f_peak, the spectral centroid f_mean, and the spectral centroid f_mean(j) (j = 1, 2, ..., [f_s/1000]) of each 1000 Hz sub-band. In the formulas used, X(f_i) is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT; X(f_peak) is the absolute value of the power spectral density at the peak frequency f_peak after the FFT; f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s; and [·] is the rounding function.
(3) Extract the "energy features": the energy ratio BER(j) (j = 1, 2, ..., [f_s/1000]) of each 1000 Hz sub-band. In the formulas used, X(f_i) is the absolute value of the power spectral density of the nighttime sleep acoustic signal at frequency f_i after the FFT, and f_c is the cutoff frequency, equal to 1/2 of the sampling frequency f_s.
(4) Extract the "empirical mode decomposition features". In the formulas used, c_i(t) (i = 1, 2, ..., l) are the l intrinsic mode functions obtained from the nighttime sleep acoustic signal by the empirical mode decomposition method (Huang N.E., Shen Z., Long S.R., et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis [J]. Proc. R. Soc. Lond. A, 1998, 454(1971): 903-995); l is taken as 10 in the example. E_i is the energy of the i-th intrinsic mode function; E is the total energy of all intrinsic mode functions; T = [E_1/E, E_2/E, ..., E_l/E] is the intrinsic mode function energy-ratio feature vector; and H_e = -Σ_{i=1}^{l} (E_i/E) log(E_i/E) is the intrinsic mode function energy entropy.
The third step: build a database from the extracted multiple features. Obtain the maximum, minimum, mean, variance, and dynamic range (the difference between the maximum and the minimum) of each feature over all samples. In the example, the statistical histogram of a given feature is drawn and the kernel density curve of the feature distribution is fitted, with the kernel density estimate
f(x) = (1/(Ns·h)) Σ_{i=1}^{Ns} K((x - x_i)/h)
where K(·) is the kernel function, chosen as the normal distribution function in the example; h is the kernel bandwidth, taken as 1.06σNs^(-0.2) in the example (σ is the positive square root of the variance of the feature); x_i (i = 1, 2, 3, ..., Ns) are all the values of the given feature of the nighttime sleep acoustic signal being fitted; and Ns is the number of nighttime sleep acoustic signal samples. Fig. 3 shows the histogram of a feature of the nighttime sleep acoustic signal data and the fitted kernel density curve.
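The kernel density estimate of the third step, with a Gaussian kernel and the stated bandwidth h = 1.06σNs^(-0.2), can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def kernel_density(x_grid, samples):
    """Gaussian-kernel density estimate with bandwidth 1.06*sigma*Ns^(-0.2).

    f(x) = (1/(Ns*h)) * sum_i K((x - x_i)/h), where K is the standard
    normal density, sigma the sample standard deviation, and Ns the
    number of feature values x_i being fitted.
    """
    x_grid = np.asarray(x_grid, dtype=float)
    samples = np.asarray(samples, dtype=float)
    ns = samples.size
    h = 1.06 * samples.std() * ns ** (-0.2)          # kernel bandwidth
    u = (x_grid[:, None] - samples[None, :]) / h     # (x - x_i) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi) # Gaussian kernel
    return k.sum(axis=1) / (ns * h)
```

Evaluated on a fine grid, the resulting curve integrates to 1, as a density should.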
As can be seen from the above, the method of the present invention can extract a relatively comprehensive set of acoustic features from the nighttime sleep acoustic signal and analyze the feature distribution of the nighttime sleep acoustic signal by statistics-based analysis methods. The multi-feature database of the subject's nighttime sleep acoustic signal established by the method of the present invention provides a basis for subsequent analysis and research.