CN108388912A - Sleep staging method based on multisensor feature optimization algorithm - Google Patents

Sleep staging method based on multisensor feature optimization algorithm

Info

Publication number: CN108388912A
Application number: CN201810125662.XA
Authority: CN (China)
Prior art keywords: radar, signal, feature, audio, per minute
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN108388912B (en)
Inventors: 洪弘, 张诚, 孙理, 蒋洁, 顾陈, 李彧晟, 朱晓华
Current Assignee: Nanjing University of Science and Technology
Original Assignee: Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology
Priority to CN201810125662.XA
Publication of CN108388912A
Application granted
Publication of CN108388912B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48: Other medical applications
    • A61B 5/4806: Sleep evaluation
    • A61B 5/4812: Detecting sleep stages or cycles
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/24765: Rule-based classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/254: Fusion techniques of classification results, e.g. of results related to same input data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/285: Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system

Abstract

The invention discloses a sleep staging method based on a multisensor feature optimization algorithm. A continuous-wave radar sensor and an audio sensor first acquire signals simultaneously. The signals are then digitally processed to obtain the vital-sign signals, comprising respiration, heartbeat, and body movement, together with the snore signal. Features are then extracted, the weight of each feature in the fused feature set is adjusted by a feature-adjustment optimization algorithm, and a classifier is trained. Finally, sleep staging is performed by the trained classifier. The method of the invention is effective and feasible, performs reliably, has little influence on the user's sleep, and assesses the user's sleep accurately.

Description

Sleep staging method based on multisensor feature optimization algorithm
Technical field
The invention belongs to the fields of radar technology and acoustics, and in particular relates to collecting data with multiple sensors, extracting features, fusing the data at the feature level, and performing sleep staging.
Background technology
As biomedical engineering, big data, and related technologies have been applied clinically, it has been found that whole-night sleep information can, to some extent, reflect the health status of the human body and thus support the prevention and treatment of disease. During sleep, various physiological signals such as the electroencephalogram and electrocardiogram change, and these changes correlate with the depth of sleep. Medically, sleep is divided into several stages according to physiological indicators such as brain waves, the electrocardiogram, and eye movements. In practice there are no sharp boundaries between the sleep stages; they change gradually and are connected, and whole-night sleep staging is periodic and follows certain rules. A night of sleep generally contains 4 to 6 sleep cycles, and successive cycles are related to one another, recurring in turn.
As people's standards for health rise, the health information reflected by sleep staging receives increasing attention. Most sleep-monitoring products use contact monitoring: several electrodes are pasted at multiple positions on the subject's body, and an oronasal airflow tube and abdominal straps are worn. Contact monitoring strongly affects the subject's physiological and psychological state; the test environment is complex, professional medical staff are required to operate the equipment, and the price is relatively high. The subject must be in direct contact with the instrument, which easily causes discomfort. In a contactless sleep-staging monitoring method the patient does not need to touch any instrument; the method is easy to operate, suitable for long-term monitoring, and relatively inexpensive. However, no such method has been described in the prior art.
Invention content
The purpose of the present invention is to provide a sleep staging method based on a multisensor feature optimization algorithm that improves the accuracy of contactless sleep-monitoring products.
The technical solution that realizes the purpose of the invention is a sleep staging method based on a multisensor feature optimization algorithm, comprising the following steps:
Step 1: The tester lies flat on the test bed. The radar sensor is placed directly above the chest to acquire the radar echo signal; the audio sensor is placed facing the face to acquire the breathing and snore signals.
The observation area of the bed comprises the ceiling and the bed surface, which are parallel. The tester lies flat on the bed surface. The radar is mounted at the center of the ceiling with its antenna facing the bed surface, directly opposite the tester's chest; a continuous-wave radar is used. The audio sensor is mounted at the center of the ceiling with the microphone facing the tester's head.
Step 2: The radar sensor and the audio sensor are switched on simultaneously to acquire signals while the tester sleeps. The radar illuminates the surface of the chest and forms an echo, and the snore produced during sleep is recorded by the audio sensor.
The radar detection is specifically: the continuous-wave radar illuminates the tester's chest; the transmitted signal is reflected by the chest wall, the reflected echo carries the corresponding vital-sign information, and the echo signal is received by the radar receiver.
Step 3: The original echo is processed by digital signal processing, specifically: arctangent demodulation is applied to the radar signal to extract the body-movement, respiration, and heartbeat signals, and the original audio is processed with a noise-reduction algorithm.
Step 4: Eleven features are extracted from the respiration, body-movement, and heartbeat signals obtained in step 3, and 23 features are extracted from the audio signal. The radar and audio features are spliced and fused, the fused data are examined, and each segment is routed to one of two models for decision according to whether audio features are present.
The 11 radar features are calculated in units of one minute, but features are extracted at intervals of T seconds, where 20 < T < 35:
(1) RPM
RPM is the number of respirations per minute;
(2) RPM_VAR
RPM_VAR is the variance of the per-minute respiration rate. Let the respiration rate at the n-th interpolated point be r(n), let $\bar{r}$ be the whole-night average of the per-minute respiration rate, and let N_var be the number of interpolated points per minute; the calculation formula is:

$$S_{var}^{2}=\frac{1}{N_{var}}\sum_{n=1}^{N_{var}}\left(r(n)-\bar{r}\right)^{2}$$

(3) RPM_ADA
RPM_ADA is the difference accumulation of the per-minute respiration-signal amplitude. With A_bre(n) the respiration-signal amplitude and N the number of interpolated points per minute, the difference accumulation Ada of the respiration-signal amplitude per minute is:

$$Ada=\sum_{n=2}^{N}\left|A_{bre}(n)-A_{bre}(n-1)\right|$$
(4) RPM_MOVE
RPM_MOVE is the body-movement feature in the respiration signal;
(5) BPM
BPM is the number of heartbeats per minute;
(6) BPM_VAR
BPM_VAR is the variance of the per-minute heartbeat rate. With b_bre(n) the heartbeat count at the n-th interpolated point, $\bar{b}$ the whole-night average amplitude of the heartbeat signal, and N the number of interpolated points per minute, the variance S_var² of the per-minute heartbeat rate is:

$$S_{var}^{2}=\frac{1}{N}\sum_{n=1}^{N}\left(b_{bre}(n)-\bar{b}\right)^{2}$$

(7) BPM_ADA
BPM_ADA is the difference accumulation of the per-minute heartbeat-signal amplitude, where A_ada'(n) is the heartbeat-signal amplitude and N is the number of interpolated points per minute:

$$Ada'=\sum_{n=2}^{N}\left|A_{ada}'(n)-A_{ada}'(n-1)\right|$$
(8) BPM_MOVE
BPM_MOVE is the body-movement feature in the heartbeat signal. A threshold is set according to the amplitude of the body-movement component in the heartbeat signal, and the number of points exceeding the threshold is used to compute the body-movement feature. N interpolated points are taken per minute, with 90 < N < 110. If the signal amplitude of the current interpolated point exceeds the threshold, and more than 50 points exceed the threshold within the T seconds before and after the current interpolated point, the point is recorded as a body-movement point of the heartbeat signal. The number of body-movement points of the heartbeat signal counted within T seconds is taken as the body-movement feature of the heartbeat signal, with 20 < T < 35;
(9) REM
REM is a feature computed from the difference between the respiration rates of the first and last T seconds of a given minute. Let $\bar{r}_{former}(j_{rem}+i_{rem})$ denote the respiration rate in the first T seconds of minute $j_{rem}+i_{rem}$ and $\bar{r}_{latter}(j_{rem}+i_{rem})$ the respiration rate in the last T seconds of the same minute, and let k_rem be a constant. REM(j) is the average, over the current minute and the k_rem minutes before and after it (five minutes in total when k_rem = 2), of the per-minute front/back T-second respiration-rate differences, with 20 < T < 35:

$$REM(j_{rem})=\frac{1}{2k_{rem}+1}\sum_{i_{rem}=-k_{rem}}^{k_{rem}}\left|\bar{r}_{former}(j_{rem}+i_{rem})-\bar{r}_{latter}(j_{rem}+i_{rem})\right|$$
(10) DEEP
The body-movement feature DEEP is calculated as follows. Let $\bar{A}_{move}$ denote the amplitude of the body-movement signal during sleep and $\bar{A}_{bre}$ the amplitude of the respiration signal during sleep; DEEP(j_deep) is the proportion of the body-movement signal in the sum of the respiration and body-movement signals:

$$DEEP(j_{deep})=\frac{\sum \bar{A}_{move}}{\sum \bar{A}_{bre}+\sum \bar{A}_{move}}$$
(11) SampEn
The radar echo signal X(t) is sampled to obtain the one-dimensional time series X(n), 1 ≤ n ≤ k_r, which is reconstructed into the phase-space vectors X*(ww), 1 ≤ ww ≤ k − (m − 1), where m is the embedding dimension. The distance between any two vectors X*(w1) and X*(w2) in the phase space is defined as:

$$d\left(X^{*}(w_{1}),X^{*}(w_{2})\right)=\max_{0\le u\le m-1}\left|X(w_{1}+u)-X(w_{2}+u)\right|$$

The specific steps for the sample entropy are:
Given the similarity tolerance r = hh·SD, where hh is a constant in the range 0.1 to 0.25 and SD is the standard deviation of the radar echo time series X(n), template matching is performed for each X*(w3) in the space:

$$G_{w_{3}}^{m}(r)=\frac{1}{k-m}\,\operatorname{num}\left\{d\left(X^{*}(w_{3}),X^{*}(w_{4})\right)<r,\ w_{4}\neq w_{3}\right\}$$

where num{·} denotes the number of vectors X*(w4) whose distance to X*(w3) is less than r.
Averaging over w3 gives δ^m(r):

$$\delta^{m}(r)=\frac{1}{k-m+1}\sum_{w_{3}=1}^{k-m+1}G_{w_{3}}^{m}(r)$$

The dimension m is increased by 1 and the above steps are repeated to obtain δ^{m+1}(r).
The sample entropy of the radar echo time series X(n) is:

$$SampEn(m,r)=\lim_{k\to\infty}\left[-\ln\frac{\delta^{m+1}(r)}{\delta^{m}(r)}\right]$$

Since k is finite in practice, the sample entropy is estimated as:

$$SampEn(m,r,k)=-\ln\frac{\delta^{m+1}(r)}{\delta^{m}(r)}$$
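The sample-entropy computation above can be illustrated with a short numerical sketch. This is the standard SampEn estimator under the tolerance r = hh·SD given in the description, not the patented implementation; the function name and default values are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, hh=0.2):
    # Tolerance r = hh * SD, with hh in 0.1-0.25 as in the description
    x = np.asarray(x, dtype=float)
    k = len(x)
    r = hh * np.std(x)

    def match_count(dim):
        # Phase-space template vectors X*(w), w = 1 .. k-(dim-1)
        vecs = np.array([x[w:w + dim] for w in range(k - dim + 1)])
        total = 0
        for i in range(len(vecs)):
            # Chebyshev distance between templates, per the distance definition
            d = np.max(np.abs(vecs - vecs[i]), axis=1)
            total += np.sum(d < r) - 1   # exclude the self-match
        return total

    # SampEn(m, r, k) = -ln(matches at dimension m+1 / matches at dimension m)
    return -np.log(match_count(m + 1) / match_count(m))
```

On a strongly periodic, respiration-like series the estimate is low, while on noise-like data it is high, which is what makes it useful for separating sleep stages.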
The 23 audio features are divided into two classes: sleep-respiration-related features, and linear and nonlinear features of the snore:
A. Respiration-related features, 13 in total:
1) RPM: the number of respirations per minute;
2) BVP: the difference between the respiration counts of the front and back T seconds. With b_bvp(n_bvp) the respiration count of the current T seconds, b_bvp(n_bvp−1) that of the preceding T seconds, and b_bvp(n_bvp+1) that of the following T seconds, the calculation formula is:
BVP = |b_bvp(n_bvp) − b_bvp(n_bvp−1)| + |b_bvp(n_bvp) − b_bvp(n_bvp+1)|
where 20 < T < 35;
3) BC: the respiratory cycle, taken as the mean time difference between two successive respiration events;
4) RMSE: the root-mean-square error computed over the sampling points of each audio segment. With x_rmse(n) the current respiration signal, $\bar{x}$ its average value, and N the number of interpolated points per minute:

$$RMSE=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(x_{rmse}(n)-\bar{x}\right)^{2}}$$

5) RPM_VAR: the variation of the respiration count per minute. With x_former the respiration count of the preceding T seconds and x_latter that of the following T seconds, 20 < T < 35:
RPM_VAR = x_former − x_latter
6) EDA: energy-difference accumulation. According to the sampling points within each audio segment, the accumulated energy difference of adjacent segments is computed. With X1(n) and X2(n) adjacent segments and N the number of sampling points in an audio segment:

$$EDA=\sum_{n=1}^{N}\left|X_{1}(n)^{2}-X_{2}(n)^{2}\right|$$

7) Cross_zr: the zero-crossing rate, i.e., the number of zero crossings of the respiration signal per minute;
8-10) Formant1, Formant2, Formant3: three formants. A formant is a region of the spectrum of a sound signal where the energy is concentrated; taking the audio information of the higher-energy parts helps capture the signature of the segment and benefits the discrimination of sleep stages;
11-13) Formant_var1, Formant_var2, Formant_var3: the variances of the three formants in (8-10).
B. Linear and nonlinear features of the snore
1) LLE: the largest Lyapunov exponent. The Lyapunov exponent gives the average rate at which a dynamical system diverges or converges along the principal axes of its phase space.
Using the time delay τ and the embedding dimension M obtained below, the phase space W is reconstructed. For each point W_j, its nearest neighbour W'_j is found and the distance d_j(0) = |W_j − W'_j| is computed.
For each point W_j, the distance to its nearest neighbour W'_j after evolving i steps forward is:
d_j(i) = |W_{j+i} − W'_{j+i}| = d_j(0) × e^{λ×i}
The largest Lyapunov exponent follows from this relation as:

$$\lambda_{1}=\frac{1}{i}\left\langle \ln\frac{d_{j}(i)}{d_{j}(0)}\right\rangle_{j}$$

2) Time delay τ: a common method of computing the delay parameter is the autocorrelation method. Given the one-dimensional time series, the autocorrelation function of the sequence is computed and plotted against time; when the function value first drops to (1 − 1/e) of its initial value, the corresponding time is the time delay;
3) Embedding dimension M: the embedding dimension is one of the two reconstruction parameters of phase-space reconstruction, used in computing the Lyapunov exponent. Using the time delay τ, the one-dimensional snore-event time series is embedded into an m-dimensional space;
4) ApEn: the approximate entropy, a parameter that quantifies the complexity and statistics of a sequence.
The one-dimensional snore-event time series x(n) = (x1, x2, x3, ..., xi, ..., xk) is composed, in order, into v-dimensional vectors Vi = [x(i), x(i+1), ..., x(i+v−1)], i = 1, 2, ..., k−v+1, where k is the length of the snore-event time series x(n). For each value of i, the distance between the vector Vi and the remaining vectors Vj is computed:

$$d_{ij}=\max_{l=0,1,\ldots,v-1}\left|x(i+l)-x(j+l)\right|$$

Given the threshold r = a3 × SD, where a3 takes values in the range 0.1 to 0.25 and SD is the standard deviation of the snore-event time series x(n), the number of d_ij below the threshold r is recorded for each i, and its ratio to the total number of v-dimensional vectors (k−v+1) is computed and denoted $C_{i}^{v}(r)$. The logarithm of $C_{i}^{v}(r)$ is taken and averaged, denoted φ^v(r):

$$\varphi^{v}(r)=\frac{1}{k-v+1}\sum_{i=1}^{k-v+1}\ln C_{i}^{v}(r)$$

The approximate entropy of x(n) is:
ApEn = φ^v(r) − φ^{v+1}(r)
5) N: the count of cases with embedding dimension M ≥ 4;
6) D: the dimension of the time series;
7-8) Alpha-1 (α1) and Alpha-2 (α2) of detrended fluctuation analysis: for the snore-event time series Y2(n) of length k, 1 ≤ n ≤ k, the accumulated deviation is computed:

$$y_{2}(n)=\sum_{i=1}^{n}\left(Y_{2}(i)-\bar{Y}_{2}\right)$$

where $\bar{Y}_{2}$ is the mean of the time series:

$$\bar{Y}_{2}=\frac{1}{k}\sum_{i=1}^{k}Y_{2}(i)$$

y2(n) is divided into n2 non-overlapping sections of length l, where l is the time scale and n2 the number of sections.
A local trend y'2(n) is fitted to each section of the snore-event time series by least squares.
The local trend in each section of y2(n) is removed, and the root mean square of the new sequence is computed:

$$F(l)=\sqrt{\frac{1}{k}\sum_{n=1}^{k}\left(y_{2}(n)-y_{2}'(n)\right)^{2}}$$

The window length l is varied and the above steps are repeated; the fluctuation then follows a power law, F(n) ∝ n^α. Plotting log[F(n)] on the vertical axis against log(n) on the horizontal axis, the slope of the curve is the scaling exponent α of the time series.
The scaling exponent α obtained with window length l1 is Alpha-1 (α1) of the detrended fluctuation analysis;
the scaling exponent α obtained with window length l2 is Alpha-2 (α2) of the detrended fluctuation analysis;
9) Sample entropy SampEn: the snore-event time series Y2(n), 1 ≤ n ≤ k, is reconstructed into the m-dimensional phase-space vectors $Y_{2}^{*}(i_{2})$, 1 ≤ i2 ≤ k − (m − 1), and the distance between any two vectors $Y_{2}^{*}(j_{3})$ and $Y_{2}^{*}(j_{4})$ in the phase space is defined as:

$$d^{*}_{j_{3}j_{4}}=\max_{0\le u\le m-1}\left|Y_{2}(j_{3}+u)-Y_{2}(j_{4}+u)\right|$$

The specific steps for the sample entropy are:
Given the similarity tolerance r1, template matching is performed for each $Y_{2}^{*}(j_{3})$ in the space:

$$B_{j_{3}}^{m}(r_{1})=\frac{1}{k-m}\,N\_d_{j_{3}j_{4}}$$

where N_d_{j3j4} denotes the number of d*_{j3j4} < r1.
Averaging over i2 gives B^m(r1):

$$B^{m}(r_{1})=\frac{1}{k-m+1}\sum_{j_{3}=1}^{k-m+1}B_{j_{3}}^{m}(r_{1})$$

The dimension m is increased by 1 and the above steps are repeated to obtain B^{m+1}(r1).
The sample entropy of the snore-event time series is:

$$SampEn(m,r_{1})=\lim_{k\to\infty}\left[-\ln\frac{B^{m+1}(r_{1})}{B^{m}(r_{1})}\right]$$

Since k is finite in practice, the sample entropy is estimated as:

$$SampEn(m,r_{1},k)=-\ln\frac{B^{m+1}(r_{1})}{B^{m}(r_{1})}$$
10) Shannon entropy H: i.e., the information entropy, defined over the occurrence probabilities of discrete random events. Let each snore-event time series be Y = {y1, y2, ..., yk} with corresponding probabilities p(Y = yi); the information entropy of the random variable is then:

$$H(Y)=-\sum_{i=1}^{k}p(Y=y_{i})\ln p(Y=y_{i})$$

Temporal feature a_t: let {a1, a2, ..., aN} be the N serial numbers of the radar or audio feature groups arranged in chronological order, corresponding to the sleep period in which they occur; then a_t = a / a_max is the temporal feature of the sleep stage.
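The two simplest quantities above, the Shannon entropy of feature 10 and the temporal feature a_t, can be sketched as follows; the series passed in and the function names are illustrative only.

```python
import numpy as np
from collections import Counter

def shannon_entropy(events):
    """H = -sum p(y) * ln p(y) over the empirical distribution of the series."""
    counts = Counter(events)
    p = np.array(list(counts.values()), dtype=float) / len(events)
    return float(-np.sum(p * np.log(p)))

def temporal_feature(a, a_max):
    """a_t = a / a_max: position of the current feature group within the night."""
    return a / a_max
```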
Step 5: The temporal feature is added to each feature model obtained in step 4, and the feature-level fusion system model is constructed, comprising a radar-rest-segment model and a radar+audio-segment model. The radar-rest-segment model is built from the feature data of the radar sensor alone; the radar+audio-segment model is built from the feature data obtained jointly by the radar sensor and the audio sensor.
The feature-level fusion system model is constructed as follows:
Step 5-1: Two models are built separately:
(1) Radar-rest-segment model: built from the feature data of the radar sensor alone. The radar feature data are split according to the periods that have corresponding audio feature data, giving the radar rest segments {t1, t2}, {t2, t3}, ..., {tn, tn+1} and the radar segments {t1, t2}, {t2, t3}, ..., {tn, tn+1}; the radar rest segments {t1, t2}, {t2, t3}, ..., {tn, tn+1} are used to train the radar-rest-segment model;
(2) Radar+audio-segment model: built from the feature data obtained jointly by the radar sensor and the audio sensor. The fusion of the feature data of simultaneous radar and audio is called a radar+audio segment, and the radar+audio segments {t1, t2}, {t2, t3}, ..., {tn, tn+1} are used to train the radar+audio-segment model;
Step 5-2: The feature-level fusion system model, the whole-night radar+audio model, is constructed as the overall model composed of the two models (1) and (2). The features obtained by the radar sensor and the audio sensor are time-aligned and fused before being fed into the whole-night radar+audio model. The radar-rest-segment model is trained on the 11 radar features; the radar+audio-segment model is trained on the fusion of the 11 radar features and 23 audio features obtained simultaneously by the two sensors. The two models are used to obtain sleep-staging predictions for the radar rest segments {t1, t2}, {t2, t3}, ..., {tn, tn+1} and the radar+audio segments {t1, t2}, {t2, t3}, ..., {tn, tn+1} respectively; the corresponding periods are spliced to obtain the whole-night training data set of the feature-level fusion system.
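A minimal sketch of the two-model construction with scikit-learn is given below. "Subspace KNN" and "Bagged Trees" are MATLAB classifier names, so the random-subspace KNN and bagged-tree ensembles here are assumed equivalents, and all variable names are illustrative.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def train_fusion_models(X_radar, y_radar, X_fused, y_fused, seed=0):
    # Radar-rest-segment model: KNN ensembles on random feature subspaces
    radar_rest_model = BaggingClassifier(
        KNeighborsClassifier(),
        n_estimators=30, max_features=0.5,
        bootstrap=False, bootstrap_features=False, random_state=seed,
    ).fit(X_radar, y_radar)          # rows: 1 temporal + 11 radar features

    # Radar+audio-segment model: bagged decision trees
    radar_audio_model = BaggingClassifier(
        DecisionTreeClassifier(),
        n_estimators=30, random_state=seed,
    ).fit(X_fused, y_fused)          # rows: 1 temporal + 11 radar + 23 audio

    return radar_rest_model, radar_audio_model
```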
Step 6: The weight of each of the above features is calculated with the feature-adjustment algorithm, the features are screened according to their weights, and the result then enters the classifier for decision. A Subspace KNN classifier is used for the radar-rest-segment model, and a Bagged Trees classifier for the radar+audio-segment model, yielding the sleep-staging data.
Calculating the feature weights with the feature-adjustment algorithm and screening the features by weight is specifically:
Step 6-1: A data segment containing the 11 radar features and 23 audio features is drawn at random from the training sample set D and taken as the sample R; let m be the number of sampling repetitions. From the set of samples of the same sleep-stage class as R, the k samples Hj (j = 1, 2, ..., k) nearest to R are found. The training sample set D contains the 11 radar features and the 23 audio features.
Step 6-2: From each set of samples of a sleep-stage class different from that of R, the k nearest samples Mj(C) are found, where C denotes a class other than the sleep-stage class of R.
Step 6-3: The weight of each feature is updated; the update formula is:

$$W(A)=W(A)-\sum_{j=1}^{k}\frac{\operatorname{diff}(A,R,H_{j})}{mk}+\sum_{C\neq\operatorname{class}(R)}\frac{p(C)}{1-p(\operatorname{class}(R))}\sum_{j=1}^{k}\frac{\operatorname{diff}(A,R,M_{j}(C))}{mk}$$

In the formula, diff(A, R1, R2) denotes the difference between samples R1 and R2 on feature A, and Mj(C) denotes the j-th nearest sample in class C. For a numeric feature A, diff is computed as:

$$\operatorname{diff}(A,R_{1},R_{2})=\frac{\left|R_{1}[A]-R_{2}[A]\right|}{\max(A)-\min(A)}$$

A denotes any of the features in the sample set D containing the 11 radar features and 23 audio features.
Step 6-4: The features are judged: a feature whose ReliefF weight is negative plays a negative role in the sensor fusion and is deleted; a feature whose ReliefF weight is positive is retained in the sensor-fusion sleep classification.
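Steps 6-1 to 6-4 amount to the standard ReliefF weighting. A compact sketch for numeric features is shown below; the sampling count m and neighbour count k are free parameters whose values are not fixed by the description.

```python
import numpy as np

def relieff_weights(X, y, m=200, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12     # normaliser for diff()
    w = np.zeros(d)
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / n))

    for _ in range(m):                               # step 6-1: sample R
        i = rng.integers(n)
        R, cls = X[i], y[i]
        dist = np.abs(X - R).sum(axis=1)
        dist[i] = np.inf                             # never match R with itself
        for C in classes:
            idx = np.where(y == C)[0]
            near = idx[np.argsort(dist[idx])[:k]]    # hits H_j or misses M_j(C)
            diff = (np.abs(X[near] - R) / span).sum(axis=0)
            if C == cls:
                w -= diff / (m * k)                  # near hits lower the weight
            else:
                w += prior[C] / (1 - prior[cls]) * diff / (m * k)
    return w

# Step 6-4: delete features with w < 0, keep those with w > 0.
```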
Step 7: The results of the two classifiers are spliced in chronological order to obtain the final sleep-staging result. Specifically:
Step 7-1: The models obtained in step 5 are further processed with the feature-screening result of step 6: the features whose weights computed in step 6 are negative are removed, and the features with positive weights are kept. If the features obtained by the radar and audio sensors include audio features, step 7-2 is executed; otherwise step 7-3 is executed.
Step 7-2: The 35 radar and audio features (1 temporal feature + 11 radar features + 23 audio features) are fed into the radar+audio-segment model, and the sleep-staging result is obtained with the Bagged Trees classifier.
Step 7-3: The 12 radar-only features (1 temporal feature + 11 radar features) are fed into the radar-rest-segment model, and the sleep-staging result is obtained with the Subspace KNN classifier.
Step 7-4: The results of the two classifiers are spliced in chronological order, finally giving the predicted label for each feature packet.
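Step 7-4 is a plain time-ordered concatenation of the two models' per-segment outputs; a sketch under the assumption that each segment carries its start time:

```python
def splice_results(segments):
    """segments: list of (start_time, per-minute label list) from either model."""
    whole_night = []
    for _, labels in sorted(segments, key=lambda s: s[0]):
        whole_night.extend(labels)
    return whole_night
```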
Compared with the prior art, the present invention has the following remarkable advantages: 1) in the contactless sleep-staging monitoring method the patient does not need to touch any instrument; the method is easy to operate, suitable for long-term monitoring, and relatively inexpensive; 2) the invention uses multiple sensors to acquire whole-night sleep data and performs sleep staging from different angles, making the staging result more scientific; 3) the feature-level-fusion classifier merges the decision results of the individual sensors and uses multiple sensor data, improving the accuracy; 4) feature-level fusion has strong fault tolerance and is also applicable to heterogeneous sensors.
The present invention is described further with reference to the accompanying drawings of the specification.
Description of the drawings
Fig. 1 is the flow chart of the sleep staging method based on the multisensor feature optimization algorithm of the present invention.
Fig. 2 is the experiment schematic diagram of the method of the present invention.
Fig. 3 is the radar signal processing flow chart of the method of the present invention.
Fig. 4 is the framework diagram of the feature-level fusion system model.
Fig. 5 is the flow chart of the radar+audio-segment model.
Fig. 6 is the outline flow chart of the whole-night radar+audio feature-level fusion system model.
Fig. 7 compares staging results. Panel (a) is the standard sleep staging; panel (b) is the direct staging result without the Relief feature optimization algorithm; panel (c) is the staging result after the Relief feature optimization algorithm.
Specific embodiments
With reference to the accompanying drawings, the sleep staging method based on a multisensor feature optimization algorithm of the present invention comprises the following steps:
Step 1: The equipment required by the whole system comprises: (1) a radar sensor: a continuous-wave radar that acquires the vital signs of the human body; (2) an audio sensor: acquires the breathing and snore signals. As shown in Fig. 2, the tester lies on the bed, the radar is placed directly above the chest, and the audio device is placed facing the face.
Step 2: The radar sensor and the audio sensor are switched on simultaneously to acquire signals while the tester sleeps. The radar illuminates the surface of the chest and forms an echo, and the snore produced during sleep is recorded by the audio sensor. When the polysomnograph reaches the monitoring stop time or the tester has woken, the radar and audio devices are switched off.
Step 3: The original echo is processed by digital signal processing, specifically: arctangent demodulation is applied to the radar signal to extract the body-movement, respiration, and heartbeat signals, and the original audio is processed with a noise-reduction algorithm.
Step 3-1: The system uses a continuous-wave radar, a digital intermediate-frequency radar with a radio frequency of 2.475 GHz. The radar transmits an electromagnetic wave that illuminates the human body and is reflected from the body surface to form the echo signal. Because the heartbeat and breathing of a sleeping person involve periodic chest movement, the transmitted radar signal is reflected by the chest wall, the reflected echo carries the corresponding vital-sign information, and the echo signal is received by the radar receiver.
As shown in Fig. 3, arctangent demodulation is first applied to the acquired I/Q echo signals, and the body-movement signal is extracted according to the characteristics of the signal. Then, according to the characteristics and differences of respiration and heartbeat, suitable filters are chosen and the respiration and heartbeat signals are obtained by filtering. From the respiration, heartbeat, and body-movement information obtained by this signal processing, the features used to decide the sleep stage are extracted.
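The arctangent demodulation of the I/Q echo can be sketched as follows; the phase unwrapping and mean removal are standard practice, and the names are illustrative.

```python
import numpy as np

def arctan_demodulate(i_sig, q_sig):
    # The phase of I + jQ is proportional to the chest displacement
    phase = np.unwrap(np.arctan2(q_sig, i_sig))   # unwrap to avoid pi jumps
    return phase - phase.mean()                   # remove the DC offset
```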
(1) Extraction of the respiration signal
The normal adult respiration rate is 16 to 24 breaths per minute. Since all limiting cases must be considered, the respiration-rate range is set to 8 to 24 breaths per minute (below 8 is abnormally slow breathing, above 24 is tachypnea), so the frequency range of the respiration signal is 0.13 to 0.4 Hz. The heartbeat frequency is much higher than the respiration frequency; therefore a filter is designed whose low-frequency cut-off is below 0.13 Hz and whose high-frequency cut-off is below 0.8 Hz. This effectively removes the heartbeat signal and suppresses low-frequency noise and DC offset, yielding a purer respiration signal.
(2) Extraction of the heartbeat signal
The normal adult heart rate is 60 to 100 beats per minute; above 100 is medically judged "tachycardia" and below 60 "bradycardia". The higher harmonics of the respiration signal are the main interference in obtaining the heartbeat signal. Considering the frequency of the respiration signal, the low-frequency cut-off of the heartbeat signal must be above the respiration frequency; therefore the frequency range of the heartbeat signal is set to 0.83 to 3.3 Hz. A high-pass filter with a cut-off frequency of 0.8 Hz and a low-pass filter with a cut-off frequency of 4 Hz are designed; the combination of the two filters yields an effective heartbeat signal.
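A sketch of the respiration and heartbeat filters with SciPy, using pass-bands consistent with the ranges above (roughly 0.13-0.4 Hz and 0.8-4 Hz); the Butterworth order is an arbitrary choice for illustration.

```python
from scipy.signal import butter, filtfilt

def extract_resp_heart(phase, fs):
    b_r, a_r = butter(4, [0.13, 0.4], btype="bandpass", fs=fs)  # respiration band
    b_h, a_h = butter(4, [0.8, 4.0], btype="bandpass", fs=fs)   # heartbeat band
    respiration = filtfilt(b_r, a_r, phase)   # zero-phase filtering
    heartbeat = filtfilt(b_h, a_h, phase)
    return respiration, heartbeat
```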
(3) Extraction of the body-movement signal
During whole-night sleep, body movements of different degrees occur: slight movements resemble small body tremors and arm swings, while large movements resemble turning over or rolling onto one side. The detection of the body-movement signal is also extremely important as a feature reference for sleep staging. According to the characteristics of the different movement types, body-movement detection methods can be divided into three classes: the amplitude-threshold method, the energy-spectrum method, and the mean-square-error method.
Amplitude-threshold method: when a body movement occurs, the echo amplitude of the radar signal increases suddenly and becomes much larger than the normal respiration amplitude; when the movement ends, the echo amplitude returns to its normal value. Therefore, according to the characteristics of the body-movement signal under normal conditions, a detection threshold is set: values above the threshold are judged as body movement and values below it as no body movement.
Energy-spectrum method: the energy spectrum is one of the four frequency characteristics of a signal and reflects its strength. When a body-movement event occurs, the energy increases significantly. The energy of a discrete signal is:

$$E=\sum_{n}\left|x(n)\right|^{2}$$

Mean-square-error method: a time-domain method, the square root of the ratio of the summed squared deviations between the observations x(n) and the mean value $\bar{x}$ to the number of observations N; it reflects the dispersion of the samples:

$$\sigma=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(x(n)-\bar{x}\right)^{2}}$$

For a small-amplitude body movement the energy is also very small, but the mean-square error can still detect it, so smaller movements can be distinguished. When no body-movement event occurs, the mean-square error remains above 0 and a large-amplitude breath may be mistaken for body movement; the energy is then small, however, very different from the case of actual movement. Therefore the combination of energy and mean-square error can be used as a pre-judgement condition, after which the amplitude-threshold method detects the body-movement signal accurately.
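The combined detector can be sketched as follows: energy and mean-square error act as the pre-judgement, and the amplitude threshold confirms. All three thresholds are assumed to be tuned empirically.

```python
import numpy as np

def detect_body_movement(x, win, amp_th, energy_th, mse_th):
    flags = []
    for s in range(0, len(x) - win + 1, win):
        seg = x[s:s + win]
        energy = np.sum(seg ** 2)                         # discrete-signal energy
        mse = np.sqrt(np.mean((seg - seg.mean()) ** 2))   # dispersion of the window
        pre = energy > energy_th and mse > mse_th         # pre-judgement condition
        flags.append(pre and np.max(np.abs(seg)) > amp_th)
    return np.array(flags)
```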
Step 3-2: The audio sensor can diagnose sleep apnea and support the classification of the sleep stages by detecting the breathing and snore signals during sleep.
Breathing and snoring are both breath sounds of a person in normal processes (awake or asleep); the difference is that the latter is generally a loud, heavy nasal sound during sleep. The audio signal during sleep is divided into sound sections and silent sections; because the physiological characteristics of each tester differ, the whole-night signal may contain a large number of silent sections. The sound sections contain several types of sound, such as breathing, snoring, coughing, and ambient noise. The breathing signal and the snore signal must therefore be distinguished.
As shown in Fig. 4, this is divided into two steps: (1) interception, segmentation, and noise reduction of the sound fragments; (2) signal classification, comprising breath detection and snore detection.
Step 3-2-1: The whole-night audio signal is divided into 5-minute segments; the audio signal contains breathing, snoring, and ambient noise, and purer breathing and snore signals are obtained by noise-reduction processing.
Step 3-2-2: Respiration events and snore events are detected with an adaptive threshold method.
Extraction of breathing and snore signals: (1) the sound sections, i.e., the endpoints of the sound events, are detected first; common endpoint-detection methods include the double-threshold method, the spectral-entropy method, the correlation method, the band-variance method, and fast change-point detection algorithms; (2) the snore signal is then distinguished from the non-snore signal within the sound sections.
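Of the endpoint detectors listed, the double-threshold method is the simplest; a short-time-energy sketch is given below, with the two thresholds assumed to be set adaptively per 5-minute segment.

```python
import numpy as np

def double_threshold_endpoints(audio, frame, high, low):
    n = len(audio) // frame
    energy = np.array([np.sum(audio[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    events, i = [], 0
    while i < n:
        if energy[i] > high:           # a frame above the high threshold seeds an event
            start = end = i
            while start > 0 and energy[start - 1] > low:
                start -= 1             # extend left while above the low threshold
            while end + 1 < n and energy[end + 1] > low:
                end += 1               # extend right
            events.append((start * frame, (end + 1) * frame))
            i = end + 1
        else:
            i += 1
    return events
```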
Step 4: Feature extraction is performed on the respiration, body-movement, heartbeat, and audio signals obtained in step 3.
Step 4-1: As shown in Fig. 3, according to the characteristics of the human vital signs in each sleep stage, 11 features used for dividing the sleep stages are extracted from the radar signal. The features are calculated in units of one minute, but are extracted at intervals of 30 seconds.
(1) RPM
RPM is the number of respirations per minute.
(2) RPM_VAR
RPM_VAR is the variance of the per-minute respiration rate. Let the respiration rate at the n-th interpolated point be r(n), let $\bar{r}$ be the whole-night average of the per-minute respiration rate, and let N be the number of interpolated points per minute; the calculation formula is:

$$S_{var}^{2}=\frac{1}{N}\sum_{n=1}^{N}\left(r(n)-\bar{r}\right)^{2}$$

(3) RPM_ADA
RPM_ADA is the difference accumulation of the per-minute respiration-signal amplitude. With A_bre(n) the respiration-signal amplitude and N the number of interpolated points per minute, the difference accumulation Ada of the respiration-signal amplitude per minute is:

$$Ada=\sum_{n=2}^{N}\left|A_{bre}(n)-A_{bre}(n-1)\right|$$
(4) RPM_MOVE
RPM_MOVE is the body-movement feature in the respiration signal.
(5) BPM
BPM is the number of heartbeats per minute.
(6) BPM_VAR
BPM_VAR is the variance of the per-minute heartbeat rate. With b_bre(n) the heartbeat count at the n-th interpolated point, $\bar{b}$ the whole-night average amplitude of the heartbeat signal, and N the number of interpolated points per minute, the variance S_var² of the per-minute heartbeat rate is:

$$S_{var}^{2}=\frac{1}{N}\sum_{n=1}^{N}\left(b_{bre}(n)-\bar{b}\right)^{2}$$

(7) BPM_ADA
BPM_ADA is the difference accumulation of the per-minute heartbeat-signal amplitude, where A_ada'(n) is the heartbeat-signal amplitude and N is the number of interpolated points per minute:

$$Ada'=\sum_{n=2}^{N}\left|A_{ada}'(n)-A_{ada}'(n-1)\right|$$
(8) BPM_MOVE
BPM_MOVE is the body-movement feature in the heartbeat signal. A threshold is set according to the amplitude of the body-movement component in the heartbeat signal, and the number of points exceeding the threshold is used to compute the body-movement feature. N interpolated points are taken per minute, with 90 < N < 110. If the signal amplitude of the current interpolated point exceeds the threshold, and more than 50 points exceed the threshold within the T seconds before and after the current interpolated point, the point is recorded as a body-movement point of the heartbeat signal. The number of body-movement points of the heartbeat signal counted within T seconds is taken as the body-movement feature of the heartbeat signal, with 20 < T < 35.
(9) REM
REM is a feature computed from the difference between the respiration rates of the first and last T seconds of a given minute. In the REM stage of sleep, the feature REM increases significantly, so it can be used to distinguish the REM stage from the non-REM stages.
The specific calculation formula of REM is:

$$REM(j)=\frac{1}{2k+1}\sum_{i=-k}^{k}\left|\bar{r}_{former}(j+i)-\bar{r}_{latter}(j+i)\right|$$

where $\bar{r}_{former}(j+i)$ denotes the respiration rate in the first T seconds of minute j+i and $\bar{r}_{latter}(j+i)$ the respiration rate in the last T seconds of the same minute. Here k takes the value 2, so REM(j) is the average, over the current minute j and the two minutes before and after it (five minutes in total), of the per-minute front/back T-second respiration-rate differences, with 20 < T < 35.
(10) DEEP
From actual sleep signals it is known that when a person is in the deep-sleep stage, the body-movement signal is very small and its frequency is the lowest of all stages. Therefore, according to the relation between the body-movement signal and the respiration signal, the DEEP feature is calculated as:

$$DEEP(j_{deep})=\frac{\sum \bar{A}_{move}}{\sum \bar{A}_{bre}+\sum \bar{A}_{move}}$$

where $\bar{A}_{move}$ denotes the amplitude of the body-movement signal during sleep and $\bar{A}_{bre}$ the amplitude of the respiration signal during sleep; DEEP(j_deep) is the proportion of the body-movement signal in the sum of the respiration and body-movement signals. During deep sleep the DEEP feature is smaller than in the other sleep periods.
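The REM and DEEP features, as reconstructed above, can be sketched as follows; the per-minute arrays and the function names are illustrative.

```python
import numpy as np

def rem_feature(r_former, r_latter, j, k=2):
    # Mean over minutes j-k .. j+k of |front-T-s rate - back-T-s rate|
    idx = range(max(0, j - k), min(len(r_former), j + k + 1))
    return float(np.mean([abs(r_former[i] - r_latter[i]) for i in idx]))

def deep_feature(move_amp, breath_amp):
    # Proportion of body-movement amplitude in respiration + body movement
    total = np.sum(breath_amp) + np.sum(move_amp)
    return float(np.sum(move_amp) / total) if total else 0.0
```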
(11) SampEn
The radar echo signal X(t) is sampled to obtain the one-dimensional time series X(n), 1 ≤ n ≤ k_r, which is reconstructed into the phase-space vectors X*(ww), 1 ≤ ww ≤ k − (m − 1), where m is the embedding dimension. The distance between any two vectors X*(w1) and X*(w2) in the phase space is defined as:

$$d\left(X^{*}(w_{1}),X^{*}(w_{2})\right)=\max_{0\le u\le m-1}\left|X(w_{1}+u)-X(w_{2}+u)\right|$$

The specific steps for the sample entropy are:
Given the similarity tolerance r = hh·SD, where hh is a constant in the range 0.1 to 0.25 and SD is the standard deviation of the radar echo time series X(n), template matching is performed for each X*(w3) in the space:

$$G_{w_{3}}^{m}(r)=\frac{1}{k-m}\,\operatorname{num}\left\{d\left(X^{*}(w_{3}),X^{*}(w_{4})\right)<r,\ w_{4}\neq w_{3}\right\}$$

where num{·} denotes the number of vectors X*(w4) whose distance to X*(w3) is less than r.
Averaging over w3 gives δ^m(r):

$$\delta^{m}(r)=\frac{1}{k-m+1}\sum_{w_{3}=1}^{k-m+1}G_{w_{3}}^{m}(r)$$

The dimension m is increased by 1 and the above steps are repeated to obtain δ^{m+1}(r).
The sample entropy of the radar echo time series X(n) is:

$$SampEn(m,r)=\lim_{k\to\infty}\left[-\ln\frac{\delta^{m+1}(r)}{\delta^{m}(r)}\right]$$

Since k is finite in practice, the sample entropy is estimated as:

$$SampEn(m,r,k)=-\ln\frac{\delta^{m+1}(r)}{\delta^{m}(r)}$$
Step 4-2: According to the characteristics of the human vital signs in each sleep stage, 23 features used for dividing the sleep stages are preliminarily extracted from the audio signal. These 23 features are divided into two classes: sleep-respiration-related features, and linear and nonlinear features of the snore.
A. Respiration-related features, 13 in total:
1) RPM: the number of respirations per minute.
2) BVP (breath variance parameter): the difference between the respiration counts of the front and back T seconds. With b_bvp(n_bvp) the respiration count of the current T seconds, b_bvp(n_bvp−1) that of the preceding T seconds, and b_bvp(n_bvp+1) that of the following T seconds, the calculation formula is:
BVP = |b_bvp(n_bvp) − b_bvp(n_bvp−1)| + |b_bvp(n_bvp) − b_bvp(n_bvp+1)|
where 20 < T < 35.
3) BC (breath cycle): the respiratory cycle, taken as the mean time difference between two successive respiration events.
4) RMSE (root mean square error): computed over the sampling points of each audio segment. With x_rmse(n) the current respiration signal, $\bar{x}$ its average value, and N the number of interpolated points per minute:

$$RMSE=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(x_{rmse}(n)-\bar{x}\right)^{2}}$$

5) RPM_VAR: the variation of the respiration count per minute. With x_former the respiration count of the preceding T seconds and x_latter that of the following T seconds, 20 < T < 35:
RPM_VAR = x_former − x_latter
6) EDA (energy difference accumulation): according to the sampling points within each audio segment, the accumulated energy difference of adjacent segments is computed. With X1(n) and X2(n) adjacent segments and N the number of sampling points in an audio segment:

$$EDA=\sum_{n=1}^{N}\left|X_{1}(n)^{2}-X_{2}(n)^{2}\right|$$

7) Cross_zr (cross zero): the zero-crossing rate, i.e., the number of zero crossings of the respiration signal per minute.
8-10) Formant1, Formant2, Formant3: three formants. A formant is a region of the spectrum of a sound signal where the energy is concentrated; taking the audio information of the higher-energy parts helps capture the signature of the segment and benefits the discrimination of sleep stages.
11-13) Formant_var1, Formant_var2, Formant_var3: the variances of the three formants in (8-10).
B. Linear and nonlinear features of the snore
1) LLE: the largest Lyapunov exponent. Motion in a chaotic system is extremely sensitive to the initial conditions: two nearby initial values in a dynamical system generate similar trajectories that separate exponentially over time, and the Lyapunov exponent describes this process. It gives the average rate at which the dynamical system diverges or converges along the principal axes of its phase space.
Using the time delay τ and the embedding dimension M obtained below, the phase space W is reconstructed. For each point W_j, its nearest neighbour W'_j is found and the distance d_j(0) = |W_j − W'_j| is computed.
For each point W_j, the distance to its nearest neighbour W'_j after evolving i steps forward is:
d_j(i) = |W_{j+i} − W'_{j+i}| = d_j(0) × e^{λ×i}
The largest Lyapunov exponent follows from this relation as:

$$\lambda_{1}=\frac{1}{i}\left\langle \ln\frac{d_{j}(i)}{d_{j}(0)}\right\rangle_{j}$$

2) Time delay τ: the time delay is one of the two reconstruction parameters of phase-space reconstruction. A common method of computing the delay parameter is the autocorrelation method: given the one-dimensional time series, the autocorrelation function of the sequence is computed and plotted against time; when the function value first drops to (1 − 1/e) of its initial value, the corresponding time is the time delay.
3) Embedding dimension M: the embedding dimension is one of the two reconstruction parameters of phase-space reconstruction, used in computing the Lyapunov exponent. Using the time delay τ, the one-dimensional snore-event time series is embedded into an m-dimensional space.
4) ApEn: the approximate entropy, a parameter that quantifies the complexity and statistics of a sequence.
The one-dimensional snore-event time series x(n) = (x1, x2, x3, ..., xi, ..., xk) is composed, in order, into v-dimensional vectors Vi = [x(i), x(i+1), ..., x(i+v−1)], i = 1, 2, ..., k−v+1, where k is the length of the snore-event time series x(n). For each value of i, the distance between the vector Vi and the remaining vectors Vj is computed:

$$d_{ij}=\max_{l=0,1,\ldots,v-1}\left|x(i+l)-x(j+l)\right|$$

Given the threshold r = a3 × SD, where a3 takes values in the range 0.1 to 0.25 and SD is the standard deviation of the snore-event time series x(n), the number of d_ij below the threshold r is recorded for each i, and its ratio to the total number of v-dimensional vectors (k−v+1) is computed and denoted $C_{i}^{v}(r)$. The logarithm of $C_{i}^{v}(r)$ is taken and averaged, denoted φ^v(r):

$$\varphi^{v}(r)=\frac{1}{k-v+1}\sum_{i=1}^{k-v+1}\ln C_{i}^{v}(r)$$

The approximate entropy of x(n) is:
ApEn = φ^v(r) − φ^{v+1}(r)
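A numerical sketch of the ApEn computation with r = a3·SD follows the standard estimator (self-matches included, as ApEn requires); the function name and defaults are assumptions.

```python
import numpy as np

def approximate_entropy(x, v=2, a3=0.2):
    x = np.asarray(x, dtype=float)
    k = len(x)
    r = a3 * np.std(x)                                   # threshold r = a3 * SD

    def phi(dim):
        vecs = np.array([x[i:i + dim] for i in range(k - dim + 1)])
        c = []
        for i in range(len(vecs)):
            d = np.max(np.abs(vecs - vecs[i]), axis=1)   # d_ij, Chebyshev distance
            c.append(np.sum(d < r) / (k - dim + 1))      # ratio C_i^v(r)
        return float(np.mean(np.log(c)))

    return phi(v) - phi(v + 1)                           # ApEn = phi^v - phi^{v+1}
```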
5) N: the count of cases with embedding dimension M ≥ 4.
6) D: the dimension of the time series.
7-8) Alpha-1 (α1) and Alpha-2 (α2) of detrended fluctuation analysis: for the snore-event time series Y2(n) of length k, 1 ≤ n ≤ k, the accumulated deviation is computed:

$$y_{2}(n)=\sum_{i=1}^{n}\left(Y_{2}(i)-\bar{Y}_{2}\right)$$

where $\bar{Y}_{2}$ is the mean of the time series:

$$\bar{Y}_{2}=\frac{1}{k}\sum_{i=1}^{k}Y_{2}(i)$$

y2(n) is divided into n2 non-overlapping sections of length l, where l is the time scale (window length) and n2 the number of sections (windows).
A local trend y'2(n) is fitted to each section of the snore-event time series by least squares.
The local trend in each section of y2(n) is removed, and the root mean square of the new sequence is computed:

$$F(l)=\sqrt{\frac{1}{k}\sum_{n=1}^{k}\left(y_{2}(n)-y_{2}'(n)\right)^{2}}$$

The window length l is varied and the above steps are repeated; the fluctuation then follows a power law, F(n) ∝ n^α. Plotting log[F(n)] on the vertical axis against log(n) on the horizontal axis, the slope of the curve is the scaling exponent α of the time series.
The scaling exponent α obtained with window length l1 is Alpha-1 (α1) of the detrended fluctuation analysis; the scaling exponent α obtained with window length l2 is Alpha-2 (α2) of the detrended fluctuation analysis.
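The α1/α2 computation can be sketched with a basic DFA implementation; the two window-length ranges that define α1 and α2 are assumed inputs.

```python
import numpy as np

def dfa_alpha(y, window_lengths):
    y = np.asarray(y, dtype=float)
    profile = np.cumsum(y - y.mean())                      # accumulated deviation y2(n)
    F = []
    for l in window_lengths:
        n_seg = len(profile) // l
        sq = []
        for s in range(n_seg):
            seg = profile[s*l:(s+1)*l]
            t = np.arange(l)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # least-squares local trend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))                     # root mean square F(l)
    alpha, _ = np.polyfit(np.log(window_lengths), np.log(F), 1)
    return float(alpha)

# alpha1 = dfa_alpha(y, short_windows); alpha2 = dfa_alpha(y, long_windows)
```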
9) Sample entropy SampEn: the snore-event time series Y2(n), 1 ≤ n ≤ k, is reconstructed into the m-dimensional phase-space vectors $Y_{2}^{*}(i_{2})$, 1 ≤ i2 ≤ k − (m − 1), and the distance between any two vectors $Y_{2}^{*}(j_{3})$ and $Y_{2}^{*}(j_{4})$ in the phase space is defined as:

$$d^{*}_{j_{3}j_{4}}=\max_{0\le u\le m-1}\left|Y_{2}(j_{3}+u)-Y_{2}(j_{4}+u)\right|$$

The specific steps for the sample entropy are:
Given the similarity tolerance r1, template matching is performed for each $Y_{2}^{*}(j_{3})$ in the space:

$$B_{j_{3}}^{m}(r_{1})=\frac{1}{k-m}\,N\_d_{j_{3}j_{4}}$$

where N_d_{j3j4} denotes the number of d*_{j3j4} < r1.
Averaging over i2 gives B^m(r1):

$$B^{m}(r_{1})=\frac{1}{k-m+1}\sum_{j_{3}=1}^{k-m+1}B_{j_{3}}^{m}(r_{1})$$

The dimension m is increased by 1 and the above steps are repeated to obtain B^{m+1}(r1).
The sample entropy of the snore-event time series is:

$$SampEn(m,r_{1})=\lim_{k\to\infty}\left[-\ln\frac{B^{m+1}(r_{1})}{B^{m}(r_{1})}\right]$$

Since k is finite in practice, the sample entropy is estimated as:

$$SampEn(m,r_{1},k)=-\ln\frac{B^{m+1}(r_{1})}{B^{m}(r_{1})}$$
10) Shannon entropy H: i.e., the information entropy, defined over the occurrence probabilities of discrete random events. Let each snore-event time series be Y2(n), 1 ≤ n ≤ k, with corresponding probabilities p(Y2 = yn); the information entropy of the random variable is then:

$$H(Y_{2})=-\sum_{n=1}^{k}p(Y_{2}=y_{n})\ln p(Y_{2}=y_{n})$$

Temporal feature a_t: let {a1, a2, ..., aN} be the N serial numbers of the radar or audio feature groups arranged in chronological order, corresponding to the sleep period in which they occur; then a_t = a / a_max is the temporal feature of the sleep stage.
Step 5: The feature-level fusion system model is constructed, comprising the radar-rest-segment model and the radar+audio-segment model. The radar-rest-segment model is built from the feature data of the radar sensor alone; the radar+audio-segment model is built from the feature data obtained jointly by the radar sensor and the audio sensor.
Step 5-1: Since the audio sensor cannot collect an audio signal in all periods, and audio features can only be extracted from the sound sections of the audio signal, the audio features are fragmentary, i.e., the whole-night audio features exist only in segments. The radar sensor, in contrast, collects the radar signal in all periods, i.e., the whole-night radar features are continuous. The basic principle of the feature-level fusion system model is therefore to use the data of the radar sensor alone when there is no audio signal, and to use the data of the radar and audio sensors together when both sensors provide information. To distinguish the data types precisely, the parts with audio data are called "audio segments", the radar data corresponding to the periods with audio segments are called "radar segments", and the radar data corresponding to the periods without audio data are called "radar rest segments".
The block diagram of the feature-level fusion system model is shown in Fig. 5. According to the above characteristics of the sensors, two models are first built separately:
(1) Radar-rest-segment model: built from the feature data of the radar sensor alone. The radar feature data are split according to the periods that have corresponding audio feature data, giving radar rest segments 1, 2, ..., N and radar segments 1, 2, ..., N; the radar rest segments 1, 2, ..., N are used to train the radar-rest-segment model.
(2) Radar+audio-segment model: as shown in Fig. 5, built from the feature data obtained jointly by the radar sensor and the audio sensor. The fusion of the feature data of simultaneous radar and audio is called a radar+audio segment, and radar+audio segments 1, 2, ..., N are used to train the radar+audio-segment model. The feature-level fusion system model, the whole-night radar+audio model, is then constructed as the overall model composed of the two models (1) and (2). The number of features extracted by the radar sensor is 11, and the number extracted by the audio sensor is 23 (excluding the temporal feature). The features obtained by the radar sensor and the audio sensor are time-aligned and fused before being fed into the whole-night radar+audio model. The radar-rest-segment model is trained on the 11 radar features; the radar+audio-segment model is trained on the fusion of the 11 radar features and 23 audio features obtained simultaneously by the two sensors. The two models produce radar-rest-segment results 1, 2, ..., N and radar+audio-segment results 1, 2, ..., N respectively; the results of the corresponding periods are spliced to obtain the whole-night staging result of the feature-level fusion system.
Step 6: To obtain higher classification accuracy and the importance of the different features to the classifier, the multiple features must be selected and optimized; the ReliefF algorithm is chosen as the feature-optimization method. The weight of each of the above features is calculated with the feature-adjustment algorithm, the features are screened according to their weights, and the result then enters the classifier for decision.
Step 7: As shown in Fig. 6, the Subspace KNN classifier is used for the radar-rest-segment model and the Bagged Trees classifier for the radar+audio-segment model, yielding the sleep-staging data; the results of the two classifiers are spliced in chronological order to obtain the final sleep-staging result.
The present invention is described in further detail below with reference to an embodiment and a comparative example.
The existing sleep-test staging method comprises the following steps:
Step 1: The tester wears a portable polysomnograph (SOMNOlab 2) on the front of the body, pastes the electrodes of each channel, straps the test abdominal belt around the abdomen, and wears a pressure finger cuff on the left index finger.
Step 2: The tester sleeps.
Step 3: The tester wakes, and professional medical staff analyze the data.
These sleep-monitoring products mostly use contact monitoring: several electrodes must be pasted at multiple positions on the subject's body, and an oronasal airflow tube and abdominal straps are worn. Contact monitoring strongly affects the subject's physiological and psychological state, the test environment is complex, professional medical staff are required, and the price is relatively high.
The sleep staging method based on the multisensor feature optimization algorithm of the present invention comprises the following steps:
Step 1: Tester: one male experimenter, height 177 cm, age 23, weight 70 kg, BMI 22.34. Before the experiment the experimenter is asked to relax as much as possible, avoid vigorous exercise, and eat normally. During the experiment the tester lies flat on the test bed; the radar sensor is placed directly above the body to acquire the radar echo signal, and the audio sensor is placed facing the face to acquire the breathing and snore signals.
Step 2: The radar sensor and audio sensor are switched on to acquire whole-night data. In the whole-night data, the duration of the radar sensor data is 435 minutes and the duration of the valid audio sensor data is 341 minutes.
Step 3: The polysomnograph is connected to a computer and the sleep-staging result is exported; the signal data acquired by the radar and audio devices are saved on the computer, completing the subsequent data processing and sleep staging.
Step 4: Features are extracted from the radar echo signal collected in step 2, where the window length N_var in the variance S² of the per-minute respiration rate takes 5; the number of interpolated points N in the difference accumulation Ada_bre of the per-minute respiration-signal amplitude takes 10; the T of the parameter REM takes 30; and k_rem takes 2.
Step 5: According to the characteristics of the radar echo signal and the audio signal, the radar+audio-segment model and the radar-rest-segment model are built, the features are screened with the Relief algorithm, and the screened features are fed into the two models for classification. Even without the temporal feature, the whole-night result of the radar+audio-segment model improves markedly over that of the single radar sensor and the single audio sensor: the peak accuracy of the radar+audio-segment model is 90.8% and the average accuracy 80.5%, an average improvement of 4.07% over the radar sensor and of 4.01% over the audio sensor, showing the clear advantage of fusion.
Step 6: As shown in Fig. 7, the PSG sleep-staging result is compared with the sleep-staging prediction based on multisensor feature-level fusion to obtain the accuracy of the final staging result. The sleep periods are labelled 2, 3, 4, and 5: period 2 represents deep sleep (DEEP), 3 represents light sleep, 4 represents rapid eye movement (REM), and 5 represents wakefulness. The goodness of fit between the prediction of the feature-level fusion system and the standard PSG staging reaches 80.86%. Panel (a) shows that in the whole-night data of tester E3_3 three deep-sleep (DEEP) periods and three rapid-eye-movement (REM) periods appear within the circles, consistent with the sleep-cycle law. Panel (b) is the prediction before optimization. After Relief feature optimization the accuracy improves further, and panel (c) shows that these periods are predicted accurately.

Claims (8)

1. a kind of sleep stage method based on multisensor feature optimization algorithm, which is characterized in that include the following steps:
Step 1, tester lie low on test envelope, and radar sensor is set to right over human chest, for acquiring thunder Up to echo-signal;Audio sensor is set to the direction towards face, for acquiring breathing and sound of snoring signal;
Step 2 at the same open radar sensor and audio sensor acquisition signal, tester fall asleep, radar illumination to human body chest Chamber surface forms echo, and the sound of snoring that tester's sleep generates is recorded by audio sensor;
Step 3 is handled original echo using digital signal processing method, specially:Arc tangent is carried out to radar signal Demodulation, is extracted body movement signal, breathing and heartbeat signal, is handled original audio using noise reduction algorithm;
Step 4, the breathing obtained to step 3, body are dynamic, heartbeat extracts 23 kinds of features and extracts 11 kinds of features to audio signal, right 11 kinds of features of audio signal carry out splicing fusion, judge the data after fusion, with whether there is or not audio frequency characteristics to be divided to two moulds Type makes decisions;
Step 5 is separately added into the characteristic model that step 4 obtains on temporal aspect, construction feature grade emerging system model, including thunder Up to rest segment model and radar+audio fragment model, radar rest segment model is the independent characteristic by radar sensor According to structure, radar+audio fragment model is built by the characteristic that radar sensor and audio sensor obtain jointly;
Step 6 calculates weight shared by features described above using Character adjustment algorithm, screens feature according to weight, enters classification later Device is adjudicated;Subspace KNN graders are then used if it is radar rest segment model, then if it is radar+audio fragment model With Bagged Trees graders, sleep stage data are obtained;
Step 7: the results of the two classifiers are spliced in chronological order to obtain the final sleep-staging result.
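Step 3 of claim 1 names arctangent demodulation of the radar I/Q channels. A minimal sketch under a simulated continuous-wave radar model (carrier wavelength, sample rate, chest amplitude and noise level are illustrative assumptions, not the patent's values):

```python
# Minimal sketch of arctangent demodulation on simulated CW-radar I/Q data;
# wavelength, sample rate, chest amplitude and noise level are all assumed.
import numpy as np

fs = 100.0                                   # sample rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)
chest = 4e-3 * np.sin(2 * np.pi * 0.25 * t)  # 4 mm excursion, 15 breaths/min
wavelength = 0.125                           # 2.4 GHz carrier -> 12.5 cm

phase = 4 * np.pi * chest / wavelength       # two-way path phase modulation
i_ch = np.cos(phase) + 0.01 * np.random.randn(t.size)
q_ch = np.sin(phase) + 0.01 * np.random.randn(t.size)

# Arctangent demodulation: recover the phase, unwrap it, rescale to metres.
displacement = np.unwrap(np.arctan2(q_ch, i_ch)) * wavelength / (4 * np.pi)
print("peak-to-peak chest displacement, mm:",
      1e3 * (displacement.max() - displacement.min()))
```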
2. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that the observation area in step 1 comprises the ceiling and the bed surface, which are parallel to each other; the tester lies flat on the bed surface; the radar is mounted at the centre of the ceiling with the radar antenna facing the bed surface and aimed at the tester's chest cavity; the radar is a continuous-wave radar; the audio sensor is mounted at the centre of the ceiling with the microphone facing the tester's head.
3. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that the radar detection of the target in step 2 is specifically: the continuous-wave radar illuminates the tester's chest cavity; the pulse signal transmitted by the radar is reflected by the chest, the reflected echo signal carries the corresponding vital-sign information, and the echo signal is then received by the radar receiver.
4. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that the 11 features in step 4 are calculated per minute but extracted at intervals of T seconds, where the value range of T is 20<T<35:
(1)RPM
RPM is the respiration rate per minute;
(2)RPM_VAR
RPM_VAR is the variance of the respiration rate per minute; let r(n_bre) be the respiration rate in each minute of the whole-night signal, r̄ the average respiration rate per minute over the whole night, and N_var the number of interpolation points per minute; the calculation formula is:
S_var^2 = (1/N_var) × Σ_{n_bre=1..N_var} (r(n_bre) − r̄)^2
(3)RPM_ADA
RPM_ADA is the accumulated difference of the breath-signal amplitude per minute; A_bre(n_bre) is the breath-signal amplitude and N is the number of interpolation points per minute; the accumulated difference Ada of the breath-signal amplitude per minute is calculated as:
Ada = Σ_{n_bre=2..N} |A_bre(n_bre) − A_bre(n_bre−1)|
(4)RPM_MOVE
RPM_MOVE is the body-movement feature in the breath signal;
(5)BPM
BPM is the number of heartbeats per minute;
(6)BPM_VAR
BPM_VAR is the variance of the heartbeats per minute; b_bre(n_bre_var) is the heartbeat count, b̄ is the whole-night average of the heartbeat signal, and N is the number of interpolation points per minute; the variance S_var^2 of the heartbeats per minute is calculated as:
S_var^2 = (1/N) × Σ_{n_bre_var=1..N} (b_bre(n_bre_var) − b̄)^2
(7)BPM_ADA
BPM_ADA is the accumulated difference of the heartbeat-signal amplitude per minute, where A_ada'(n_bre_ada) is the heartbeat-signal amplitude and N is the number of interpolation points per minute:
Ada' = Σ_{n_bre_ada=2..N} |A_ada'(n_bre_ada) − A_ada'(n_bre_ada−1)|
(8)BPM_MOVE
BPM_MOVE is the body-movement feature in the heartbeat signal; a threshold is set according to the amplitude of the body-movement component of the heartbeat signal, and the points exceeding the threshold are counted to compute the body-movement feature. N interpolation points are taken per minute, where 90<N<110; if the signal amplitude at the current interpolation point exceeds the threshold and more than 50 points exceed the threshold within the T seconds before and after the current interpolation point, this interpolation point is recorded as a body-movement point of the heartbeat signal; the number of body-movement points of the heartbeat signal counted within T seconds is taken as the body-movement feature of the heartbeat signal, where T takes values 20<T<35 seconds;
(9)REM
REM is a feature derived from the relationship between the respiration rates in the first and last T seconds of a given minute: with r̄_first denoting the respiration rate in the first T seconds of minute j_rem+i_rem, r̄_last denoting the respiration rate in the last T seconds of the current minute, and k_rem a constant, REM(j) is the average, over the five minutes consisting of the current minute j and the two minutes before and after it, of the per-minute sums of the differences between the first and last T-second respiration rates, where T takes values 20<T<35 seconds;
(10)DEEP
The body-movement feature DEEP is calculated as:
DEEP(j_deep) = Σ A_move / (Σ A_bre + Σ A_move)
where A_move denotes the amplitude of the body-movement signal during sleep and A_bre denotes the amplitude of the breath signal; DEEP(j_deep) is the proportion of the body-movement signal in the sum of the total breathing and body-movement signals;
(11)SampEn
The radar echo signal X(t) is sampled to obtain the one-dimensional time series X(n), 1≤n≤k_r, which is reconstructed into phase-space vectors X*(ww) = [X(ww), X(ww+1), …, X(ww+m−1)], 1≤ww≤k_r−(m−1), where m is the embedding dimension; the distance between any two vectors X*(w1) and X*(w2) in the phase space is defined as:
d(X*(w1), X*(w2)) = max_{kk_r} |X(w1+kk_r) − X(w2+kk_r)|
where 1≤kk_r≤m−1; 1≤w1, w2≤k_r−m+1, w1≠w2.
The sample entropy is obtained as follows:
A similarity tolerance r is given, r = hh×SD, where hh is a constant in the range 0.1~0.25 and SD is the standard deviation of the radar-echo time series X(n); template matching is performed for each X*(w3) in the space:
G_w3^m(r) = num{d(X*(w3), X*(w4)) < r, w4≠w3} / (k_r−m)
where num{·} denotes the number of distances d(X*(w3), X*(w4)) smaller than r;
averaging over w3 gives δ^m(r):
δ^m(r) = (1/(k_r−m+1)) × Σ_{w3=1..k_r−m+1} G_w3^m(r)
the dimension m is increased by 1 and the above steps are repeated to obtain δ^{m+1}(r);
the sample entropy of the radar-echo time series X(n) is:
SampEn(m, r) = lim_{k_r→∞} [−ln(δ^{m+1}(r) / δ^m(r))]
since k_r is finite in practice, the sample entropy is estimated as:
SampEn(m, r, k_r) = −ln(δ^{m+1}(r) / δ^m(r)).
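A compact sample-entropy sketch following the definitions above (Chebyshev distance between embedded vectors, tolerance r = hh×SD), written for clarity rather than speed; the test signal is synthetic:

```python
# Compact sample-entropy sketch (clarity over speed); hh follows the range
# given in the claim, the test signal is synthetic.
import numpy as np

def sample_entropy(x, m=2, hh=0.2):
    x = np.asarray(x, dtype=float)
    r = hh * x.std()                              # similarity tolerance r = hh*SD

    def match_rate(dim):
        # Embed the series into `dim`-dimensional vectors.
        emb = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        n = len(emb)
        total = 0
        for i in range(n):
            d = np.max(np.abs(emb - emb[i]), axis=1)  # Chebyshev distance
            total += np.count_nonzero(d < r) - 1      # exclude the self-match
        return total / (n * (n - 1))

    return -np.log(match_rate(m + 1) / match_rate(m))

sig = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
print("SampEn:", sample_entropy(sig))
```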
5. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that the 23 features in step 4 fall into two major classes, one being sleep-respiration-related features and the other being linear and nonlinear snore features:
A. Respiration-related features, 13 in total, namely:
1) RPM: the number of respirations per minute;
2) BVP: the difference in respiration counts between the preceding and following T seconds; b_bvp(n_bvp) is the respiration count in the current T seconds, b_bvp(n_bvp−1) is the respiration count in the preceding T seconds and b_bvp(n_bvp+1) is the respiration count in the following T seconds; the difference in respiration counts is calculated as:
BVP = |b_bvp(n_bvp) − b_bvp(n_bvp−1)| + |b_bvp(n_bvp) − b_bvp(n_bvp+1)|
where T takes values 20<T<35 seconds;
3) BC: the respiratory cycle, taken as the mean time difference between two successive respiration events;
4) RMSE: the root-mean-square error, computed within each audio segment from its number of samples; x_rmse(n_rmse) is the current breath signal, x̄ is its average value and N is the number of interpolation points per minute; the calculation formula is:
RMSE = sqrt( (1/N) × Σ_{n_rmse=1..N} (x_rmse(n_rmse) − x̄)^2 )
5) RPM_VAR: the variation in respiration count per minute; x_former is the respiration count in the preceding T seconds and x_latter is the respiration count in the following T seconds, where 20<T<35; the calculation formula is:
RPM_VAR = x_former − x_latter
6) EDA: the accumulated energy difference; according to the number of samples in each audio segment, the sum of the energy-accumulation differences of adjacent segments is computed, where X1(n) and X2(n) are adjacent segments and N is the number of samples in an audio segment;
7) Cross_zr: the zero-crossing rate, i.e. the number of zero crossings of the respiration signal per minute;
8)-10) Formant1, Formant2, Formant3: the first three formants; a formant is a region of the spectrogram of a speech signal where the energy is concentrated; taking the audio information of the higher-energy parts helps capture the signature of the segment and aids the discrimination of sleep stages;
11)-13) Formant_var1, Formant_var2, Formant_var3: the variances of the three formants in 8)-10), respectively;
B. Snore-related linear and nonlinear features
1) LLE: the largest Lyapunov exponent; the Lyapunov exponent gives the average rate at which a dynamical system diverges or converges along its principal axes in phase space;
According to the obtained time delay τ and embedding dimension M, the phase space W is reconstructed; for each point Wj its nearest-neighbour point Wj' is found and the distance dj(0) = |Wj − Wj'| is calculated;
for each point Wj, the distance to its nearest-neighbour point Wj' after evolving i steps is calculated:
dj(i) = |W(j+i) − W'(j+i)| = dj(0) × e^(λ×i)
The largest Lyapunov exponent is then obtained from:
λ1 = (1/(i×Δt)) × (1/(M−i)) × Σ_{j=1..M−i} ln( dj(i) / dj(0) )
2) Time delay τ: a common method for computing the delay parameter is the autocorrelation method; given a one-dimensional time series, its autocorrelation function is computed and plotted against time, and the time at which the function value first drops to (1 − 1/e) of its initial value is the time delay τ, where e is the natural constant;
3) Embedding dimension M: the embedding dimension is one of the two reconstruction parameters of phase-space reconstruction and is used when computing the Lyapunov exponent in the reconstructed phase space; using the time delay τ, the one-dimensional snore-event time series is embedded into an M-dimensional space;
4) ApEn: the approximate entropy, a parameter that quantifies the complexity of a sequence statistically;
the one-dimensional snore-event time series x(n) = (x1, x2, x3, …, xi, …, xk) is formed, in order, into v-dimensional vectors Vi = [x(i), x(i+1), …, x(i+v−1)], i = 1, 2, …, k−v+1, where k is the length of the snore-event time series x(n); for each value of i the distance between vector Vi and every other vector Vj is calculated:
d_ij = max_l |x(i+l) − x(j+l)|, l = 0, 1, …, v−1
Given the threshold r = a3×SD, where a3 is a constant in the range 0.1~0.25 and SD is the standard deviation of the snore-event time series x(n), the number of d_ij below the threshold r is recorded for each i and its ratio to the total number of v-dimensional vectors (k−v+1) is computed, denoted C_i^v(r); the logarithm of C_i^v(r) is taken and averaged, denoted φ^v(r):
φ^v(r) = (1/(k−v+1)) × Σ_{i=1..k−v+1} ln C_i^v(r)
The approximate entropy of x(n) is:
ApEn = φ^v(r) − φ^{v+1}(r)
5) N: the number of cases in which the embedding dimension satisfies M ≥ 4;
6) D: the dimension of the time series;
7)-8) Alpha 1 (α1) and Alpha 2 (α2) of detrended fluctuation analysis: for a snore-event time series Y2(n) of length k, 1≤n≤k, the accumulated deviation is computed:
y2(n) = Σ_{i=1..n} (Y2(i) − ȳ)
where ȳ is the mean of the time series:
ȳ = (1/k) × Σ_{n=1..k} Y2(n)
y2(n) is divided into n2 non-overlapping sections of length l, where l is the time scale and n2 is the number of sections;
the local trend y'2(n) of each section of the snore-event time series is fitted by least squares;
the local trend within each section of y2(n) is removed, and the root mean square of the new sequence is calculated:
F(n) = sqrt( (1/k) × Σ_{n=1..k} (y2(n) − y'2(n))^2 )
the window length l is varied and the above steps are repeated; the fluctuation then follows a power law F(n) ∝ n^α; plotting log[F(n)] on the ordinate against log(n) on the abscissa gives a curve whose slope is the scaling exponent α of the time series;
the scaling exponent α obtained with window length l1 is Alpha 1 (α1) of the detrended fluctuation analysis;
the scaling exponent α obtained with window length l2 is Alpha 2 (α2) of the detrended fluctuation analysis (a DFA sketch follows this claim);
9) Sample entropy SampEn: the snore-event time series Y2(n), 1≤n≤k, is reconstructed into m-dimensional phase-space vectors Y*(i2), 1≤i2≤k−(m−1); the distance between any two vectors Y*(i3) and Y*(i4) in the phase space is defined as:
d*(i3, i4) = max_kk |Y2(i3+kk) − Y2(i4+kk)|, 0≤kk≤m−1, i3≠i4
The sample entropy is obtained as follows:
A similarity tolerance r1 is given and template matching is performed for each Y*(i3) in the space:
B_i3^m(r1) = N_d_i3i4 / (k−m)
where N_d_i3i4 denotes the number of distances d*(i3, i4) < r1;
averaging over i2 gives B^m(r1):
B^m(r1) = (1/(k−m+1)) × Σ_{i2=1..k−m+1} B_i2^m(r1)
the dimension m is increased by 1 and the above steps are repeated to obtain B^{m+1}(r1);
the sample entropy of the snore-event time series is:
SampEn(m, r1) = lim_{k→∞} [−ln(B^{m+1}(r1) / B^m(r1))]
since k is finite in practice, the sample entropy is estimated as:
SampEn(m, r1, k) = −ln(B^{m+1}(r1) / B^m(r1));
10) Shannon entropy H: the information entropy, defined on the occurrence probabilities of discrete random events; let each snore-event time series be Y = {y1, y2, …, yn} with corresponding probabilities p(Y = yi); the information entropy of the random variable is then:
H(Y) = −Σ_i p(Y = yi) × log p(Y = yi)
Temporal feature a_t: let {a1, a2, …, aN} be the N serial numbers of the radar or audio feature groups arranged in chronological order, each corresponding to the sleep period in which it lies; then a_t = a/a_max is the temporal feature for sleep staging.
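The detrended fluctuation analysis behind Alpha 1/Alpha 2 (items 7-8 above) can be sketched as follows; the short- and long-scale window bands are illustrative assumptions, and white noise should give α ≈ 0.5:

```python
# DFA sketch: integrate, detrend per window, read alpha off the log-log slope.
import numpy as np

def dfa_alpha(x, scales):
    y = np.cumsum(x - np.mean(x))              # accumulated deviation y2(n)
    f = []
    for l in scales:
        n_seg = len(y) // l
        sq = []
        for s in range(n_seg):
            seg = y[s * l:(s + 1) * l]
            t = np.arange(l)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local LS trend y'2(n)
            sq.append(np.mean((seg - trend) ** 2))
        f.append(np.sqrt(np.mean(sq)))         # F(l)
    # Slope of log F(l) versus log l is the scaling exponent alpha.
    return np.polyfit(np.log(scales), np.log(f), 1)[0]

x = np.random.randn(4000)                      # white noise: alpha ~ 0.5
print("alpha1 (short scales):", dfa_alpha(x, np.arange(4, 17)))
print("alpha2 (long scales):", dfa_alpha(x, np.arange(16, 65)))
```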
6. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that step 5 constructs the feature-level fusion system model, specifically:
Step 5-1: the two models are built separately:
(1) Radar-only segment model: built from the feature data of the radar sensor alone; the radar feature data are split according to the periods corresponding to the audio feature data into radar-only segments {t1, t2}, {t2, t3}, …, {tn, tn+1} and radar+audio segments {t1, t2}, {t2, t3}, …, {tn, tn+1}; the radar-only segments {t1, t2}, {t2, t3}, …, {tn, tn+1} train the radar-only segment model;
(2) Radar+audio segment model: built jointly from the feature data obtained by the radar sensor and the audio sensor; segments whose feature data contain both radar and audio features are fused and called radar+audio segments; the radar+audio segments {t1, t2}, {t2, t3}, …, {tn, tn+1} train the radar+audio segment model;
Step 5-2: the feature-level fusion system model, the whole-night radar+audio model, is constructed as the overall model jointly composed of the two models in (1) and (2). The features obtained by the radar sensor and the audio sensor are time-aligned and, after feature fusion, fed into the whole-night radar+audio model: the radar-only segment model is trained on the 11 radar features, while the radar+audio segment model is trained on the fusion of the 11 radar features and the 23 audio features obtained by the two sensors. The two models are used to obtain their respective sleep-staging predictions, and the radar-only segments {t1, t2}, {t2, t3}, …, {tn, tn+1} and the radar+audio segments {t1, t2}, {t2, t3}, …, {tn, tn+1} are spliced over the corresponding periods to obtain the whole-night training data set of the feature-level fusion system.
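A schematic of the segment split in claim 6, with invented arrays standing in for the per-minute feature data (the temporal feature of claim 5 would be appended afterwards to give the 35- and 12-dimensional vectors of claim 8):

```python
# Schematic of the segment split; the arrays are invented stand-ins for the
# per-minute feature matrices of an eight-hour night.
import numpy as np

minutes = 480
radar_feats = np.random.randn(minutes, 11)    # 11 radar features per minute
audio_feats = np.random.randn(minutes, 23)    # 23 audio features per minute
has_audio = np.random.rand(minutes) < 0.3     # snore present in ~30% of minutes

# Radar+audio segments fuse both sensors; the rest stay radar-only.
X_radar_audio = np.hstack([radar_feats[has_audio], audio_feats[has_audio]])
X_radar_only = radar_feats[~has_audio]
print(X_radar_audio.shape, X_radar_only.shape)   # (n1, 34) and (n2, 11)
```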
7. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that step 6 calculates the weights of the above features with the feature-weighting algorithm and screens the features by weight, specifically:
Step 6-1: a data segment containing the 11 radar features and the 23 audio features is randomly drawn from the training sample set D and taken as sample data R; with m the number of samplings, the k nearest-neighbour samples Hj (j = 1, 2, …, k) of the sample data R are then found in the set of samples belonging to the same sleep-stage class as R; the training sample set D contains the 11 radar features and the 23 audio features;
Step 6-2: k nearest-neighbour samples Mj(C) are found in each set of samples whose sleep-stage class differs from that of the sample data R, where C denotes a class other than the sleep-stage class of sample R;
Step 6-3: the weight of each feature is updated; the specific formula is:
W(A) = W(A) − Σ_{j=1..k} diff(A, R, Hj) / (m×k) + Σ_{C≠class(R)} [ p(C) / (1 − p(class(R))) × Σ_{j=1..k} diff(A, R, Mj(C)) ] / (m×k)
In the above formula diff(A, R1, R2) denotes the difference between samples R1 and R2 on feature A, and Mj(C) denotes the j-th nearest-neighbour sample in class C; diff(A, R1, R2) is calculated as:
diff(A, R1, R2) = |R1[A] − R2[A]| / (max(A) − min(A))
where A denotes any of the features in the sample set D, which contains the 11 radar features and the 23 audio features;
Step 6-4: the features are judged: features whose ReliefF weight is negative play a negative role in sensor fusion and are deleted; features whose ReliefF weight is positive are retained for the sleep classification of the sensor fusion.
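A bare-bones ReliefF pass following the update rule of step 6-3 (numeric features, diff normalised by each feature's range; the sampling count m and neighbour count k are kept small for clarity, and the data is synthetic):

```python
# Bare-bones ReliefF weight pass following step 6-3 (numeric features only).
import numpy as np

def relieff_weights(X, y, m=50, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    diff = lambda a, b: np.abs(a - b) / span       # diff(A, R1, R2), range-normalised
    priors = {c: np.mean(y == c) for c in np.unique(y)}
    w = np.zeros(d)
    for _ in range(m):                             # m random samplings of R
        i = rng.integers(n)
        R, c = X[i], y[i]
        for cls, p in priors.items():
            idx = np.where(y == cls)[0]
            idx = idx[idx != i]
            near = idx[np.argsort(np.abs(X[idx] - R).sum(axis=1))[:k]]
            contrib = diff(X[near], R).sum(axis=0) / (m * k)
            if cls == c:
                w -= contrib                        # nearest hits Hj pull weight down
            else:
                w += p / (1 - priors[c]) * contrib  # nearest misses Mj(C) push it up
    return w

X = np.random.randn(200, 6)
y = (X[:, 0] > 0).astype(int)                       # only feature 0 is informative
print("positive-weight features kept:", np.where(relieff_weights(X, y) > 0)[0])
```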
8. The sleep staging method based on the multi-sensor feature optimization algorithm according to claim 1, characterized in that step 7 splices the results of the two classifiers in chronological order, specifically:
Step 7-1: the models obtained in step 5 are further processed with the feature-screening result obtained in step 6: the features whose weight calculated in step 6 is negative are removed and the features whose weight is positive are retained; if the features obtained by the radar and audio sensors include audio features, step 7-2 is executed, otherwise step 7-3;
Step 7-2: the 35 radar and audio features (1 temporal feature + 11 radar features + 23 audio features) are fed into the radar+audio segment model, and the Bagged Trees classifier yields the sleep-staging result;
Step 7-3: the 12 features containing only radar features (1 temporal feature + 11 radar features) are fed into the radar-only segment model, and the Subspace KNN classifier yields the sleep-staging result;
Step 7-4: the results of the two classifiers are spliced in chronological order, finally yielding the predicted label corresponding to each packet of feature data.
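The routing and chronological splice of claim 8 reduce to a few lines; in this sketch Dummy stands in for the trained Bagged Trees / Subspace KNN models, and the 35- vs 12-feature test decides which model a minute is sent to:

```python
# Sketch of claim 8's routing and chronological splice; Dummy stands in for
# the trained Bagged Trees / Subspace KNN models.
import numpy as np

class Dummy:
    def __init__(self, label):
        self.label = label
    def predict(self, X):
        return [self.label] * len(X)

def stage_night(segments, model_radar_audio, model_radar_only):
    """segments: list of (minute_index, feature_vector) with 35 or 12 features."""
    staged = []
    for t, f in segments:
        model = model_radar_audio if f.size == 35 else model_radar_only
        staged.append((t, model.predict(f.reshape(1, -1))[0]))
    staged.sort(key=lambda p: p[0])      # splice the results chronologically
    return [stage for _, stage in staged]

segs = [(3, np.zeros(12)), (1, np.zeros(35)), (2, np.zeros(12))]
print(stage_night(segs, Dummy("REM"), Dummy("DEEP")))  # ['REM', 'DEEP', 'DEEP']
```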
CN201810125662.XA 2018-02-08 2018-02-08 Sleep staging method based on multi-sensor feature optimization algorithm Active CN108388912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810125662.XA CN108388912B (en) 2018-02-08 2018-02-08 Sleep staging method based on multi-sensor feature optimization algorithm


Publications (2)

Publication Number Publication Date
CN108388912A true CN108388912A (en) 2018-08-10
CN108388912B CN108388912B (en) 2021-12-10

Family

ID=63075387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810125662.XA Active CN108388912B (en) 2018-02-08 2018-02-08 Sleep staging method based on multi-sensor feature optimization algorithm

Country Status (1)

Country Link
CN (1) CN108388912B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101128150A (en) * 2004-07-23 2008-02-20 以康源有限公司 Apparatus and method for breathing pattern determination using a non-contact microphone
CN102065753A (en) * 2008-04-14 2011-05-18 伊塔马医疗有限公司 Non-invasive method and apparatus for determining light- sleep and deep-sleep stages
US20150313531A1 (en) * 2009-04-22 2015-11-05 Joe Paul Tupin, Jr. Fetal monitoring device and methods
CN102665535A (en) * 2009-09-30 2012-09-12 健康监测有限公司 Continuous non-interfering health monitoring and alert system
CN105105718A (en) * 2015-05-19 2015-12-02 上海兆观信息科技有限公司 Detection method of non-contact sleep stage and sleep breathing disorder
CN112353401A (en) * 2020-10-19 2021-02-12 燕山大学 Staged regulation and control method based on physiological state evaluation

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597058A (en) * 2018-12-21 2019-04-09 上海科勒电子科技有限公司 Method for microwave measurement, electronic equipment and the storage medium of induction tap
CN109597058B (en) * 2018-12-21 2021-06-22 上海科勒电子科技有限公司 Microwave measuring method for induction tap, electronic equipment and storage medium
CN109480787A (en) * 2018-12-29 2019-03-19 中国科学院合肥物质科学研究院 A kind of contactless sleep monitor equipment and sleep stage method based on ULTRA-WIDEBAND RADAR
CN109480787B (en) * 2018-12-29 2021-06-25 中国科学院合肥物质科学研究院 Non-contact sleep monitoring equipment based on ultra-wideband radar and sleep staging method
CN110051347A (en) * 2019-03-15 2019-07-26 华为技术有限公司 A kind of user's sleep detection method and system
CN110081923B (en) * 2019-05-16 2021-03-02 中国人民解放军战略支援部队信息工程大学 Fault detection method and device for automatic acquisition system of field baseline environmental parameters
CN110081923A (en) * 2019-05-16 2019-08-02 中国人民解放军战略支援部队信息工程大学 Field baseline environmental parameter automated collection systems fault detection method and device
CN112168139A (en) * 2019-07-05 2021-01-05 腾讯科技(深圳)有限公司 Health monitoring method and device and storage medium
CN112168139B (en) * 2019-07-05 2022-09-30 腾讯科技(深圳)有限公司 Health monitoring method, device and storage medium
CN110693454A (en) * 2019-08-23 2020-01-17 深圳大学 Sleep characteristic event detection method and device based on radar and storage medium
CN110693454B (en) * 2019-08-23 2023-04-25 深圳大学 Sleep characteristic event detection method and device based on radar and storage medium
CN111227793A (en) * 2020-01-10 2020-06-05 京东方科技集团股份有限公司 Apnea recognition method and system, electronic equipment and storage medium
CN111227793B (en) * 2020-01-10 2022-11-01 京东方科技集团股份有限公司 Apnea recognition method and system, electronic equipment and storage medium
CN111461201A (en) * 2020-03-30 2020-07-28 重庆大学 Sensor data classification method based on phase space reconstruction
CN111461201B (en) * 2020-03-30 2023-09-19 重庆大学 Sensor data classification method based on phase space reconstruction
WO2021196872A1 (en) * 2020-03-31 2021-10-07 京东方科技集团股份有限公司 Measurement method and apparatus for periodic information of biological signal, and electronic device
CN113842111A (en) * 2020-06-28 2021-12-28 珠海格力电器股份有限公司 Sleep staging method and device, computing equipment and storage medium
CN112450881A (en) * 2020-11-12 2021-03-09 武汉大学 Multi-modal sleep staging method based on time sequence relevance driving
CN112450881B (en) * 2020-11-12 2021-11-02 武汉大学 Multi-modal sleep staging method based on time sequence relevance driving
CN112914589A (en) * 2021-03-02 2021-06-08 钦州市第二人民医院 Multi-sleep-guidance monitoring wireless net cap device and monitoring method
CN113080966A (en) * 2021-03-22 2021-07-09 华南师范大学 Automatic depression detection method based on sleep stages
CN113180596B (en) * 2021-04-07 2024-02-06 中山大学 Non-contact sleep analysis method, device and storage medium
CN113180596A (en) * 2021-04-07 2021-07-30 中山大学 Non-contact sleep analysis method and device and storage medium
CN113384264A (en) * 2021-06-11 2021-09-14 森思泰克河北科技有限公司 Radar-based respiratory frequency detection method and sleep monitoring equipment
CN113729641A (en) * 2021-10-12 2021-12-03 南京润楠医疗电子研究院有限公司 Non-contact sleep staging system based on conditional countermeasure network
CN114098645A (en) * 2021-11-25 2022-03-01 青岛海信日立空调系统有限公司 Sleep staging method and device
CN114098645B (en) * 2021-11-25 2023-11-07 青岛海信日立空调系统有限公司 Sleep staging method and device
CN114081450A (en) * 2021-12-03 2022-02-25 中山大学·深圳 Sleep apnea detection method and device based on difference visible diagram
CN114081450B (en) * 2021-12-03 2024-03-22 中山大学·深圳 Sleep apnea detection method and device based on difference visible graph
CN114391807B (en) * 2021-12-17 2023-12-19 珠海脉动时代健康科技有限公司 Sleep breathing disorder analysis method, device, equipment and readable medium
CN114391807A (en) * 2021-12-17 2022-04-26 珠海脉动时代健康科技有限公司 Sleep breathing disorder analysis method, device, equipment and readable medium
CN114224320A (en) * 2021-12-31 2022-03-25 深圳融昕医疗科技有限公司 Snore detection method, equipment and system for self-adaptive multi-channel signal fusion
CN115581435A (en) * 2022-08-30 2023-01-10 湖南万脉医疗科技有限公司 Sleep monitoring method and device based on multiple sensors
CN116269298A (en) * 2023-02-21 2023-06-23 郑州大学 Non-contact sleep respiration monitoring method and system based on millimeter wave radar
CN116269298B (en) * 2023-02-21 2023-11-10 郑州大学 Non-contact sleep respiration monitoring method and system based on millimeter wave radar
CN116509336A (en) * 2023-06-27 2023-08-01 安徽星辰智跃科技有限责任公司 Sleep periodicity detection and adjustment method, system and device based on waveform analysis
CN117530666A (en) * 2024-01-03 2024-02-09 北京清雷科技有限公司 Breathing abnormality recognition model training method, breathing abnormality recognition method and equipment
CN117530666B (en) * 2024-01-03 2024-04-05 北京清雷科技有限公司 Breathing abnormality recognition model training method, breathing abnormality recognition method and equipment

Also Published As

Publication number Publication date
CN108388912B (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN108388912A (en) Sleep stage method based on multisensor feature optimization algorithm
CN108416367B (en) Sleep staging method based on multi-sensor data decision-level fusion
Long et al. Sleep and wake classification with actigraphy and respiratory effort using dynamic warping
CN108670200A (en) A kind of sleep sound of snoring classification and Detection method and system based on deep learning
Bhattacharjee et al. Sleep apnea detection based on rician modeling of feature variation in multiband EEG signal
JP2015042267A (en) Sleep/wakefulness state evaluation method and system
CN104545818A (en) Sleep apnea syndrome detection method based on pulse and blood oxygen signals
CN109328034A (en) For determining the determination system and method for the sleep stage of object
CN110664390A (en) Heart rate monitoring system and method based on wrist strap type PPG and deep learning
WO2018011801A1 (en) Estimation of sleep quality parameters from whole night audio analysis
CN103153183A (en) Apparatus and method for diagnosing obstructive sleep apnea
Wang et al. Eating detection and chews counting through sensing mastication muscle contraction
CN107530015B (en) Vital sign analysis method and system
CN114391807B (en) Sleep breathing disorder analysis method, device, equipment and readable medium
Camcı et al. Sleep apnea detection via smart phones
WO2021208656A1 (en) Sleep risk prediction method and apparatus, and terminal device
WO2018158219A1 (en) Methods and devices using meta-features extracted from accelerometry signals for swallowing impairment detection
Hussain et al. Food intake detection and classification using a necklace-type piezoelectric wearable sensor system
WO2013086615A1 (en) Device and method for detecting congenital dysphagia
CN106419884B (en) A kind of rate calculation method and system based on wavelet analysis
Sofwan et al. Normal and Murmur Heart Sound Classification Using Linear Predictive Coding and k-Nearest Neighbor Methods
WO2022269936A1 (en) Sleeping state estimation system
CN115474901A (en) Non-contact living state monitoring method and system based on wireless radio frequency signals
Bonizzi et al. Sleep apnea detection directly from unprocessed ECG through singular spectrum decomposition
Lu et al. Pulse waveform analysis for pregnancy diagnosis based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20180810

Assignee: Nanjing Hongding perception Technology Co.,Ltd.

Assignor: NANJING University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2022980001965

Denomination of invention: Sleep staging method based on multi-sensor feature optimization algorithm

Granted publication date: 20211210

License type: Exclusive License

Record date: 20220228

EE01 Entry into force of recordation of patent licensing contract