CN113729640B - Wearable swallowing behavior identification method and system - Google Patents


Info

Publication number
CN113729640B
Authority
CN
China
Prior art keywords
swallowing
data
signal
ppg
inertial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111179760.XA
Other languages
Chinese (zh)
Other versions
CN113729640A (en)
Inventor
张颖
潘赟
李俊捷
朱怀宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202111179760.XA
Publication of CN113729640A
Application granted
Publication of CN113729640B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/4205: Evaluating swallowing
    • A61B 5/6802: Sensor mounted on worn items
    • A61B 5/7203: Signal processing specially adapted for physiological signals, for noise prevention, reduction or removal
    • A61B 5/7225: Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B 5/725: Details of waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device
    • G: PHYSICS
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06F 2218/12: Classification; Matching

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Endocrinology (AREA)
  • Power Engineering (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A wearable swallowing behavior identification method comprises the following steps: step (1), train a swallowing behavior recognition model c_l1 for a given position l_1; step (2), use a wearable device to collect the swallowing inertial signal and the two-way photoplethysmography (PPG) signal at the designated position on the throat; step (3), preprocess the inertial data I and the two-way PPG data P to obtain a new inertial signal I_p and the PPG motion component p_motion; step (4), divide the preprocessed inertial data I_p and PPG motion component p_motion into fine-grained data and extract features; step (5), input the extracted fine-grained features into the swallowing behavior recognition model to obtain the recognition result; step (6), collect swallowing data at different positions and repeat the above steps to construct swallowing recognition models for the different positions; step (7), collect swallowing data at different positions and perform swallowing recognition on all of it with the recognition model of a designated position. A system implementing the wearable swallowing behavior recognition method is also provided. The invention can be used for daily monitoring and has high accuracy.

Description

Wearable swallowing behavior identification method and system
Technical Field
The invention relates to the technical field of swallowing behavior identification, in particular to a swallowing behavior identification method and system based on wearable swallowing monitoring equipment.
Background
Swallowing is a complex physiological activity ubiquitous in daily life. It requires the coordination of many nerves and muscles and is a necessary condition for food intake and nutrition supply; common diseases such as stroke and Parkinson's disease can cause swallowing disorders, which may in turn lead to serious consequences such as aspiration and lung infection. However, traditional clinical monitoring methods such as the videofluoroscopic swallowing study (VFSS) and the videoendoscopic swallowing study (VESS) may expose the subject to radiation or cause discomfort, and the testing environment is limited to hospitals. Mobile and even wearable swallowing monitoring is therefore essential, and swallowing behavior recognition, as a fundamental step of mobile swallowing studies, is correspondingly important.
Current wearable swallowing behavior recognition research mainly comprises the following links: sensing of swallowing signals, preprocessing of the swallowing signals, feature extraction, and judging whether the signals contain swallowing behavior. For swallowing signal sensing, electromyographic, image, sound, respiratory airflow, piezoelectric, acceleration and other signals are mainly used at present. Common preprocessing methods include band-pass filtering and wavelet threshold denoising; many studies extract features from the time domain, frequency domain, time-frequency domain and information domain, and then train models such as linear discriminant analysis and support vector machines to perform binary classification of whether the signals contain swallowing behavior.
The existing research has some problems. An important one is that many studies collect only a single kind of swallowing signal, which limits information richness. To address this, some studies increase the number of measurement channels of that signal, for example by measuring high-density surface electromyography, but such measurement is cumbersome and uncomfortable. Acquiring different kinds of swallowing signals can, to a certain degree, reconcile the trade-off between information richness and measurement complexity.
Therefore, there is a need to construct a wearable swallowing behavior identification method and system whose data acquisition is simple and which can be used for daily monitoring.
Disclosure of Invention
To overcome the drawbacks of existing swallowing behavior recognition systems, namely a complicated testing process, a poor test experience for the subject, unsuitability for daily monitoring, and low accuracy, the invention provides a wearable swallowing behavior recognition method and system that can be used for daily monitoring and has high accuracy.
The technical scheme adopted by the invention for solving the technical problem is as follows:
a wearable swallowing behavior recognition method, comprising the steps of:
step (1), train a certain location l 1 Swallowing behavior recognition model c l1
Step (2), a wearable swallowing signal acquisition device is utilized to be positioned at a designated position l 1 Collecting laryngeal swallowing inertia informationNumber I t And two-way photoelectric volume pulse wave PPG signal
Figure BDA0003296741940000021
Wherein m is the number of the inertia signal types, k is the dimension of each inertia signal, and the single acquisition point number N of each path of signal d = f.t, where f is the sampling frequency and t is the single data acquisition duration;
step (3) of respectively comparing the collected inertial data I t And two-way PPG data P t Preprocessing the signal to obtain a new inertial signal
Figure BDA0003296741940000022
And PPG motion component
Figure BDA0003296741940000023
Step (4), preprocessed inertial data are processed
Figure BDA0003296741940000024
And PPG motion component
Figure BDA0003296741940000025
Partitioning into fine-grained data
Figure BDA0003296741940000026
And
Figure BDA0003296741940000027
where m is the number of inertial signal classes, r is the number of fine-grained short samples sliced per long sample, and each term
Figure BDA0003296741940000028
Are data that contain N long samples;
step (5), according to the specified feature set F C Extracting fine-grained features of all long-segment data
Figure BDA0003296741940000031
Wherein the first m columns are corresponding characteristics of the inertial signal, and the last column is PPG motion component corresponding characteristics;
step (6), extracting fine-grained characteristics
Figure BDA0003296741940000032
Input swallowing behavior recognition model c l1 Performing swallowing behavior recognition, and obtaining a fine-grained recognition result
Figure BDA0003296741940000033
Splicing to obtain the identification result of the complete original data
Figure BDA0003296741940000034
Where N is the total number of long samples and r is the number of fine-grained short samples sliced per long sample.
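As an illustration of steps (4) and (6), the slicing of each long sample into r fine-grained short samples, and the splicing of the per-short-sample recognition results back into one result row per long sample, can be sketched as follows (a minimal sketch with toy data; the array shapes and helper names are assumptions, not the patent's implementation):

```python
import numpy as np

def slice_fine_grained(long_samples, r):
    """Split each long sample (one row) into r equal fine-grained short
    samples, as in step (4). The sample length must divide evenly by r."""
    n, length = long_samples.shape
    assert length % r == 0
    return long_samples.reshape(n * r, length // r)

def splice_results(fine_labels, r):
    """Recombine per-short-sample recognition results into one row per
    long sample, as in step (6)."""
    return fine_labels.reshape(-1, r)

# toy data: N = 2 long samples of 6 points each, sliced into r = 3 short samples
x = np.arange(12).reshape(2, 6)
short = slice_fine_grained(x, r=3)     # shape (6, 2): 2*3 short samples
labels = np.array([0, 1, 1, 0, 0, 1])  # one predicted label per short sample
y = splice_results(labels, r=3)        # shape (2, 3): results per long sample
```

The reshape-based slicing assumes each long sample divides evenly into r short samples, which matches the fixed slicing count r described above.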
Further, step (1) comprises:
step (1-1), use a wearable swallowing signal acquisition device at the designated position l_1 to collect laryngeal swallowing data X_l1 = {I, P}, the swallowing data comprising an inertial signal I and a two-way photoplethysmography (PPG) signal P = [p_1, p_2], where m is the number of inertial signal types, k is the dimension of each inertial signal, and the number of points in a single acquisition of each signal channel is N_d = f·t, with f the sampling frequency and t the duration of a single data acquisition;
step (1-2), preprocess the inertial data I and the two-way PPG data P in the swallowing data X_l1 separately to obtain a new inertial signal I_p and the PPG motion component p_motion;
step (1-3), divide the preprocessed inertial data I_p and PPG motion component p_motion into fine-grained data I_s and P_motion_s = [(p_motion_s)_1; (p_motion_s)_2; …; (p_motion_s)_r], where m is the number of inertial signal types, r is the number of fine-grained short samples sliced from each long sample, and each entry (i_s)_uv = [i_sp1, i_sp2, …, i_spM] and (p_motion_s)_u = [p_motion_sp1, p_motion_sp2, …, p_motion_spM] contains data for M long samples;
step (1-4), determine sample labels B_l1 for the swallowing data X_l1, where r is the number of fine-grained samples contained in each long sample and the i-th fine-grained label vector covers the M long samples;
step (1-5), extract the fine-grained features of all long-segment data to obtain the feature set F_A, where M is the total number of long samples and H is the total number of features extracted from each fine-grained sample; together with the sample labels B_l1 this forms the training set s_1 = {F_A, B_l1};
step (1-6), input the training set s_1 = {F_A, B_l1} into the classification model to perform swallowing behavior recognition, and splice the fine-grained recognition results into the recognition result Y = [y_1 y_2 … y_M] of the complete original data, where M is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample; compare the recognition result with the actual labels B_l1 in s_1 and update the model hyper-parameters to obtain the swallowing behavior recognition model c_l1; then select the features used by the recognition model to form the swallowing feature set F_C for this data acquisition position, where Q is the total number of selected features for each fine-grained sample, Q ≤ H, and the classification model is a random forest.
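A minimal sketch of the training loop of step (1-6), fitting a random forest, updating hyper-parameters by comparing predictions against the actual labels, and selecting a reduced feature set F_C, using scikit-learn on synthetic stand-in features (the feature matrix, label rule, parameter grid, and Q = 4 are all illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# stand-in for the fine-grained feature matrix F_A (rows: short samples)
# and binary swallow / non-swallow labels B_l1
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# hyper-parameter update by comparing predictions with the actual labels,
# here via cross-validated grid search (the grid itself is an assumption)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,
)
search.fit(X, y)
model = search.best_estimator_

# feature selection for the reduced swallow feature set F_C (Q <= H):
# keep the top-Q features by impurity importance (Q = 4 here)
selected = np.argsort(model.feature_importances_)[::-1][:4]
```

In the patent one such model c_l is trained per measurement position, each with its own selected feature set.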
Further, step (3) comprises:
step (3-1), remove the noise of the inertial signals in the non-swallowing frequency band to obtain denoised inertial signals, each of which contains N long samples; the noise-reduction method is band-pass filtering; then take the modulus of each inertial signal to obtain the preprocessed inertial signal I_p;
step (3-2), remove the baseline drift of the two-way PPG signal and denoise the PPG signal with band-pass filtering, where the pass band W is related to the swallowing frequency of interest; the normal swallowing frequency is about 1 Hz, so the band-pass range is set near this frequency, yielding the filtered two-way PPG signal P_d = [(p_d)_1, (p_d)_2];
step (3-3), "separate" the denoised two-way PPG signal P_d to obtain P_S: first perform blind source separation on the two PPG channels, obtaining one signal in which the motion component has a larger share and one in which the heart-rate component has a larger share; the blind source separation method is independent component analysis;
step (3-4), perform "difference amplification" on the two-way PPG signal P_S: the two signals are multiplied element-wise in the time domain, which reduces the fluctuation of non-swallowing signal segments and increases the contrast of the swallowing segments, yielding the PPG motion component p_motion.
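The band-pass denoising of steps (3-1) and (3-2) can be sketched as below for one inertial signal type; the 0.5 to 5 Hz pass band around the roughly 1 Hz swallowing frequency and the filter order are assumptions, since the patent states only that the non-swallowing band is removed:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_inertial(sig, fs, band=(0.5, 5.0)):
    """Band-pass filter each axis of one inertial signal type, then take the
    per-sample modulus across its k axes (step 3-1). `band` is an assumed
    swallow band around the ~1 Hz swallowing frequency."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    denoised = filtfilt(b, a, sig, axis=0)   # remove non-swallow-band noise
    return np.linalg.norm(denoised, axis=1)  # modulus over the k axes

fs = 250.0  # inertial sampling rate given in the hardware section
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic 3-axis signal: 1 Hz swallow-band motion plus 40 Hz interference
axis = np.sin(2 * np.pi * 1.0 * t) + 0.2 * np.sin(2 * np.pi * 40.0 * t)
sig = np.stack([axis, axis, axis], axis=1)
i_p = preprocess_inertial(sig, fs)
```

The same `butter`/`filtfilt` pattern, with the pass band centered near 1 Hz, would apply to the PPG denoising of step (3-2).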
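Steps (3-3) and (3-4), blind source separation by independent component analysis followed by "difference amplification", can be sketched on a synthetic two-channel mixture; the source waveforms, mixing weights, and the use of scikit-learn's FastICA are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 500.0  # PPG sampling rate given in the hardware section
t = np.arange(0, 4.0, 1.0 / fs)
heart = np.sin(2 * np.pi * 1.2 * t)             # ~72 bpm heart-rate source
motion = np.sign(np.sin(2 * np.pi * 0.25 * t))  # slow swallow-like motion source

# two PPG channels as different mixtures of the same two sources (step 3-3 input)
X = np.stack([0.8 * heart + 0.4 * motion,
              0.3 * heart + 0.9 * motion], axis=1)

# step (3-3): blind source separation by independent component analysis
ica = FastICA(n_components=2, random_state=0)
P_S = ica.fit_transform(X)  # one estimated component per column

# step (3-4): "difference amplification" -- element-wise (time-domain)
# product of the two separated signals; quiet non-swallow stretches, where
# both values are small, shrink toward zero, enlarging segment contrast
p_motion = P_S[:, 0] * P_S[:, 1]
```

Note the patent does not specify how the motion-dominant and heart-rate-dominant outputs are told apart; correlating each estimated component against the raw channels would be one possible heuristic.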
Further, in step (1-2), the preprocessing comprises denoising the inertial signals and taking the modulus of each type of inertial signal to obtain the preprocessed inertial signal I_p; filtering the two-way PPG signal to obtain the filtered two-way PPG signal P_d = [(p_d)_1, (p_d)_2]; "separating" the denoised two-way PPG signal P_d to obtain P_S = [(p_S)_1, (p_S)_2]; and performing "difference amplification" on the two-way PPG signal P_S to finally obtain the PPG motion component p_motion.
Still further, in step (1-5), the extracted features specifically comprise time-domain features F_time = {(f_time)_1, (f_time)_2, …}, frequency-domain features F_f = {(f_f)_1, (f_f)_2, …}, time-frequency-domain features F_time-f = {(f_time-f)_1, (f_time-f)_2, …}, and information-domain features F_i = {(f_i)_1, (f_i)_2, …}; the complete feature set may thus be denoted F_A = {F_time, F_f, F_time-f, F_i}.
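The four feature domains listed above might be computed per fine-grained sample as follows; the specific statistics chosen (RMS, spectral centroid, STFT energy spread, spectral entropy) are illustrative assumptions, since the patent names the domains but not the exact features:

```python
import numpy as np
from scipy.signal import stft

def extract_features(x, fs):
    """One assumed feature per domain named in step (1-5): time, frequency,
    time-frequency, and information domain."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    p = spectrum / spectrum.sum()  # normalized power distribution
    _, _, Z = stft(x, fs=fs, nperseg=min(64, len(x)))
    return {
        "time_rms": float(np.sqrt(np.mean(x ** 2))),                # time domain
        "freq_centroid": float((freqs * p).sum()),                  # frequency domain
        "tf_energy_std": float(np.abs(Z).sum(axis=0).std()),        # time-frequency domain
        "spectral_entropy": float(-(p * np.log(p + 1e-12)).sum()),  # information domain
    }

fs = 250.0
x = np.sin(2 * np.pi * 1.0 * np.arange(0, 1.0, 1.0 / fs))  # 1 Hz swallow-band tone
feats = extract_features(x, fs)
```

In the method, H such features per fine-grained sample form F_A, from which the Q selected features form F_C.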
Further, the designated data acquisition positions in step (1) and step (2) are L = {l_1, l_2, …, l_n}, where n is the number of acquisition positions; the positions include the depression between the thyroid cartilage and the cricoid cartilage, and the intersection of the horizontal line through the lower edge of the thyroid cartilage with the superior thyroid artery. Following steps (1-1) to (1-5), n data sets S = {s_1, s_2, …, s_n} are constructed from the data acquired at these positions; applying the training procedure to each position's data set yields a different swallowing behavior recognition classifier C = {c_l1, c_l2, …, c_ln}.
Still further, any neck position at which the thyroid cartilage or cricoid cartilage fluctuates and the PPG signal changes during swallowing lies within the applicable range L of the method; the measurement positions L include a lateral position, namely the intersection of the horizontal line through the lower edge of the thyroid cartilage with the superior thyroid artery.
A system for realizing a wearable swallowing behavior identification method comprises a wearable swallowing signal acquisition module and a data processing module.
Further, the wearable swallowing signal acquisition module comprises an inertial sensing submodule, a two-way PPG sensing submodule and a microcontroller submodule; the microcontroller submodule controls reading and writing of the sensing submodules and packages the data. The supporting hardware of the signal acquisition module includes a battery and/or a battery-charging interface, and a battery or rechargeable battery powers each submodule. The device is attached to the neck with a viscoelastic band, positioning it for swallowing behavior data acquisition.
Furthermore, the data processing module implements the data processing method of the wearable swallowing behavior identification method. It runs as software on a host device, including a smartphone, smart tablet or personal computer, and exchanges data with the swallowing signal acquisition device through wireless or wired communication, including Bluetooth, Wi-Fi or a flexible flat cable; alternatively, it may be integrated into the swallowing signal acquisition device as a dedicated hardware chip circuit.
The invention has the following beneficial effects: the influence of different measurement positions on swallowing behavior recognition performance is explored. In particular, the system comprises a two-way PPG motion component extraction module that helps improve swallowing behavior recognition performance. Compared with the prior art, the system collects heterogeneous swallowing signals, ensuring a degree of information richness, while the wearing of the device and the signal acquisition process remain relatively simple; the recognition accuracy is high, good performance is achieved on data sets from different positions, and the system can be used for daily monitoring.
Drawings
Fig. 1 is a flowchart of a swallowing behavior recognition method according to the present invention.
Fig. 2 is a block diagram of a wearable swallowing behavior recognition system of the invention.
Fig. 3 is a block diagram of the device side embedded program and bluetooth packet structure according to the present invention.
FIG. 4 is a flowchart of the present invention for decoding Bluetooth data packets.
FIG. 5 is a block diagram of a portion of the computer data processing of the present invention.
Fig. 6 is a schematic diagram of a two-way PPG motion component extraction module of the present invention.
Fig. 7 is a schematic diagram of a wearing position and a schematic waveform of the wearable device of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 7, a wearable swallowing behavior recognition method includes the following steps:
step (1), train a swallowing behavior recognition model c_l1 for the middle position l_1 (the depression between the thyroid cartilage and the cricoid cartilage);
step (2), use the wearable swallowing signal acquisition device at the middle position l_1 to collect the laryngeal swallowing inertial signal I_t and the two-way photoplethysmography (PPG) signal P_t = [(p_t)_1, (p_t)_2], where m is the number of inertial signal types, k is the dimension of each inertial signal, and the number of points in a single acquisition of each signal channel is N_d = f·t, with f the sampling frequency and t the duration of a single data acquisition;
step (3), preprocess the collected inertial data I_t and two-way PPG data P_t respectively to obtain a new inertial signal I_p and the PPG motion component p_motion;
step (4), divide the preprocessed inertial data I_p and PPG motion component p_motion into fine-grained data I_s and P_motion_s, where m is the number of inertial signal types, r is the number of fine-grained short samples sliced from each long sample, and each entry contains data for N long samples;
step (5), according to the specified feature set F_C, extract the fine-grained features of all long-segment data, where the first m columns are the features corresponding to the inertial signals and the last column holds the features corresponding to the PPG motion component;
step (6), input the extracted fine-grained features into the swallowing behavior recognition model c_l1 to perform swallowing behavior recognition, and splice the fine-grained recognition results into the recognition result of the complete original data, where N is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample.
Further, step (1) comprises:
step (1-1), use a wearable swallowing signal acquisition device at the designated position l_1 to collect laryngeal swallowing data X_l1 = {I, P}, the swallowing data comprising an inertial signal I and a two-way photoplethysmography (PPG) signal P = [p_1, p_2], where m is the number of inertial signal types, k is the dimension of each inertial signal, and the number of points in a single acquisition of each signal channel is N_d = f·t, with f the sampling frequency and t the duration of a single data acquisition;
step (1-2), preprocess the inertial data I and the two-way PPG data P in the swallowing data X_l1 separately to obtain a new inertial signal I_p and the PPG motion component p_motion;
step (1-3), divide the preprocessed inertial data I_p and PPG motion component p_motion into fine-grained data I_s and P_motion_s = [(p_motion_s)_1; (p_motion_s)_2; …; (p_motion_s)_r], where m is the number of inertial signal types, r is the number of fine-grained short samples sliced from each long sample, and each entry (i_s)_uv = [i_sp1, i_sp2, …, i_spM] and (p_motion_s)_u = [p_motion_sp1, p_motion_sp2, …, p_motion_spM] contains data for M long samples;
step (1-4), determine sample labels B_l1 for the swallowing data X_l1, where r is the number of fine-grained samples contained in each long sample and the i-th fine-grained label vector covers the M long samples;
step (1-5), extract the fine-grained features of all long-segment data to obtain the feature set F_A, where M is the total number of long samples and H is the total number of features extracted from each fine-grained sample; together with the sample labels B_l1 this forms the training set s_1 = {F_A, B_l1};
step (1-6), input the training set s_1 = {F_A, B_l1} into the classification model to perform swallowing behavior recognition, and splice the fine-grained recognition results into the recognition result Y = [y_1 y_2 … y_M] of the complete original data, where M is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample; compare the recognition result with the actual labels B_l1 in s_1 and update the model hyper-parameters to obtain the swallowing behavior recognition model c_l1; then select the features used by the recognition model to form the swallowing feature set F_C for this data acquisition position, where Q is the total number of selected features for each fine-grained sample, Q ≤ H, and the classification model is a random forest.
Further, step (3) includes:
step (3-1), remove the noise of the inertial signals in the non-swallowing frequency band to obtain denoised inertial signals, each of which contains N long samples; the noise-reduction method comprises band-pass filtering; then take the modulus of each inertial signal to obtain the preprocessed inertial signal I_p;
step (3-2), remove the baseline drift of the two-way PPG signal and denoise the PPG signal with band-pass filtering, where the pass band W is related to the swallowing frequency of interest; the normal swallowing frequency is about 1 Hz, so the band-pass range is set near this frequency, yielding the filtered two-way PPG signal P_d = [(p_d)_1, (p_d)_2];
step (3-3), "separate" the denoised two-way PPG signal P_d to obtain P_S: first perform blind source separation on the two PPG channels, obtaining one signal in which the motion component has a larger share and one in which the heart-rate component has a larger share; the blind source separation method comprises independent component analysis;
step (3-4), perform "difference amplification" on the two-way PPG signal P_S: specifically, the two signals are multiplied element-wise in the time domain, which reduces the fluctuation of non-swallowing signal segments and increases the contrast of the swallowing segments, yielding the PPG motion component p_motion.
Still further, in step (1-2), the preprocessing comprises denoising the inertial signals and taking the modulus of each type of inertial signal to obtain the preprocessed inertial signal I_p; filtering the two-way PPG signal to obtain the filtered two-way PPG signal P_d = [(p_d)_1, (p_d)_2]; "separating" the denoised two-way PPG signal P_d to obtain P_S = [(p_S)_1, (p_S)_2]; and performing "difference amplification" on the two-way PPG signal P_S to finally obtain the PPG motion component p_motion.
Still further, in step (1-5), the extracted features specifically comprise time-domain features F_time = {(f_time)_1, (f_time)_2, …}, frequency-domain features F_f = {(f_f)_1, (f_f)_2, …}, time-frequency-domain features F_time-f = {(f_time-f)_1, (f_time-f)_2, …}, and information-domain features F_i = {(f_i)_1, (f_i)_2, …}; the complete feature set may thus be denoted F_A = {F_time, F_f, F_time-f, F_i}.
New swallowing data are collected at a lateral position, namely the intersection of the horizontal line through the lower edge of the thyroid cartilage with the superior thyroid artery, and the above steps are repeated to construct a swallowing behavior identification model c_l2 for the lateral position; new lateral-position data are then collected and the trained lateral-position model c_l2 is applied to perform swallowing behavior recognition.
Referring to fig. 2, the system for implementing the wearable swallowing behavior recognition method includes a wearable swallowing signal acquisition module and a data processing module.
The wearable swallowing signal acquisition module comprises a dedicated hardware circuit and an embedded program. The system collects the swallowing inertial signals and the two-way PPG signals. The inertial sensor is an MPU9250; the analog front end of the PPG sensing module is an SFH7050 chip, which integrates light sources emitting infrared and green light and a photodiode receiving the reflected light; and the PPG analog-to-digital conversion module is an AFE4490. In addition, the hardware is battery powered and supports battery charging.
The data processing module receives and decodes the sensing data; the swallowing data then undergo preprocessing and feature extraction in sequence, followed by binary classification with a random forest (signal segments are divided into two classes according to whether they contain a swallow). The two-path PPG motion component extraction step in the preprocessing module improves the swallowing behavior recognition rate.
Referring to fig. 3, the embedded program controls sensor reading and writing, packages the data into a specified format and sends them out over Bluetooth. The different types of swallowing sensing data are transmitted separately, each in its own Bluetooth packet. The sampling rate of the swallowing sensing data determines the number of sampling points in each Bluetooth packet and the data size of each sampling point. Each sensing packet comprises a 2-byte packet header, a 2-byte sequence number (id) of the current packet, and the specified number of sampling points of the specified sensing type. The sampling rates of the inertial data and the two-path PPG data are 250 Hz and 500 Hz respectively, the numbers of sampling points in the corresponding Bluetooth packet types are 5 and 10, and each sampling point occupies 18 and 6 bytes respectively, so the two packet types are 2 + 2 + 5 × 18 = 94 bytes and 2 + 2 + 10 × 6 = 64 bytes, with packet headers 0x5555 and 0xAAAA respectively.
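The packet layout described above (2-byte header, 2-byte id, a fixed number of fixed-size samples per sensor type) can be sketched as follows. This is a minimal sketch: the byte order and the internal layout of each sample record are not specified in the text and are assumptions here.

```python
import struct

# header value, samples per packet, and bytes per sample, per the text above
HEADERS = {"imu": 0x5555, "ppg": 0xAAAA}
SAMPLES_PER_PACKET = {"imu": 5, "ppg": 10}
BYTES_PER_SAMPLE = {"imu": 18, "ppg": 6}

def pack_packet(kind: str, packet_id: int, samples: bytes) -> bytes:
    """Build one Bluetooth packet: header + id + raw sample payload."""
    n = SAMPLES_PER_PACKET[kind] * BYTES_PER_SAMPLE[kind]
    assert len(samples) == n, "payload must match the type's sample count"
    # big-endian 16-bit header and id are an assumption, not stated in the text
    return struct.pack(">HH", HEADERS[kind], packet_id & 0xFFFF) + samples

# An IMU packet is 2 + 2 + 5*18 = 94 bytes; a PPG packet is 2 + 2 + 10*6 = 64 bytes.
imu_pkt = pack_packet("imu", 1, bytes(90))
ppg_pkt = pack_packet("ppg", 2, bytes(60))
assert len(imu_pkt) == 94 and len(ppg_pkt) == 64
```

The two assertions reproduce the packet sizes derived in the text.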
When the Bluetooth-to-serial module starts receiving data sent by the device, the computer simultaneously starts reading and decoding the sensing data from the serial port. On reception, the computer configures a baud rate matching the host (115200 bps) and a serial receive buffer size (4096 bytes), opens the serial port, and calls a serial-port callback to store files of a specified size (10 s of sampled data) as txt. Each sample transmits 10 × 250 = 2500 inertial sampling points and 10 × 500 = 5000 PPG sampling points, so each file stores 500 Bluetooth packets of each of the two types, i.e. (94 B + 64 B) × 500 = 79 KB of data.
Referring to fig. 4, the step of decoding data includes:
1) Decode one type of packet: find all headers of that type and determine every candidate packet;
2) If the previous candidate packet is defective, it may read into the current candidate packet; in that case delete the previous packet, otherwise proceed to the next step;
3) If the current packet is incomplete, a header of another type will appear inside the candidate packet; in that case delete the current packet, otherwise read the two-byte id after the header followed by the corresponding bytes of that sensing type;
4) Repeat the above for the other candidate packets;
5) Decode the remaining packet types in the same way.
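The decoding steps above can be sketched as a single scan over the byte stream. This is a simplified sketch under the same assumptions as before (big-endian 16-bit header and id); the "header of another type inside the candidate packet" check implements step 3.

```python
import struct

# header value -> (type name, payload size in bytes), per the packet format above
HEADERS = {0x5555: ("imu", 5 * 18), 0xAAAA: ("ppg", 10 * 6)}

def decode_stream(buf: bytes):
    """Scan a byte stream for packets, dropping truncated or incomplete candidates."""
    out, i = [], 0
    while i + 4 <= len(buf):
        header = struct.unpack_from(">H", buf, i)[0]
        if header not in HEADERS:
            i += 1                     # not a header: slide forward one byte
            continue
        kind, payload = HEADERS[header]
        end = i + 4 + payload
        if end > len(buf):             # truncated final packet: discard (step 2)
            break
        body = buf[i + 4:end]
        # A foreign header inside the payload marks the packet incomplete (step 3).
        if any(struct.unpack_from(">H", body, j)[0] in HEADERS
               for j in range(len(body) - 1)):
            i += 2
            continue
        pkt_id = struct.unpack_from(">H", buf, i + 2)[0]
        out.append((kind, pkt_id, body))
        i = end
    return out

stream = b"\x55\x55\x00\x01" + bytes(90) + b"\xaa\xaa\x00\x02" + bytes(60)
assert [k for k, _, _ in decode_stream(stream)] == ["imu", "ppg"]
```

A real decoder would also need to tolerate payload bytes that coincidentally equal a header value; the sketch keeps the logic of the listed steps.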
Referring to fig. 5, the two types of swallowing signals are preprocessed separately. The inertial signal I* = {Acc_x, Acc_y, Acc_z, Gyro_x, Gyro_y, Gyro_z}, comprising triaxial acceleration and triaxial angular velocity, is denoised in sequence by wavelet decomposition and reconstruction (12-level 'sym8' wavelet-basis decomposition), wavelet soft-threshold denoising, and moving-average smoothing (10-point window), giving the denoised inertial signal I_d. The moduli of the acceleration and of the angular velocity are then taken, giving the two fused inertial channels I_p = [|Acc|, |Gyro|], where |Acc| = (Acc_x² + Acc_y² + Acc_z²)^{1/2} and |Gyro| is defined analogously.
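The modulus fusion and moving-average smoothing can be sketched with NumPy as below. The wavelet decomposition / soft-threshold step is omitted here (it requires a wavelet library and would precede the fusion); the 2500-point length matches 10 s at 250 Hz in the embodiment.

```python
import numpy as np

def moving_average(x: np.ndarray, win: int = 10) -> np.ndarray:
    """10-point moving-average smoothing, as in the preprocessing above."""
    return np.convolve(x, np.ones(win) / win, mode="same")

def fuse_inertial(acc: np.ndarray, gyro: np.ndarray):
    """Take the modulus of the 3-axis acceleration and angular velocity to
    obtain the two fused inertial channels I_p = [|Acc|, |Gyro|]."""
    acc_mag = np.linalg.norm(acc, axis=0)    # acc: shape (3, N)
    gyro_mag = np.linalg.norm(gyro, axis=0)  # gyro: shape (3, N)
    return moving_average(acc_mag), moving_average(gyro_mag)

# Toy example: 2500 samples (10 s at 250 Hz), random data standing in for sensor output.
rng = np.random.default_rng(0)
acc = rng.normal(size=(3, 2500))
gyro = rng.normal(size=(3, 2500))
i_p = fuse_inertial(acc, gyro)
```

The fused channels are non-negative by construction, which is why a single swallow appears as a bump in each channel.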
For the two-path PPG signal P = [p_1, p_2], the preprocessing mainly comprises normalization, band-pass filtering, and two-path PPG motion component extraction; the band-pass filtering band W lies around the normal swallowing frequency of about 1 Hz, and filtering yields the new two-path PPG signal P_d = [(p_d)_1, (p_d)_2].
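The normalization and band-pass filtering can be sketched as below. The text gives no numeric pass band, only that it lies around the ~1 Hz swallowing frequency, so the (0.5, 5.0) Hz band here is an assumption; the 500 Hz rate matches the PPG sampling rate of the embodiment.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # PPG sampling rate in the embodiment

def preprocess_ppg(p: np.ndarray, band=(0.5, 5.0)) -> np.ndarray:
    """Normalise and band-pass filter one PPG channel.
    The pass band `band` is an assumption, centred on the swallowing frequency."""
    p = (p - np.mean(p)) / (np.std(p) + 1e-12)   # normalisation
    b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    return filtfilt(b, a, p)                     # zero-phase filtering

# Toy PPG: a 1 Hz sinusoid over 10 s, which the band should pass essentially intact.
sig = np.sin(2 * np.pi * 1.0 * np.arange(5000) / FS)
out = preprocess_ppg(sig)
```

`filtfilt` is used rather than a causal filter so that swallow events are not shifted in time relative to the inertial channels.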
Referring to fig. 6, the two-way PPG motion component extraction module mainly comprises two steps of "separation" and "differential amplification".
Separation: the "separation" step employs a classical time-domain ICA algorithm.
After ICA, the time-frequency difference between the two recovered signals (p_S)_1 and (p_S)_2 is increased. Although the motion component is not completely extracted, its proportion differs between the two signals: one signal, (p_S)_1, is more sensitive to motion and more strongly correlated with the original infrared PPG signal, while the other, (p_S)_2, is more sensitive to the heart rate.
Differential amplification: at this point one signal, (p_S)_1, has a larger swallow amplitude but also larger fluctuation in the stationary segments (this is the signal more correlated with the infrared PPG), while the other signal, (p_S)_2, has a smaller swallow amplitude but also smaller stationary-segment fluctuation. The two signals are therefore multiplied in the time domain, which reduces the fluctuation of the stationary segments and increases the difference between stationary and swallowing segments, giving the PPG motion component p_motion = (p_S)_1 · (p_S)_2.
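The "separation" plus "difference amplification" steps can be sketched with scikit-learn's FastICA, a standard time-domain ICA implementation. This is a sketch: in practice one would identify which recovered source is motion-dominant by its correlation with the infrared channel, which the toy example below does not do.

```python
import numpy as np
from sklearn.decomposition import FastICA

def extract_motion_component(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """ICA un-mixes the two PPG channels; the element-wise product of the
    recovered sources suppresses stationary-segment fluctuation (the
    'difference amplification' of the text)."""
    X = np.column_stack([p1, p2])             # shape (N, 2)
    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                  # recovered sources, shape (N, 2)
    return S[:, 0] * S[:, 1]                  # time-domain multiplication

# Toy mixture: a swallow-like bump ("motion") plus a 1 Hz "pulse".
t = np.arange(5000) / 500.0
motion = np.exp(-((t - 5.0) ** 2))
pulse = np.sin(2 * np.pi * 1.0 * t)
p1 = 0.8 * motion + 0.2 * pulse
p2 = 0.3 * motion + 0.7 * pulse
p_motion = extract_motion_component(p1, p2)
```

With an invertible mixing as here, ICA recovers the two sources up to sign and scale, so the product localises energy around the swallow bump.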
The two fused inertial channels and the extracted PPG motion component are divided from long-segment data into fine-grained short-segment data, and features covering the time, frequency, time-frequency and wavelet domains are extracted to form the feature set F_A, where r = 10 is the number of short segments into which each long segment is divided and N = 3 × 96 is the total number of features over the 3 signals. Each feature of each short segment is then divided by the sum of that feature over all corresponding short segments of the long segment it belongs to.
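The per-long-segment feature normalization just described can be sketched in a few lines of NumPy. The shapes follow the text: M long segments, r = 10 short segments each, N features per short segment.

```python
import numpy as np

def normalise_within_long_segment(F: np.ndarray, r: int = 10) -> np.ndarray:
    """Divide each short-segment feature by the sum of that feature over all
    r short segments of the long segment it belongs to.
    F has shape (M * r, N): rows grouped long segment by long segment."""
    M = F.shape[0] // r
    F = F.reshape(M, r, -1)
    sums = F.sum(axis=1, keepdims=True)        # per-long-segment totals
    return (F / sums).reshape(M * r, -1)

# Toy feature matrix: M=3 long segments, r=10 short segments, N=6 features.
F = np.abs(np.random.default_rng(1).normal(size=(3 * 10, 6)))
Fn = normalise_within_long_segment(F)
# After normalisation each feature sums to 1 within every long segment.
assert np.allclose(Fn.reshape(3, 10, 6).sum(axis=1), 1.0)
```

This makes short-segment features comparable across long segments with different overall signal energy.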
The classification model is a random forest. Short-segment features are taken as input, and the output short-segment classification results Y_s (where r = 10 and M is the total number of samples) are spliced into long-segment result sequences Y, on which hyper-parameter updating and performance testing are carried out. The algorithm involves two nested loops. In the outer loop, all samples are split five-fold into a training-plus-validation set S_train-valid and a test set S_test; S_train-valid is used to determine the optimal hyper-parameters HP_final, and S_test is used to evaluate them. In the inner loop, each S_train-valid is again split five-fold into a training set S_train and a validation set S_valid; every hyper-parameter combination in HP = {HP_1, HP_2, … HP_num} is trained on S_train and evaluated on the corresponding S_valid, the five-fold performance of each combination is averaged, and the best-performing combination HP_final is selected (the performance index here is the F1 score). The selected hyper-parameters are then applied to the whole S_train-valid, and the corresponding test set S_test is used to evaluate the trained model. The hyper-parameters updated include the number of trees, the minimum number of samples per leaf node, and the out-of-bag importance.
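The nested five-fold scheme above maps directly onto scikit-learn's `GridSearchCV` (inner loop, F1-scored model selection) wrapped in `cross_val_score` (outer loop, test estimate). The grid values and the toy data below are illustrative assumptions, not the embodiment's actual settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Illustrative hyper-parameter grid (number of trees, min samples per leaf).
grid = {"n_estimators": [20, 50], "min_samples_leaf": [1, 5]}

# Toy binary data standing in for the short-segment feature matrix and labels.
X = np.random.default_rng(2).normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # S_train / S_valid
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # S_train-valid / S_test
clf = GridSearchCV(RandomForestClassifier(random_state=0), grid,
                   scoring="f1", cv=inner)       # selects HP_final by mean F1
scores = cross_val_score(clf, X, y, cv=outer, scoring="f1")
```

Each of the five `scores` entries is the F1 of a model whose hyper-parameters were chosen without ever seeing that outer test fold, matching the text's separation of S_test from hyper-parameter selection.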
Referring to fig. 7, this example uses the above system to acquire swallowing data at two positions, l_1 and l_2, from 36 subjects; each measurement lasts 10 s with a recognition resolution of 1 s, and the data sets s_1 and s_2 are constructed. One position, l_1, is the depression between the thyroid cartilage and the cricoid cartilage (middle position); the other position, l_2, which has been used in previous swallowing studies, is the intersection of the horizontal line through the lower edge of the thyroid cartilage and the superior thyroid artery (lateral position). The figure shows only the waveforms of one inertial channel and one PPG channel.
The measuring process is as follows:
1) The subject sits quietly with the upper body upright, face forward and neck relaxed. The device is fixed on the neck with a buttoned elastic band, ensuring that the SFH7050 sits over the superior thyroid artery on the left (or right) side, level with the thyroid cartilage. The device is switched on, and the serial port is opened once the device and the development board have established Bluetooth communication.
2) The subject holds 5 mL of drinking water in the mouth, presses the button on the device, and swallows the water at any time within 10 s after the serial indicator lamp on the development board lights up; a single measurement lasts 10 s (the lamp stays on).
3) Step 2) was repeated 5 times.
4) For the 6th measurement, each subject remained still, keeping as motionless as possible.
5) After a rest, each subject repeated the above 6 measurements, giving 2 groups in total.
6) The above measurements were repeated with the SFH7050 aligned to the position between the thyroid cartilage and the cricoid cartilage, likewise for 2 groups.
The composition of the two constructed data sets is shown in Table 1 (reproduced as an image in the original document).

TABLE 1
In this example, the data sets are each classified with the above binary method; the recognition results and evaluation indices are defined in Tables 2 and 3 (reproduced as images in the original document).

TABLE 2

TABLE 3
The recognition results of this example are shown in Table 4, which gives the performance of the proposed method on the two position data sets (reproduced as an image in the original document).

TABLE 4
This embodiment shows that relatively high-performance swallowing behavior recognition can be achieved with a comparatively simple data acquisition method.
It should be noted that the data-segment length and the segmentation method in this embodiment include, but are not limited to, the above cases. The data-processing part can also be used with other swallowing acquisition devices that provide a laryngeal inertial signal and a two-path PPG signal.
The embodiments described in this specification merely illustrate the inventive concept. The scope of the invention is not limited to the specific forms set forth in the embodiments; it also covers equivalent technical means that those skilled in the art may conceive based on the inventive concept.

Claims (8)

1. A wearable swallowing behaviour recognition method, the method comprising the steps of:
step (1), training a swallowing behavior recognition model c_l1 for a designated position l_1;
Step (2), a wearable swallowing signal acquisition device is utilized to be positioned at a designated position l 1 Collecting laryngeal swallowing inertia signal I t And two-way photoelectric volume pulse wave PPG signal
Figure FDA0003941381690000011
Figure FDA0003941381690000012
Wherein m is the number of the inertia signal types, k is the dimension of each inertia signal, and the single acquisition point number N of each path of signal d = f.t, where f is the sampling frequency and t is the single data acquisition duration;
step (3), preprocessing the collected inertial signal I_t and two-path PPG data P_t respectively to obtain a new inertial signal I_p and the PPG motion component p_motion;
The step of step (3) comprises:
step (3-1), removing noise outside the swallowing frequency band of the inertial signal to obtain the noise-reduced inertial signal I_d, each channel of which contains N long samples, the noise-reduction method including but not limited to band-pass filtering; a modulus is then taken of each inertial signal to obtain the pre-processed inertial signal I_p;
step (3-2), removing the baseline drift of the two-path PPG signal and reducing its noise by a method including but not limited to band-pass filtering, wherein the band-pass band W relates to the swallowing frequency of interest; the normal swallowing frequency is about 1 Hz and the band-pass range lies around this frequency, yielding the filtered two-path PPG signal P_d = [(p_d)_1, (p_d)_2];
Step (3-3), carrying out two-path PPG signal after noise reduction
Figure FDA0003941381690000019
Is subjected to separation to obtain
Figure FDA00039413816900000110
Firstly, blind source separation is carried out on two PPG signals to obtain a signal with a larger ratio of motion components
Figure FDA00039413816900000111
A signal having a larger ratio to a single heart rate component
Figure FDA00039413816900000112
Blind source separation approaches include, but are not limited to, independent component analysis;
step (3-4), performing "difference amplification" on the two-path PPG signal P_S: the two signals (p_S)_1 and (p_S)_2 are multiplied in the time domain to reduce the fluctuation of non-swallowing signal segments and increase the difference of the remaining swallowing segments, obtaining the PPG motion component p_motion;
Step (4), the preprocessed inertial data are processed
Figure FDA00039413816900000118
And PPG motion component
Figure FDA00039413816900000119
Partitioning into fine-grained data
Figure FDA00039413816900000120
And
Figure FDA00039413816900000121
where m is the number of inertial signal classes, r is the number of fine-grained short samples sliced per long sample, and each term
Figure FDA0003941381690000021
Are data that contain N long samples;
step (5), extracting, according to a specified feature set F_C, the fine-grained features of all long-segment data, wherein the first m columns are the features corresponding to the inertial signals and the last column contains the features corresponding to the PPG motion component;
step (6), inputting the extracted fine-grained features into the swallowing behavior recognition model c_l1, performing swallowing behavior recognition to obtain fine-grained recognition results, and splicing them into the recognition result of the complete original data, wherein N is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample.
2. A wearable deglutition behavior recognition method as claimed in claim 1, wherein step (1) comprises:
step (1-1), using a wearable swallowing signal acquisition device at the designated position l_1, collecting laryngeal swallowing data X_l1 = {I, P}, the swallowing data comprising an inertial signal I and a two-path photoplethysmographic (PPG) signal P = [p_1, p_2], wherein m is the number of inertial signal types, k is the dimension of each inertial signal, and the number of points in a single acquisition of each channel is N_d = f·t, where f is the sampling frequency and t is the single data acquisition duration;
step (1-2), preprocessing the inertial data I and the two-path PPG data P in the swallowing data X_l1 separately to obtain a new inertial signal I_p and the PPG motion component p_motion;
Step (1-3), the preprocessed inertial data is processed
Figure FDA0003941381690000028
And PPG motion component p motion Partitioning into fine-grained data
Figure FDA0003941381690000029
And P motion_s =[(p motion_s ) 1 ;(p motion_s ) 2 ;…;(p motion_s ) r ]Where m is the number of inertial signal classes, r is the number of fine-grained short samples sliced per long sample, and each term (i) s ) uv =[i sp1 ,i sp2 ,…,i spM ],(p motion_s ) u =[p motion_sp1 ,p motion_sp2 ,…,p motion_spM ]Are all data containing M long samples;
step (1-4), determining the sample labels B_l1 for the swallowing data X_l1, wherein r is the number of fine-grained samples contained in each long sample and the i-th fine-grained label spans the M long samples;
step (1-5), extracting the fine-grained features of all long-segment data to obtain the feature set F_A, wherein M is the total number of long samples and H is the total number of features extracted from each fine-grained datum in F_A; together with the sample labels B_l1 this forms the training set s_1 = {F_A, B_l1};
step (1-6), inputting the training set s_1 = {F_A, B_l1} into the classification model for swallowing behavior recognition, and splicing the fine-grained recognition results into the recognition result Y = [y_1 y_2 … y_M] of the complete original data, wherein M is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample; the recognition result is compared with the actual labels B_l1 in s_1 and the model is updated to obtain the swallowing behavior recognition model c_l1, and the features adopted by the recognition model are selected to form the swallowing feature set F_C at this data acquisition position, wherein Q is the number of features in F_C and Q ≤ H; the classification model includes but is not limited to random forest.
3. A wearable swallowing behaviour recognition method according to claim 2, wherein in step (1-2) the preprocessing comprises denoising the inertial signals and taking a modulus of each inertial signal to obtain the pre-processed inertial signal I_p; filtering the two-path PPG signal to obtain the filtered two-path signal P_d = [(p_d)_1, (p_d)_2]; performing "separation" on the noise-reduced P_d to obtain P_S = [(p_S)_1, (p_S)_2]; performing "difference amplification" on the two-path signal P_S; and finally obtaining the PPG motion component p_motion.
4. A wearable swallowing behavior recognition method as claimed in claim 2, wherein the designated data acquisition positions in steps (1) and (2) are L = {l_1, l_2, … l_n}, including but not limited to the depression between the thyroid cartilage and the cricoid cartilage, the intersection of the horizontal line through the lower edge of the thyroid cartilage and the superior thyroid artery, and other positions with better swallowing-signal quality, wherein n is the number of acquisition positions; thus, in step (1-5), n data sets S = {s_1, s_2, … s_n} may be constructed from the data acquired at each of the above positions, yielding different swallowing behavior recognition classifiers C = {c_l1, c_l2, …, c_ln}.
5. A wearable swallowing behavior recognition method as claimed in claim 4, wherein the neck locations at which the thyroid or cricoid cartilage moves and the PPG signal changes during swallowing fall within the application scope L of the method, and the measured locations L include but are not limited to the lateral position, i.e. the intersection of the horizontal line through the lower edge of the thyroid cartilage and the superior thyroid artery.
6. A system implementing the wearable swallowing behavior recognition method of claim 1, wherein the system comprises a wearable swallowing signal acquisition module and a data processing module.
7. The system of claim 6, wherein the wearable swallowing signal acquisition module mainly comprises an inertial sensing submodule, a two-way PPG sensing submodule and a microcontroller submodule, and the microcontroller submodule is used for controlling reading and writing of the sensing submodule and data packaging; the equipment supported by the signal acquisition module comprises a battery and/or a battery charging interface, and the battery or a rechargeable battery is adopted to supply power to each submodule; the device is placed on the neck position required for data acquisition of swallowing behavior by means of an adhesive elastic band worn around the neck.
8. The system of claim 6, wherein the data processing module implements the wearable swallowing behavior recognition method of claim 1; the data processing module either runs as software on an upper computer, including a smartphone, smart tablet or personal computer, and exchanges data with the swallowing signal acquisition device by wireless or wired communication, including Bluetooth, WiFi or a flex cable, or is integrated, as a dedicated hardware chip circuit, into the device hosting the swallowing signal acquisition module.
CN202111179760.XA 2021-10-11 2021-10-11 Wearable swallowing behavior identification method and system Active CN113729640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111179760.XA CN113729640B (en) 2021-10-11 2021-10-11 Wearable swallowing behavior identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111179760.XA CN113729640B (en) 2021-10-11 2021-10-11 Wearable swallowing behavior identification method and system

Publications (2)

Publication Number Publication Date
CN113729640A CN113729640A (en) 2021-12-03
CN113729640B true CN113729640B (en) 2023-03-21

Family

ID=78726272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111179760.XA Active CN113729640B (en) 2021-10-11 2021-10-11 Wearable swallowing behavior identification method and system

Country Status (1)

Country Link
CN (1) CN113729640B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107518896A (en) * 2017-07-12 2017-12-29 中国科学院计算技术研究所 A kind of myoelectricity armlet wearing position Forecasting Methodology and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016038585A1 (en) * 2014-09-12 2016-03-17 Blacktree Fitness Technologies Inc. Portable devices and methods for measuring nutritional intake
CN108348154A (en) * 2015-08-12 2018-07-31 瓦伦赛尔公司 Method and apparatus for detecting movement via opto-mechanical

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107518896A (en) * 2017-07-12 2017-12-29 中国科学院计算技术研究所 A kind of myoelectricity armlet wearing position Forecasting Methodology and system

Also Published As

Publication number Publication date
CN113729640A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
Mannini et al. Activity recognition in youth using single accelerometer placed at wrist or ankle
Amft et al. Methods for detection and classification of normal swallowing from muscle activation and sound
Zhang et al. Diet eyeglasses: Recognising food chewing using EMG and smart eyeglasses
US7559903B2 (en) Breathing sound analysis for detection of sleep apnea/popnea events
CN103841888B (en) The apnea and hypopnea identified using breathing pattern is detected
CN108670200A (en) A kind of sleep sound of snoring classification and Detection method and system based on deep learning
CN103458777A (en) Method and device for swallowing impairment detection
Krakow et al. Instruments and techniques for investigating nasalization and velopharyngeal function in the laboratory: An introduction
CN103687540A (en) Osa/csa diagnosis using recorded breath sound amplitude profile and pitch contour
KR102134154B1 (en) Pattern Recognition System and Mehod of Ultra-Wideband Respiration Data Based on 1-Dimension Convolutional Neural Network
KR101967342B1 (en) An exercise guide system by using wearable device
CN110200640A (en) Contactless Emotion identification method based on dual-modality sensor
CN103263271A (en) Non-contact automatic blood oxygen saturation degree measurement system and measurement method
CN113520343A (en) Sleep risk prediction method and device and terminal equipment
Ouyang et al. An asymmetrical acoustic field detection system for daily tooth brushing monitoring
US20200060606A1 (en) Methods and devices using swallowing accelerometry signals for swallowing impairment detection
WO2013086615A1 (en) Device and method for detecting congenital dysphagia
WO2020165066A1 (en) Methods and devices for screening swallowing impairment
CN113729640B (en) Wearable swallowing behavior identification method and system
US20200170562A1 (en) Methods and devices for determining signal quality for a swallowing impairment classification model
FR3064463A1 (en) METHOD FOR DETERMINING AN ENSEMBLE OF AT LEAST ONE CARDIO-RESPIRATORY DESCRIPTOR OF AN INDIVIDUAL DURING ITS SLEEP AND CORRESPONDING SYSTEM
CN115886720A (en) Wearable eyesight detection device based on electroencephalogram signals
CN106361327B (en) Waking state detection method and system in sleep state analysis
CN215349053U (en) Congenital heart disease intelligent screening robot
US11723588B2 (en) Device for apnea detection, system and method for expediting detection of apnea events of a user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant