CN113729640B - Wearable swallowing behavior identification method and system - Google Patents
- Publication number
- CN113729640B (granted patent; application number CN202111179760.XA / CN202111179760A)
- Authority
- CN
- China
- Prior art keywords
- swallowing
- data
- signal
- ppg
- inertial
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/42—Detecting, measuring or recording for evaluating the gastrointestinal, the endocrine or the exocrine systems
- A61B5/4205—Evaluating swallowing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6802—Sensor mounted on worn items
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7225—Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/725—Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
A wearable swallowing behavior identification method comprises the following steps: step (1), train a swallowing behavior recognition model c_l1 for a given position l_1; step (2), use a wearable device to collect the swallowing inertial signal and the two-way photoplethysmography (PPG) signal at the designated laryngeal position; step (3), preprocess the inertial data I and the two-way PPG data P to obtain a new inertial signal I* and the PPG motion component p_motion; step (4), partition the preprocessed inertial data I* and PPG signal p_motion into fine-grained data and extract features; step (5), input the extracted fine-grained features into the swallowing behavior recognition model to obtain a recognition result; step (6), collect swallowing data at different positions and repeat the above steps to construct swallowing recognition models for the different positions; step (7), collect swallowing data at different positions and perform swallowing recognition on all of it with the recognition model of a designated position. Also disclosed is a system implementing the wearable swallowing behavior recognition method. The invention can be used for daily monitoring and has high accuracy.
Description
Technical Field
The invention relates to the technical field of swallowing behavior identification, and in particular to a swallowing behavior identification method and system based on a wearable swallowing monitoring device.
Background
Swallowing is a complex physiological activity, ubiquitous in daily life, that requires the coordination of multiple nerves and muscles and is a prerequisite for food intake and nutrition. Common diseases such as stroke and Parkinson's disease cause swallowing disorders, which can lead to serious consequences such as aspiration and lung infection. However, traditional clinical assessments such as the videofluoroscopic swallowing study (VFSS) and the videoendoscopic swallowing study (VESS) may expose the subject to radiation or discomfort, and the testing environment is limited to hospitals. Mobile, and ideally wearable, swallowing monitoring is therefore essential, and swallowing behavior recognition is a fundamental step in mobile swallowing studies.
Current wearable swallowing behavior recognition research mainly comprises four links: sensing the swallowing signal, preprocessing it, extracting features, and judging whether the signal contains swallowing behavior. For sensing, myoelectric, image, sound, respiratory-flow, piezoelectric, acceleration and other signals are mainly adopted at present. Common preprocessing steps include band-pass filtering and wavelet-threshold denoising; many studies extract features from the time domain, frequency domain, time-frequency domain and information domain, and then train models such as linear discriminant analysis and support vector machines for binary classification of whether a signal contains swallowing behavior.
Existing research has several problems. An important one is that many studies collect only a single kind of swallowing signal, which limits information richness. To address this, some studies increase the number of measurement channels of a single signal, e.g. high-density surface electromyography, but such measurements are cumbersome and uncomfortable. Adopting different kinds of swallowing signals can, to a certain degree, resolve the trade-off between information richness and measurement complexity.
Therefore, a wearable swallowing behavior identification method and system that is simple in data acquisition and usable for daily monitoring needs to be constructed.
Disclosure of Invention
In order to overcome the drawbacks of existing swallowing behavior recognition systems (complicated test procedures, poor subject experience, unsuitability for daily monitoring and low accuracy), the invention provides a wearable swallowing behavior recognition method and system that can be used for daily monitoring with high accuracy.
The technical solution adopted by the invention to solve this problem is as follows:
a wearable swallowing behavior recognition method, comprising the steps of:
step (1), train a certain location l 1 Swallowing behavior recognition model c l1 ;
Step (2), a wearable swallowing signal acquisition device is utilized to be positioned at a designated position l 1 Collecting laryngeal swallowing inertia informationNumber I t And two-way photoelectric volume pulse wave PPG signalWherein m is the number of the inertia signal types, k is the dimension of each inertia signal, and the single acquisition point number N of each path of signal d = f.t, where f is the sampling frequency and t is the single data acquisition duration;
step (3) of respectively comparing the collected inertial data I t And two-way PPG data P t Preprocessing the signal to obtain a new inertial signalAnd PPG motion component
Step (4), preprocessed inertial data are processedAnd PPG motion componentPartitioning into fine-grained dataAndwhere m is the number of inertial signal classes, r is the number of fine-grained short samples sliced per long sample, and each termAre data that contain N long samples;
step (5), according to the specified feature set F C Extracting fine-grained features of all long-segment dataWherein the first m columns are corresponding characteristics of the inertial signal, and the last column is PPG motion component corresponding characteristics;
step (6), extracting fine-grained characteristicsInput swallowing behavior recognition model c l1 Performing swallowing behavior recognition, and obtaining a fine-grained recognition resultSplicing to obtain the identification result of the complete original dataWhere N is the total number of long samples and r is the number of fine-grained short samples sliced per long sample.
Further, step (1) comprises:

Step (1-1): use the wearable swallowing signal acquisition device at the designated position l_1 to collect the laryngeal swallowing data X_l1 = {I, P}, comprising an inertial signal I (of m types, each of dimension k) and a two-way photoplethysmography (PPG) signal P = [p_1, p_2]; the number of points per acquisition for each channel is N_d = f·t, where f is the sampling frequency and t is the duration of a single acquisition.

Step (1-2): preprocess the inertial data I and the two-way PPG data P in the swallowing data X_l1 to obtain the new inertial signal I* and the PPG motion component p_motion.

Step (1-3): partition the preprocessed inertial data I* and PPG motion component p_motion into fine-grained data I_s and P_motion_s = [(p_motion_s)_1; (p_motion_s)_2; …; (p_motion_s)_r], where m is the number of inertial signal types, r is the number of fine-grained short samples sliced from each long sample, and each term (i_s)_uv = [i_sp1, i_sp2, …, i_spM] and (p_motion_s)_u = [p_motion_sp1, p_motion_sp2, …, p_motion_spM] contains M long samples.

Step (1-4): determine the sample labels B_l1 of the swallowing data X_l1, where r is the number of fine-grained samples contained in each long sample and the i-th fine-grained label covers M long samples.

Step (1-5): extract the fine-grained features of all long-segment data to obtain the feature set F_A, where M is the total number of long samples and H is the total number of features extracted from each fine-grained sample; together with the sample labels B_l1 this forms the training set s_1 = {F_A, B_l1}.

Step (1-6): input the training set s_1 = {F_A, B_l1} into the classification model for swallowing behavior recognition, splice the fine-grained recognition results into the recognition result Y = [y_1 y_2 … y_M] of the complete original data (M is the total number of long samples, r the number of fine-grained short samples per long sample), compare the recognition result against the actual labels B_l1 in s_1 and update the model hyperparameters to obtain the swallowing behavior recognition model c_l1; the features adopted by the recognition model are selected to form the swallowing feature set F_C for this acquisition position, where Q (Q ≤ H) is the total number of features retained per fine-grained sample. The classification model is a random forest.
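As a minimal sketch (not the patented implementation; all names are illustrative), the slicing of each long sample into r fine-grained short samples and the splicing of per-segment predictions back into per-long-sample rows described in steps (1-3) and (1-6) can be expressed as:

```python
import numpy as np

def slice_long_sample(signal, r):
    """Split one long sample (1-D array) into r equal fine-grained short segments."""
    n = len(signal) - len(signal) % r      # drop the remainder so segments are equal
    return signal[:n].reshape(r, -1)

def splice_predictions(short_preds, r):
    """Regroup per-short-segment predictions into one row per long sample."""
    return np.asarray(short_preds).reshape(-1, r)

sig = np.arange(2500)                      # one 10 s long sample at 250 Hz
segments = slice_long_sample(sig, 10)      # r = 10 short segments of 250 points each
preds = [0] * 7 + [1] * 3                  # hypothetical per-segment labels
long_rows = splice_predictions(preds, 10)  # shape (1, 10): one long sample
```

A long-sample decision (e.g. "contains swallowing") can then be derived from each row, e.g. by majority vote.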
Further, step (3) comprises:

Step (3-1): remove the noise of the inertial signals in the non-swallowing frequency bands to obtain the denoised inertial signals, each containing N long samples; the noise-reduction method is band-pass filtering. Then take the modulus of each inertial signal to obtain the preprocessed inertial signal I*_t.

Step (3-2): remove the baseline drift of the two-way PPG signal and denoise it by band-pass filtering. The pass band W is related to the swallowing frequency of interest; the normal swallowing frequency is about 1 Hz, so the pass band is set around this frequency, yielding the filtered two-way PPG signal P_d.

Step (3-3): "separate" the denoised two-way PPG signal P_d to obtain P_S. First perform blind source separation on the two PPG channels to obtain one signal with a larger motion-component ratio, (p_S)_1, and one with a larger heart-rate-component ratio, (p_S)_2; the blind source separation method is independent component analysis (ICA).

Step (3-4): perform "difference amplification" on the two-way PPG signal P_S: multiply the two channels in the time domain, which reduces the fluctuation of non-swallowing signal segments and increases their difference from swallowing segments, yielding the PPG motion component p_motion.
Further, in step (1-2) the preprocessing comprises: denoising the inertial signals and taking the modulus of each type of inertial signal to obtain the preprocessed inertial signal I*; filtering the two-way PPG signal to obtain the filtered signal P_d = [(p_d)_1, (p_d)_2]; "separating" the denoised two-way PPG signal P_d to obtain P_S = [(p_S)_1, (p_S)_2]; and performing "difference amplification" on P_S to finally obtain the PPG motion component p_motion.
Still further, in step (1-5) the extracted features specifically comprise time-domain features F_time = {(f_time)_1, (f_time)_2, …}, frequency-domain features F_f = {(f_f)_1, (f_f)_2, …}, time-frequency-domain features F_time-f = {(f_time-f)_1, (f_time-f)_2, …} and information-domain features F_i = {(f_i)_1, (f_i)_2, …}; the complete feature set may thus be denoted F_A = {F_time, F_f, F_time-f, F_i}.
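The patent does not enumerate the individual features, so as a generic illustration (feature names are assumptions), a fine-grained segment might be summarized by time- and frequency-domain statistics like these:

```python
import numpy as np

def segment_features(x, fs):
    """Example time- and frequency-domain features for one fine-grained segment."""
    feats = {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": np.sqrt(np.mean(x ** 2)),       # root-mean-square amplitude
        "peak_to_peak": np.ptp(x),
    }
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    feats["dominant_freq"] = freqs[np.argmax(spectrum)]  # strongest frequency component
    return feats

fs = 250.0
t = np.arange(0, 1, 1 / fs)
seg = np.sin(2 * np.pi * 1.0 * t)               # a 1 Hz test segment
f = segment_features(seg, fs)
```

Time-frequency- and information-domain features (e.g. wavelet coefficients, entropies) would be appended in the same per-segment fashion.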
Further, the designated data acquisition positions in step (1) and step (2) are L = {l_1, l_2, … l_n}, where n is the number of acquisition positions, including the depression between the thyroid cartilage and the cricoid cartilage, and the intersection of the horizontal line through the lower edge of the thyroid cartilage with the superior thyroid artery. Following steps (1-1) to (1-5), n data sets S = {s_1, s_2, … s_n} are constructed from the data acquired at these positions and applied per position, yielding different classifiers C = {c_l1, c_l2, …, c_ln} for swallowing behavior recognition.
Still further, any neck position where the thyroid or cricoid cartilage visibly moves and the PPG signal changes during swallowing is within the applicable range L of the method; the measurement positions L include a lateral position, i.e. the intersection of the horizontal line through the lower edge of the thyroid cartilage with the superior thyroid artery.
A system for realizing a wearable swallowing behavior identification method comprises a wearable swallowing signal acquisition module and a data processing module.
Further, the wearable swallowing signal acquisition module comprises an inertial sensing submodule, a two-way PPG sensing submodule and a microcontroller submodule; the microcontroller submodule controls the reading and writing of the sensing submodules and packages the data. The device carrying the signal acquisition module includes a battery and/or a battery-charging interface, and each submodule is powered by the battery or a rechargeable battery. The device is fixed at the neck position for swallowing data acquisition by a viscoelastic band attached to the neck.
Furthermore, the data processing module implements the data processing method of the wearable swallowing behavior identification method. It either runs as software on a host computer (a smartphone, tablet or personal computer) and exchanges data with the swallowing signal acquisition device by wireless or wired communication (Bluetooth, WiFi or a flexible flat cable), or is integrated into the swallowing signal acquisition device as a dedicated hardware chip circuit.
The invention has the following beneficial effects: the influence of different measurement positions on swallowing behavior recognition performance is explored. In particular, the system comprises a two-way PPG motion component extraction module that improves swallowing behavior recognition performance. Compared with the prior art, the system collects heterogeneous swallowing signals to ensure a certain information richness, the device is relatively simple to wear and the signal acquisition process straightforward, the recognition accuracy is high, good performance is achieved on data sets from different positions, and the system can be used for daily monitoring.
Drawings
Fig. 1 is a flowchart of a swallowing behavior recognition method according to the present invention.
Fig. 2 is a block diagram of a wearable swallowing behavior recognition system of the invention.
Fig. 3 is a block diagram of the device-side embedded program and the Bluetooth packet structure according to the invention.
Fig. 4 is a flowchart of Bluetooth data packet decoding according to the invention.
Fig. 5 is a block diagram of the computer-side data processing according to the invention.
Fig. 6 is a schematic diagram of a two-way PPG motion component extraction module of the present invention.
Fig. 7 is a schematic diagram of a wearing position and a schematic waveform of the wearable device of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 7, a wearable swallowing behavior recognition method includes the following steps:
step (1), training the intermediate position l 1 Swallowing behavior recognition model c (depression between thyroid cartilage and cricoid cartilage) l1 ;
Step (2), the wearable swallowing signal acquisition equipment is utilized to be at the middle position l 1 Collecting laryngeal swallowing inertia signal I t And two-way photoelectric volume pulse wave PPG signalWherein m is the number of the inertia signal types, k is the dimension of each inertia signal, and the single acquisition point number N of each path of signal d = f.t, where f is the sampling frequency and t is the single data acquisition duration;
step (3), respectively comparing the collected inertial data I t And two-way PPG data P t Pre-processing to obtain new inertial signalAnd PPG motion component
Step (4), preprocessed inertial data are processedAnd PPG motion componentPartitioning into fine-grained dataAndwherein m is the number of inertial signal types, r isThe number of fine-grained short samples per long sample cut, and each entryAre data that contain N long samples;
step (5), according to the specified feature set F C Extracting fine-grained features of all long-segment dataWherein the first m columns are corresponding characteristics of inertial signals, and the last column is corresponding characteristics of PPG motion components;
step (6), extracting fine-grained characteristicsInput swallowing behavior recognition model c l1 Carrying out swallowing behavior recognition and obtaining a fine-grained recognition resultSplicing to obtain the identification result of the complete original dataWhere N is the total number of long samples and r is the number of fine-grained short samples sliced per long sample.
Further, step (1) comprises:

Step (1-1): use the wearable swallowing signal acquisition device at the designated position l_1 to collect the laryngeal swallowing data X_l1 = {I, P}, comprising an inertial signal I (of m types, each of dimension k) and a two-way photoplethysmography (PPG) signal P = [p_1, p_2]; the number of points per acquisition for each channel is N_d = f·t, where f is the sampling frequency and t is the duration of a single acquisition.

Step (1-2): preprocess the inertial data I and the two-way PPG data P in the swallowing data X_l1 to obtain the new inertial signal I* and the PPG motion component p_motion.

Step (1-3): partition the preprocessed inertial data I* and PPG motion component p_motion into fine-grained data I_s and P_motion_s = [(p_motion_s)_1; (p_motion_s)_2; …; (p_motion_s)_r], where m is the number of inertial signal types, r is the number of fine-grained short samples sliced from each long sample, and each term (i_s)_uv = [i_sp1, i_sp2, …, i_spM] and (p_motion_s)_u = [p_motion_sp1, p_motion_sp2, …, p_motion_spM] contains M long samples.

Step (1-4): determine the sample labels B_l1 of the swallowing data X_l1, where r is the number of fine-grained samples contained in each long sample and the i-th fine-grained label covers M long samples.

Step (1-5): extract the fine-grained features of all long-segment data to obtain the feature set F_A, where M is the total number of long samples and H is the total number of features extracted from each fine-grained sample; together with the sample labels B_l1 this forms the training set s_1 = {F_A, B_l1}.

Step (1-6): input the training set s_1 = {F_A, B_l1} into the classification model for swallowing behavior recognition, splice the fine-grained recognition results into the recognition result Y = [y_1 y_2 … y_M] of the complete original data (M is the total number of long samples, r the number of fine-grained short samples per long sample), compare the recognition result against the actual labels B_l1 in s_1 and update the model hyperparameters to obtain the swallowing behavior recognition model c_l1; the features adopted by the recognition model are selected to form the swallowing feature set F_C for this acquisition position, where Q (Q ≤ H) is the total number of features retained per fine-grained sample. The classification model is a random forest.
Further, step (3) comprises:

Step (3-1): remove the noise of the inertial signals in the non-swallowing frequency bands to obtain the denoised inertial signals, each containing N long samples; the noise-reduction method comprises band-pass filtering. Then take the modulus of each inertial signal to obtain the preprocessed inertial signal I*_t.

Step (3-2): remove the baseline drift of the two-way PPG signal and denoise it by band-pass filtering. The pass band W is related to the swallowing frequency of interest; the normal swallowing frequency is about 1 Hz, so the pass band is set around this frequency, yielding the filtered two-way PPG signal P_d.

Step (3-3): "separate" the denoised two-way PPG signal P_d to obtain P_S. First perform blind source separation on the two PPG channels to obtain one signal with a larger motion-component ratio, (p_S)_1, and one with a larger heart-rate-component ratio, (p_S)_2; the blind source separation methods include independent component analysis (ICA).

Step (3-4): perform "difference amplification" on the two-way PPG signal P_S: multiply the two channels in the time domain, which reduces the fluctuation of non-swallowing signal segments and increases their difference from swallowing segments, yielding the PPG motion component p_motion.
Still further, in step (1-2) the preprocessing comprises: denoising the inertial signals and taking the modulus of each type of inertial signal to obtain the preprocessed inertial signal I*; filtering the two-way PPG signal to obtain the filtered signal P_d = [(p_d)_1, (p_d)_2]; "separating" the denoised two-way PPG signal P_d to obtain P_S = [(p_S)_1, (p_S)_2]; and performing "difference amplification" on P_S to finally obtain the PPG motion component p_motion.
Still further, in step (1-5) the extracted features specifically comprise time-domain features F_time = {(f_time)_1, (f_time)_2, …}, frequency-domain features F_f = {(f_f)_1, (f_f)_2, …}, time-frequency-domain features F_time-f = {(f_time-f)_1, (f_time-f)_2, …} and information-domain features F_i = {(f_i)_1, (f_i)_2, …}; the complete feature set may thus be denoted F_A = {F_time, F_f, F_time-f, F_i}.
New swallowing data is collected at the lateral position, i.e. the intersection of the horizontal line through the lower edge of the thyroid cartilage with the superior thyroid artery, and the above steps are repeated to construct a swallowing behavior recognition model c_l2 for the lateral position; newly collected lateral-position data is then recognized with the trained lateral-position model c_l2.
Referring to fig. 2, the system for implementing the wearable swallowing behavior recognition method includes a wearable swallowing signal acquisition module and a data processing module.
The wearable swallowing signal acquisition module comprises dedicated hardware circuitry and an embedded program. The system collects the swallowing inertial signal and the two-way PPG signal. The inertial sensor is an MPU9250. The analog front end of the PPG sensing module uses an SFH7050 chip, which integrates light sources emitting infrared and green light and a photodiode receiving the reflected light; PPG analog-to-digital conversion uses an AFE4490. In addition, the hardware is battery powered and supports battery charging.
The data processing module receives and decodes the sensing data; the swallowing data then undergoes preprocessing and feature extraction in sequence, followed by binary classification with a random forest (signal segments are divided into two classes according to whether they contain swallowing). The two-way PPG motion component extraction module inside the preprocessing module improves the swallowing behavior recognition rate.
Referring to fig. 3, the embedded program controls the reading and writing of the sensors, packages the data into a specified format, and sends it out over Bluetooth. The different types of swallowing sensing data are transmitted separately, each in its own Bluetooth packet. The sampling rate of each sensing data type determines the number of sampling points per Bluetooth packet and the data size of each sampling point. Each sensing data packet comprises a 2-byte header, a 2-byte sequence number (id) of the current packet, and the specified number of sampling points of the given sensing type. The sampling rates of the inertial data and the two-way PPG data are 250 Hz and 500 Hz respectively; the corresponding packets carry 5 and 10 sampling points of 18 and 6 bytes each, so the packet sizes are 2 + 2 + 5 × 18 = 94 bytes and 2 + 2 + 10 × 6 = 64 bytes, with headers 0x5555 and 0xAAAA respectively.
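The packet layout can be reproduced with a short encoding sketch; field order matches the description above, but byte order and payload contents are assumptions the patent does not specify:

```python
import struct

HEADER_INERTIAL = 0x5555   # header of inertial packets, per the description
HEADER_PPG = 0xAAAA        # header of PPG packets

def pack_packet(header, seq_id, samples, sample_size):
    """Build one packet: 2-byte header + 2-byte sequence id + raw sample payload."""
    assert all(len(s) == sample_size for s in samples)
    payload = b"".join(samples)
    return struct.pack(">HH", header, seq_id) + payload

# 5 inertial samples of 18 bytes -> 2 + 2 + 5*18 = 94 bytes
inertial_pkt = pack_packet(HEADER_INERTIAL, 1, [bytes(18)] * 5, 18)
# 10 PPG samples of 6 bytes -> 2 + 2 + 10*6 = 64 bytes
ppg_pkt = pack_packet(HEADER_PPG, 1, [bytes(6)] * 10, 6)
```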
When the Bluetooth-to-serial module starts receiving data sent by the device, the computer side starts reading and decoding the sensing data delivered over the serial port. The computer configures the baud rate (115200 bps) to match the host and the serial receive buffer size (4096 bytes), opens the serial port, and a serial callback function stores txt files of a specified size (10 s of sampled data). Each sample holds 10 × 250 = 2500 inertial sampling points and 10 × 500 = 5000 PPG sampling points, so each file stores 500 Bluetooth packets of each of the two kinds, i.e. (94 B + 64 B) × 500 = 79 KB of data.
Referring to fig. 4, decoding the data comprises the following steps:

1) Decode one type of data packet: find all headers of that type and determine each candidate packet.

2) If the previous packet was defective, part of the current candidate packet was consumed when reading the previous candidate; in that case delete the previous packet. Otherwise proceed to the next step.

3) If the current packet is incomplete, a header of another type must lie inside the candidate packet, and the current packet is deleted; otherwise continue reading the two-byte id and the corresponding bytes of sensing data after the header.

4) Repeat the above for the other candidate packets.

5) Decode the remaining packet types in the same manner.
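A simplified decoder following the steps above might scan a byte stream for one type's header and keep only candidates long enough to be complete; the helper name and the exact defect handling are illustrative, not the patent's implementation:

```python
import struct

def decode_packets(stream, header, packet_len):
    """Find candidate packets of one type by header and keep only complete ones."""
    magic = struct.pack(">H", header)
    packets, pos = [], 0
    while True:
        pos = stream.find(magic, pos)
        if pos < 0:
            break
        candidate = stream[pos:pos + packet_len]
        if len(candidate) == packet_len:       # complete: read 2-byte id + payload
            (seq_id,) = struct.unpack(">H", candidate[2:4])
            packets.append((seq_id, candidate[4:]))
        pos += 2                                # advance past this header, keep scanning
    return packets

stream = (struct.pack(">HH", 0xAAAA, 7) + bytes(60)   # one complete 64-byte PPG packet
          + struct.pack(">H", 0xAAAA) + bytes(10))    # one truncated packet at the tail
decoded = decode_packets(stream, 0xAAAA, 64)          # keeps only the complete packet
```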
Referring to fig. 5, the two kinds of swallowing signals are preprocessed separately. The inertial signal I* = {Acc_x, Acc_y, Acc_z, Gyro_x, Gyro_y, Gyro_z} comprises triaxial acceleration and triaxial angular velocity; it is denoised by wavelet decomposition and reconstruction (12-level decomposition with the 'sym8' wavelet basis), wavelet soft-threshold denoising and moving-average smoothing (10-point window) in sequence, and the moduli of the acceleration and of the angular velocity are then taken separately to obtain two fused inertial channels. For the two-way PPG signal, the preprocessing mainly comprises normalization, band-pass filtering and two-way PPG motion component extraction; band-pass filtering yields the new two-way PPG signal P_d.
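The modulus fusion and 10-point moving-average smoothing mentioned above can be sketched as follows; the wavelet-denoising stage is omitted here and the function names are illustrative:

```python
import numpy as np

def fuse_modulus(x, y, z):
    """Fuse a triaxial channel (e.g. acceleration) into one magnitude channel."""
    return np.sqrt(x ** 2 + y ** 2 + z ** 2)

def moving_average(signal, window=10):
    """Moving-average smoothing with a 10-point window, as in the text."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

acc = np.array([[3.0, 0.0, 4.0],
                [0.0, 0.0, 5.0]])             # two triaxial acceleration samples
acc_mag = fuse_modulus(acc[:, 0], acc[:, 1], acc[:, 2])
smoothed = moving_average(np.ones(100))        # constant signal stays constant inside
```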
Referring to fig. 6, the two-way PPG motion component extraction module mainly comprises two steps of "separation" and "differential amplification".
Separation: the "separation" step employs a classical time-domain ICA algorithm.
After ICA, the time-frequency difference between the two recovered signals is increased. Although the motion component is not completely extracted, its proportion differs between the two channels: one channel is more sensitive to motion and correlates more strongly with the original infrared-light PPG signal, while the other channel is more sensitive to heart rate.
Differential amplification: at this point one channel has a larger swallow amplitude but also larger fluctuation in the stationary segments (this is the channel more correlated with the infrared PPG signal), while the other channel has a smaller swallow amplitude but smaller stationary-segment fluctuation. The two channels are therefore multiplied, which reduces the fluctuation of the stationary segments and enlarges the difference between stationary and swallowing segments, yielding the swallowing motion signal.
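The multiplication step can be illustrated on synthetic signals (illustrative data only; the separation itself, e.g. FastICA, is assumed already done):

```python
# After blind-source separation, one channel has a large swallow deflection
# with large stationary noise, the other a small deflection with small
# noise. Multiplying them sample-by-sample shrinks the stationary-noise
# product while the co-occurring swallow bumps reinforce each other.
import numpy as np

t = np.linspace(0, 10, 5000)                  # 10 s at 500 Hz
swallow = np.exp(-((t - 5.0) / 0.3) ** 2)     # shared swallow bump at t = 5 s
rng = np.random.default_rng(1)
s1 = 1.0 * swallow + 0.10 * rng.normal(size=t.size)  # motion-dominant channel
s2 = 0.4 * swallow + 0.02 * rng.normal(size=t.size)  # heart-rate-dominant channel

p_motion = s1 * s2   # element-wise product: the PPG motion component
```

In this toy example the stationary-segment product stays near zero while the swallow segment retains a clear peak, which is the contrast the "differential amplification" step exploits.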
The two fused inertial channels and the extracted PPG motion component, as long-segment data, are divided into fine-grained short segments, and features covering the time domain, frequency domain, time-frequency domain, and wavelet domain are extracted from each short segment to form the feature set, where r = 10 is the number of short segments into which each long segment is divided and N = m·H = 3 × 96 is the total number of features over the 3 signals. Each feature of each short segment is then divided by the sum of that feature over all short segments of the long segment it belongs to.
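The per-long-segment normalization can be sketched as follows (shapes follow the text — r = 10 short segments, N = 3 × 96 = 288 features — while the values are random placeholders):

```python
# Each short-segment feature is divided by the sum of that feature over
# the r short segments of the same long segment, so every feature column
# sums to 1 within a long segment.
import numpy as np

r, n_features = 10, 3 * 96
features = np.abs(np.random.default_rng(2).normal(size=(r, n_features)))

normalized = features / features.sum(axis=0, keepdims=True)
```

This makes the short-segment features relative to their own long segment, removing per-recording scale differences.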
The classification model is a random forest. The short-segment features serve as input, and the output short-segment classification results (r = 10 per long segment, M long samples in total) are spliced into long-segment result sequences, on which hyper-parameter updating and performance testing are carried out. The algorithm involves two nested loops. In the outer loop, the full sample set is split five-fold into a training-plus-validation set S_train-valid and a test set S_test; S_train-valid is used to determine the optimal hyper-parameters HP_final, and S_test to evaluate their performance. In the inner loop, each S_train-valid is again split five-fold into a training set S_train and a validation set S_valid: every hyper-parameter combination in HP = {HP_1, HP_2, … HP_num} is trained on S_train and evaluated on the corresponding S_valid, the five-fold performance of each combination HP_j is averaged, and the best-performing combination HP_final is selected, the performance index being the F1 score. The selected hyper-parameters are then applied to the whole training-plus-validation set S_train-valid, and the corresponding test set S_test is used to evaluate the trained model. The hyper-parameters updated include the number of trees, the minimum number of samples per leaf node, and the out-of-bag importance.
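The nested five-fold scheme can be sketched as follows, with a toy scoring stub standing in for the random-forest training and F1 evaluation the patent uses:

```python
# Outer loop: five-fold split into train+validation vs. test.
# Inner loop: five-fold grid search over hyper-parameter candidates on
# the train+validation portion; the best averaged score picks HP_final,
# which is then evaluated on the held-out test fold.
import numpy as np

def five_folds(n):
    folds = np.array_split(np.arange(n), 5)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(5)]

def score(train_idx, valid_idx, hp):   # stand-in for fit + F1 score
    return 1.0 / (1.0 + abs(hp - 7))   # toy: best at hp == 7

grid = [3, 5, 7, 9]                    # e.g. candidate numbers of trees
n = 100
outer_scores = []
for tv_idx, test_idx in five_folds(n):            # outer loop
    mean = {hp: np.mean([score(tr, va, hp)        # inner loop
                         for tr, va in five_folds(len(tv_idx))])
            for hp in grid}
    hp_final = max(mean, key=mean.get)            # best averaged inner score
    outer_scores.append(score(tv_idx, test_idx, hp_final))

print(hp_final)  # 7 for this toy scoring function
```

The split logic mirrors the description; in practice the stub would be replaced by training a random forest on `tr` and computing its F1 score on `va`.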
Referring to FIG. 7, this example uses the above system to acquire data at two positions for 36 subjects, with each measurement lasting 10 s and a recognition resolution of 1 s, to construct a data set. One position is the depression between the thyroid cartilage and the cricoid cartilage (middle position); the other, which has already been used in swallowing studies, is the intersection of the horizontal line at the lower edge of the thyroid cartilage with the superior thyroid artery (side position). Only the waveforms of a single inertial channel and a single PPG channel are shown in the figure.
The measuring process is as follows:
1) The subject sits quietly, upper body upright, facing forward, neck in a natural state. The device is fixed on the neck with a buttoned elastic band, ensuring that the SFH7050 sits over the superior thyroid artery on the same horizontal line as the left (or right) side of the thyroid cartilage. The device is switched on, and the serial port is opened after the device establishes Bluetooth communication with the development board.
2) The subject holds 5 mL of drinking water, presses the button on the device, and swallows the water at any moment within the 10 s after the development board's serial-port indicator lamp turns on; a single measurement lasts 10 s (the lamp stays on throughout).
3) Step 2) was repeated 5 times.
4) For the 6th measurement, each subject remained still, keeping the posture as unchanged as possible.
5) After rest, each subject repeated the above 6 measurements, for a total of 2 groups.
6) The above measurements were repeated with the SFH7050 aligned to the middle position between the thyroid cartilage and the cricoid cartilage, again in 2 groups.
The two data set cases constructed are shown in table 1:
TABLE 1
In this example, the two data sets are each classified by the above method; the definitions of the recognition results and evaluation indices are shown in tables 2 and 3:
TABLE 2
TABLE 3
The recognition results of this example are shown in table 4, which gives the performance of the proposed method on the two position data sets.
TABLE 4
This embodiment shows that a comparatively simple data acquisition method can achieve swallowing behavior recognition with high performance.
It should be noted that the data segment length and segmentation method in the embodiment of the present invention include, but are not limited to, the above cases. The data processing part can also be applied to other swallowing acquisition equipment that provides a laryngeal inertial signal and a two-channel PPG signal.
The embodiments described in this specification merely illustrate implementations of the inventive concept and are intended for purposes of illustration only. The scope of the present invention should not be construed as limited to the particular forms set forth in the embodiments, but covers the equivalents that those skilled in the art may conceive on the basis of the inventive concept.
Claims (8)
1. A wearable swallowing behaviour recognition method, the method comprising the steps of:
step (1), train a swallowing behavior recognition model c_l1 for a given location l_1;
step (2), with a wearable swallowing-signal acquisition device at the designated position l_1, collect the laryngeal swallowing inertial signal I_t and the two-channel photoplethysmography (PPG) signal P_t = [p_1, p_2], where m is the number of inertial signal types, k is the dimension of each inertial signal, and the number of points in a single acquisition of each channel is N_d = f·t, where f is the sampling frequency and t is the single-acquisition duration;
step (3), preprocess the collected inertial signal I_t and the two-channel PPG data P_t separately to obtain a new inertial signal and the PPG motion component;
Step (3) comprises:
step (3-1), remove the noise outside the swallowing frequency band of the inertial signal to obtain the denoised inertial signal, each channel of which contains N long samples; the noise reduction method includes but is not limited to band-pass filtering; then take the magnitude of each inertial signal to obtain the preprocessed inertial signal;
step (3-2), remove the baseline drift of the two-channel PPG signal and denoise it by a method including but not limited to band-pass filtering, where the band-pass frequency band W is related to the swallowing frequency of interest: the normal swallowing frequency is about 1 Hz and the pass band lies around this frequency, yielding the filtered two-channel PPG signal;
step (3-3), apply "separation" to the denoised two-channel PPG signal: blind-source separation of the two PPG channels yields one signal with a larger proportion of motion component and one signal with a larger proportion of heart-rate component; blind-source separation approaches include but are not limited to independent component analysis;
step (3-4), apply "differential amplification" to the two separated PPG channels: multiply the two signals in the time domain to reduce the fluctuation of the non-swallowing segments and enlarge the difference of the swallowing segments, obtaining the PPG motion component;
step (4), divide the preprocessed inertial data and the PPG motion component into fine-grained data, where m is the number of inertial signal classes, r is the number of fine-grained short samples sliced from each long sample, and each term is data containing N long samples;
step (5), extract fine-grained features of all long-segment data according to the specified feature set F_C, where the first m columns are the features of the inertial signals and the last column is the feature of the PPG motion component;
step (6), input the extracted fine-grained features into the swallowing behavior recognition model c_l1 for swallowing behavior recognition, and splice the fine-grained recognition results into the recognition result of the complete original data, where N is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample.
2. The wearable swallowing behavior recognition method of claim 1, wherein step (1) comprises:
step (1-1), with a wearable swallowing-signal acquisition device at the designated position l_1, collect laryngeal swallowing data X_l1 = {I, P}, the swallowing data comprising an inertial signal I and a two-channel photoplethysmography (PPG) signal P = [p_1, p_2], where m is the number of inertial signal types, k is the dimension of each inertial signal, and the number of points in a single acquisition of each channel is N_d = f·t, where f is the sampling frequency and t is the single-acquisition duration;
step (1-2), preprocess the inertial data I and the two-channel PPG data P in the swallowing data X_l1 separately to obtain a new inertial signal and the PPG motion component p_motion;
step (1-3), divide the preprocessed inertial data and the PPG motion component p_motion into fine-grained data, with P_motion_s = [(p_motion_s)_1; (p_motion_s)_2; …; (p_motion_s)_r], where m is the number of inertial signal classes, r is the number of fine-grained short samples sliced from each long sample, and each term (i_s)_uv = [i_sp1, i_sp2, …, i_spM], (p_motion_s)_u = [p_motion_sp1, p_motion_sp2, …, p_motion_spM] is data containing M long samples;
step (1-4), determine the sample label of the swallowing data X_l1, where r is the number of fine-grained samples contained in each long sample and the i-th fine-grained label covers M long samples;
step (1-5), extract the fine-grained features of all long-segment data to obtain the feature set F_A, where M is the total number of long samples and H is the total number of features extracted from each fine-grained datum; together with the sample labels B_l1 this forms the training set s_1 = {F_A, B_l1};
step (1-6), input the training set s_1 = {F_A, B_l1} into the classification model for swallowing behavior recognition, and splice the fine-grained recognition results into the recognition result Y = [y_1 y_2 … y_M] of the complete original data, where M is the total number of long samples and r is the number of fine-grained short samples sliced from each long sample; compare the recognition result with the actual labels B_l1 in s_1 and update the model to obtain the swallowing behavior recognition model c_l1; select the features adopted by the recognition model to form the swallowing feature set F_C at this data acquisition position, where Q is the number of features in F_C and Q ≤ H; the classification model includes but is not limited to random forest.
3. The wearable swallowing behavior recognition method of claim 2, wherein in step (1-2) the preprocessing comprises denoising the inertial signals and taking the magnitude of each inertial signal to obtain the preprocessed inertial signal; filtering the two-channel PPG signal to obtain the filtered two-channel PPG signal P_d = [(p_d)_1, (p_d)_2]; applying "separation" to the denoised two-channel PPG signal P_d to obtain P_S = [(p_S)_1, (p_S)_2]; and applying "differential amplification" to the two-channel PPG signal P_S, finally obtaining the PPG motion component p_motion.
4. The wearable swallowing behavior recognition method of claim 2, wherein the designated data collection positions in steps (1) and (2) are L = {l_1, l_2, … l_n}, including but not limited to the depression between the thyroid cartilage and the cricoid cartilage, the intersection of the horizontal line at the lower edge of the thyroid cartilage with the superior thyroid artery, and other positions with better swallowing-signal quality, where n is the number of acquisition positions; thus, in step (1-5), n data sets S = {s_1, s_2, … s_n} may be constructed from the data acquired at each of the above positions, yielding different swallowing behavior recognition classifiers C = {c_l1, c_l2, …, c_ln}.
5. The wearable swallowing behavior recognition method of claim 4, wherein any neck location at which the thyroid or cricoid cartilage fluctuates and the PPG signal changes during swallowing falls within the scope of application L of the method, and the measured location l includes but is not limited to the side position, i.e., the intersection of the horizontal line at the lower edge of the thyroid cartilage with the superior thyroid artery.
6. A system implementing the wearable swallowing behavior recognition method of claim 1, wherein the system comprises a wearable swallowing-signal acquisition module and a data processing module.
7. The system of claim 6, wherein the wearable swallowing-signal acquisition module mainly comprises an inertial sensing submodule, a two-channel PPG sensing submodule, and a microcontroller submodule, the microcontroller submodule controlling the reading and writing of the sensing submodules and the packaging of data; the device supporting the signal acquisition module comprises a battery and/or a battery-charging interface, with a battery or rechargeable battery powering each submodule; the device is placed on the neck position required for swallowing-behavior data acquisition by means of an adhesive elastic band worn around the neck.
8. The system of claim 6, wherein the data processing module implements the wearable swallowing behavior recognition method of claim 1; the data processing module either runs as software on an upper computer, including a smartphone, smart tablet, or personal computer, exchanging data with the swallowing-signal acquisition device over a wireless or wired link including Bluetooth, WiFi, or a flex cable, or is integrated as a dedicated hardware chip circuit into the device carrying the swallowing-signal acquisition module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111179760.XA CN113729640B (en) | 2021-10-11 | 2021-10-11 | Wearable swallowing behavior identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113729640A CN113729640A (en) | 2021-12-03 |
CN113729640B true CN113729640B (en) | 2023-03-21 |
Family
ID=78726272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111179760.XA Active CN113729640B (en) | 2021-10-11 | 2021-10-11 | Wearable swallowing behavior identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113729640B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107518896A (en) * | 2017-07-12 | 2017-12-29 | 中国科学院计算技术研究所 | A kind of myoelectricity armlet wearing position Forecasting Methodology and system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170249445A1 (en) * | 2014-09-12 | 2017-08-31 | Blacktree Fitness Technologies Inc. | Portable devices and methods for measuring nutritional intake |
US10856812B2 (en) * | 2015-08-12 | 2020-12-08 | Valencell, Inc. | Methods and apparatus for detecting motion via optomechanics |
- 2021-10-11 CN CN202111179760.XA patent/CN113729640B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113729640A (en) | 2021-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bi et al. | AutoDietary: A wearable acoustic sensor system for food intake recognition in daily life | |
Mannini et al. | Activity recognition in youth using single accelerometer placed at wrist or ankle | |
Amft et al. | Methods for detection and classification of normal swallowing from muscle activation and sound | |
Zhang et al. | Diet eyeglasses: Recognising food chewing using EMG and smart eyeglasses | |
CN103687540B (en) | Use respiratory murmur amplitude spectrogram and the pitch contour diagnosis OSA/CSA of record | |
US20080243014A1 (en) | Breathing sound analysis for detection of sleep apnea/popnea events | |
CN103841888A (en) | Apnea and hypopnea detection using breath pattern recognition | |
CN103458777A (en) | Method and device for swallowing impairment detection | |
Krakow et al. | Instruments and techniques for investigating nasalization and velopharyngeal function in the laboratory: An introduction | |
KR102134154B1 (en) | Pattern Recognition System and Mehod of Ultra-Wideband Respiration Data Based on 1-Dimension Convolutional Neural Network | |
KR101967342B1 (en) | An exercise guide system by using wearable device | |
CN110200640A (en) | Contactless Emotion identification method based on dual-modality sensor | |
CN103263271A (en) | Non-contact automatic blood oxygen saturation degree measurement system and measurement method | |
CN113520343A (en) | Sleep risk prediction method and device and terminal equipment | |
Ouyang et al. | An asymmetrical acoustic field detection system for daily tooth brushing monitoring | |
WO2013086615A1 (en) | Device and method for detecting congenital dysphagia | |
WO2020165066A1 (en) | Methods and devices for screening swallowing impairment | |
CN113576475B (en) | Deep learning-based contactless blood glucose measurement method | |
US11766210B2 (en) | Methods and devices for determining signal quality for a swallowing impairment classification model | |
CN113729640B (en) | Wearable swallowing behavior identification method and system | |
CN106419884B (en) | A kind of rate calculation method and system based on wavelet analysis | |
FR3064463A1 (en) | METHOD FOR DETERMINING AN ENSEMBLE OF AT LEAST ONE CARDIO-RESPIRATORY DESCRIPTOR OF AN INDIVIDUAL DURING ITS SLEEP AND CORRESPONDING SYSTEM | |
CN115886720A (en) | Wearable eyesight detection device based on electroencephalogram signals | |
CN215349053U (en) | Congenital heart disease intelligent screening robot | |
CN110141266B (en) | Bowel sound detection method based on wearable body sound capture technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||