CN114098679B - Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing - Google Patents

Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Info

Publication number
CN114098679B
CN114098679B
Authority
CN
China
Prior art keywords
vital sign
waveform
deep learning
radio frequency
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111665367.1A
Other languages
Chinese (zh)
Other versions
CN114098679A (en)
Inventor
陈哲 (Chen Zhe)
罗骏 (Luo Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sino Singapore International Joint Research Institute
Original Assignee
Sino Singapore International Joint Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sino Singapore International Joint Research Institute filed Critical Sino Singapore International Joint Research Institute
Priority to CN202111665367.1A priority Critical patent/CN114098679B/en
Publication of CN114098679A publication Critical patent/CN114098679A/en
Application granted granted Critical
Publication of CN114098679B publication Critical patent/CN114098679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024Detecting, measuring or recording pulse rate or heart rate
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816Measuring devices for examining respiratory frequency

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Physiology (AREA)
  • Evolutionary Computation (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Cardiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pulmonology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing. Building on radio frequency sensing and deep contrastive learning, the method recovers vital sign monitoring waveforms in a contactless manner and constructs a neural network on an encoder-decoder model. Compared with conventional models, it avoids the loss of waveform details and can recover fine-grained vital sign waveforms more accurately while the subject is moving, further improving the robustness of vital sign monitoring against motion. In addition, because the proposed waveform recovery method adopts a unified data format, it can be deployed on almost any type of existing commercial-grade radar, can adapt to different application requirements, and is independent of the underlying hardware.

Description

Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing
Technical Field
The invention relates to the technical field of artificial intelligence deep learning, in particular to a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing.
Background
Vital signs are among the important indicators of human health, especially those related to heartbeat and respiration, and are representative indicators for assessing human physiological and psychological states. Current vital sign monitoring methods can be divided into two types: contact and non-contact. Contact vital sign monitoring mainly relies on contact sensors, including smart wearables, medical sensors and the like. Non-contact vital sign detection acquires the subject's vital sign information directly at a distance, so compared with contact detection it offers a better user experience and a better application prospect. With the development of non-contact sensing technology, non-contact vital sign monitoring is gradually moving toward practical application. However, most non-contact vital sign monitoring at the present stage requires the subject to remain relatively still; keeping still for a long time is uncomfortable and psychologically stressful for the subject, which makes continuous monitoring difficult. Therefore, how to perform accurate vital sign monitoring under motion and improve the robustness of non-contact vital sign monitoring against motion has become a problem to be solved.
In radio-frequency-sensing-based vital sign monitoring, a radio frequency source transmits a signal toward the subject, and the micro-motions caused by vital signs on the subject's body surface modulate the amplitude and phase of the reflected signal; in other words, the subject's vital sign information is superimposed on the reflected signal, so the vital signs can be detected by analyzing and processing that reflection. While the subject remains relatively still, these signals can be regarded as linear superpositions, and existing linear waveform separation techniques can separate them successfully. However, when the subject moves normally, the weak vital sign signals are severely disturbed by, and may even be submerged in, the strong body motion; the radio frequency reflections affected by both body motion and vital sign motion exhibit complex statistical characteristics, and such nonlinear combinations cannot easily be separated by existing waveform separation techniques, so accurate vital sign parameters cannot be obtained. In the course of research on vital sign monitoring, the inventors found that the difficulty of separating waveforms during motion can be addressed by deep contrastive learning: this self-supervised method requires no ground truth during training and can distinguish vital signs from body motion by contrasting signal features. That result is described in detail in the patent "Vital sign monitoring action removal method based on deep learning and radio frequency sensing". The motion removal technique can largely suppress the influence of body motion, but some residual noise still remains in the vital sign signals, and further recovery processing is needed to obtain more accurate, fine-grained vital sign waveforms. The loss functions adopted by conventional neural network models are mostly based on the L1 or L2 norm, which usually causes detail components to be lost when respiration and heartbeat waveforms are recovered, making the waveforms too smooth and close to sine waves, with too large a difference from the actual respiration waveform. Therefore, how to lose as little detail as possible during waveform recovery has become an urgent problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects in the prior art by providing a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing, which eliminates the influence of motion on recovering fine-grained vital sign waveforms. In current waveform recovery techniques, the commonly used neural network loss functions are often based on the L1 or L2 norm, which usually causes waveform detail components to be lost when respiration and heartbeat waveforms are recovered, so that the waveforms become too smooth, approximate sine waves, and differ too much from the actual respiration waveform. The proposed vital sign monitoring waveform recovery method overcomes this defect, improves the robustness of vital sign monitoring against motion, and has practical value.
The aim of the invention can be achieved by adopting the following technical scheme:
a method for recovering a vital sign monitoring waveform based on deep learning and radio frequency sensing, the method for recovering a vital sign monitoring waveform comprising the steps of:
s1, acquiring vital sign data to be processed, wherein the vital sign data to be processed is vital sign data obtained by processing radar radio frequency reflection signals through a waveform separation technology, and the specific separation technology is specifically described in the patent of 'vital sign monitoring action removal method based on deep learning and radio frequency sensing'.
The radio frequency induction radar transmits radio frequency signals to a testee, and micro-motion caused by vital signs on the body surface of the testee can influence the amplitude and the phase of the reflected signals, namely vital sign information of the testee is superimposed in the reflected signals, so that the vital signs of the testee can be detected by analyzing and processing the reflected signals. According to different motion types, each testee carries out data acquisition on radar echoes in a slow time dimension for a preset time length at a sampling frequency of 512Hz, and vital sign data obtained after the acquired data are processed by a waveform separation technology is vital sign data to be processed.
The radar may be Novelda's IR-UWB radar, an Infineon FMCW radar, or a TI FMCW radar. When a TI FMCW radar is used to collect data, a data acquisition interface is added, and the data collected by the radar are processed through this interface for subsequent use; the data acquisition interface consists of a DCA1000 module and is used to capture real-time data.
Ground truth values are obtained by collecting vital sign waveforms in all scenarios with the wearable device NeuLog, and are used for subsequent training on sample data and for discrimination and comparison.
S2, preparing a training sample set and a test sample set:
preprocessing the data to be processed includes: performing an FFT (fast Fourier transform) on the vital sign data obtained in step S1, calculating the ratio of the spectral peak to the remaining spectrum, and performing a hypothesis test with an empirical threshold to obtain the vital sign waveform data. The acquired vital sign waveform data are then divided in a certain proportion to construct the training samples and test samples used to train the neural network. For example, 30% of the vital sign waveform data are used as training samples for offline training of the deep learning module, and the remaining 70% are used as test samples for online recovery of vital sign waveforms by the trained module.
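As an illustration only, a minimal NumPy sketch of this selection and splitting step follows; the threshold value of 2.5 and the random 30%/70% split are assumptions made for the example, since the patent only specifies an empirical threshold and a 30%/70% division.

import numpy as np

def peak_to_rest_ratio(segment, fs=512):
    """FFT a candidate segment and compare its spectral peak with the mean of the rest."""
    spectrum = np.abs(np.fft.rfft(segment - np.mean(segment)))
    peak_idx = np.argmax(spectrum[1:]) + 1              # skip the DC bin
    rest = np.delete(spectrum[1:], peak_idx - 1)
    return spectrum[peak_idx] / (np.mean(rest) + 1e-12)

def select_vital_sign_segments(segments, threshold=2.5):
    """Keep only segments whose periodicity passes the empirical threshold."""
    return [s for s in segments if peak_to_rest_ratio(s) > threshold]

def split_train_test(segments, train_ratio=0.3, seed=0):
    """30% of the data for offline training, the remaining 70% for online testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(segments))
    cut = int(train_ratio * len(segments))
    return [segments[i] for i in idx[:cut]], [segments[i] for i in idx[cut:]]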
S3, setting a deep learning neural network:
the deep learning neural network adopts an encoder-decoder model and consists of an encoder, a decoder and a discriminator; the vital sign data to be processed pass through the encoder, the decoder and the discriminator in turn to complete waveform recovery. The encoder and decoder use the same kernel structure: three convolutional neural network kernels arranged in parallel, with convolution sizes of 3×3, 7×7 and 11×11 respectively, stride = 1, padding = 0 and dilation rate = 1. The outputs of the three convolution kernels of different sizes are fed as inputs to a max pooling layer whose kernel size is 2. The discriminator consists of three convolution layers whose input size matches the waveform length; a Markov discriminator based on a conditional adversarial network performs the discrimination, taking the ground truth collected by the wearable device NeuLog and the output of the encoder-decoder as its two inputs, performing a sliding-window convolution between the two input waveforms, and aggregating the convolution outputs to produce the discrimination result, thereby completing waveform recovery.
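The kernel structure described above maps onto standard convolution layers. Below is a minimal TensorFlow/Keras sketch (the framework named later in the embodiment); the filter count, activation, stacking depth, input length, and the cropping and concatenation of the three branch outputs before pooling are assumptions for illustration, and the 3×3/7×7/11×11 kernels are interpreted here as 1-D convolutions of width 3, 7 and 11 since the input is a waveform.

import tensorflow as tf
from tensorflow.keras import layers

def multi_kernel_block(filters=32):
    """One encoder/decoder kernel block: three parallel convolutions of width 3, 7 and 11
    (stride 1, 'valid' padding), cropped to a common length, concatenated and max-pooled."""
    def block(x):
        branches = [layers.Conv1D(filters, k, strides=1, padding='valid',
                                  activation='relu')(x)
                    for k in (3, 7, 11)]
        # 'valid' padding gives each branch a different length; crop to the shortest
        # so the branches can be concatenated (the merging strategy is assumed).
        min_len = min(int(b.shape[1]) for b in branches)
        branches = [layers.Cropping1D((0, int(b.shape[1]) - min_len))(b)
                    for b in branches]
        x = layers.Concatenate()(branches)
        return layers.MaxPooling1D(pool_size=2)(x)   # pooling kernel size 2
    return block

def build_encoder(input_len=512 * 20, depth=3, filters=32):
    """Stack a few multi-kernel blocks; the decoder mirrors this structure."""
    inp = layers.Input(shape=(input_len, 1))
    x = inp
    for _ in range(depth):
        x = multi_kernel_block(filters)(x)
    return tf.keras.Model(inp, x, name='encoder')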
S4, training and evaluating the deep learning neural network:
the training sample set is input into the constructed neural network model. First, pre-training is performed in an unsupervised manner using a feature extraction method, and the parameters and weights of the neural network are initialized. Then the Adam (adaptive moment estimation) algorithm, a first-order optimization method for stochastic objective functions, is used to minimize the loss function and update the parameters and weights of the neural network. During training, one batch of training data is fed at a time until all sample data have been input, at which point training ends and the optimal parameters and weights are obtained. The parameters to be set include: the number of samples selected for one training step (the batch size), the learning rate, the momentum, the decay step and the decay factor.
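A schematic Adam training loop for this step is sketched below, assuming TensorFlow/Keras as used in the embodiment; the epoch count, learning rate and L1-style placeholder loss are illustrative only (the loss actually used also involves the discriminator described in step S3).

import tensorflow as tf

def train_step(model, optimizer, loss_fn, batch_x, batch_y):
    """One Adam update: forward pass, loss, backpropagated gradients, weight update."""
    with tf.GradientTape() as tape:
        pred = model(batch_x, training=True)
        loss = loss_fn(batch_y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def fit(model, dataset, epochs=10, learning_rate=1e-3):
    optimizer = tf.keras.optimizers.Adam(learning_rate)
    loss_fn = tf.keras.losses.MeanAbsoluteError()   # placeholder reconstruction loss
    for _ in range(epochs):
        for batch_x, batch_y in dataset:            # one batch of training data at a time
            train_step(model, optimizer, loss_fn, batch_x, batch_y)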
S5, applying a trained encoder-decoder model generated based on deep learning to complete waveform recovery:
and the test sample completes waveform recovery through a trained encoder-decoder model based on deep learning and a Markov discriminator to obtain fine-granularity vital sign signals. The vital sign signals comprise a respiration signal and/or a heart rate signal, and may also provide real-time output and display of a respiration waveform and/or a heart rate waveform as desired.
Further, in step S3, the deep contrastive learning neural network may also be constructed based on an MLP neural network model.
Further, in step S4, the mean square error (MSE) may be used as the loss function.
Further, in step S4, the Xavier method may also be applied when training the neural network to initialize its parameters and weights.
Further, in step S4, a mini-batch gradient descent method may also be applied to minimize the loss function and update the neural network parameters and weights.
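For reference, these alternatives correspond directly to standard framework options; a brief sketch under the assumption that TensorFlow/Keras is used, with illustrative layer widths and learning rate:

import tensorflow as tf
from tensorflow.keras import layers, initializers

# MLP alternative for the network body (layer sizes are illustrative)
mlp = tf.keras.Sequential([layers.Dense(256, activation='relu'), layers.Dense(1)])
# Xavier (Glorot) initialization for convolution weights
xavier_conv = layers.Conv1D(32, 7, padding='same',
                            kernel_initializer=initializers.GlorotUniform())
mse_loss = tf.keras.losses.MeanSquaredError()        # MSE loss function
sgd = tf.keras.optimizers.SGD(learning_rate=1e-3)    # mini-batch gradient descent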
Compared with the prior art, the invention has the following advantages and effects:
compared with conventional models, the invention avoids the loss of waveform details; by using a Markov discriminator based on a conditional adversarial network it can recover fine-grained vital sign waveforms more accurately, further improving the robustness of vital sign monitoring against motion. In addition, because the proposed vital sign waveform recovery method adopts a unified data format, it can be deployed on almost any type of existing commercial-grade radar and can adapt to different application requirements, so the proposed waveform recovery method is independent of the underlying hardware. Test results on three mainstream radar platforms show that the proposed method can accurately recover fine-grained vital sign waveforms while the subject is in motion.
Drawings
FIG. 1 is a general flow chart of vital sign monitoring waveform recovery based on deep learning and radio frequency sensing as disclosed in this embodiment;
FIG. 2 is a graph showing the comparison of the heartbeat waveform recovered by a subject walking on a treadmill at a speed of 1m/s with the ground truth value disclosed in this example;
FIG. 3 is a graph showing the heartbeat waveform recovered when the subject is stationary in comparison with the ground truth value disclosed in this example;
FIG. 4 is a comparison of various waveforms of the subject disclosed in this example while typing;
FIG. 5 is a graph comparing various waveforms of a subject disclosed in this example while walking on a treadmill at a speed of 1 m/s;
FIG. 6 is a graph of relative error of respiratory and heartbeat frequency versus ground truth for the present embodiment disclosed;
FIG. 7 is a graph of cosine similarity of respiratory and heartbeat waveforms with respect to ground truth values as disclosed in this embodiment;
fig. 8 is a neural network training flowchart disclosed in this embodiment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
To deepen the understanding of the invention, the destructive influence of body motion on vital sign extraction and an effectiveness evaluation of the proposed waveform recovery technique are introduced below; it should be noted that this description is only intended to aid understanding of the invention and not to limit it.
(1) Destructive effects of body movement on extraction of vital signs
To better understand the destructive influence of body motion on vital sign extraction, heartbeat waveform recovery was studied for a subject walking on a treadmill at a speed of 1 m/s and for the same subject standing still, using the same radar to acquire the radio frequency data and the same waveform separation and recovery techniques to process the signals. The results are shown in fig. 2 and fig. 3: fig. 2 compares the heartbeat waveform recovered while the subject walked on the treadmill at 1 m/s with the ground truth, and fig. 3 compares the heartbeat waveform recovered while the subject was stationary with the ground truth. As can be seen, the recovered waveform is essentially consistent with the ground truth when the subject is stationary, while it differs significantly from the ground truth when walking on the treadmill at 1 m/s; body motion therefore interferes with waveform recovery. In fig. 2 and fig. 3, the ground truth is the heartbeat waveform obtained by applying the waveform recovery technique to heartbeat data acquired by the wearable device and is regarded as the true value; the RF waveform is the radio frequency signal acquired by the IR-UWB (impulse-radio ultra-wideband) radar; and the recovered waveform is the heartbeat waveform obtained by applying the waveform recovery technique to the radio frequency signal acquired by the IR-UWB radar.
(2) The invention provides a validity assessment of waveform recovery technology
To evaluate the effectiveness of the proposed waveform recovery technique during motion, the recovered heartbeat and respiration waveforms are given for a subject typing and walking on a treadmill. Ground truth values are obtained with the wearable device NeuLog, which measures photoplethysmography (PPG) at the earlobe or fingertip and can be regarded as providing the true heartbeat and respiration values; the radio frequency data are collected by an IR-UWB (X4M05) radar; and a baseline is recovered with the recent RF-SCG radio-frequency radar cardiogram algorithm from the Massachusetts Institute of Technology. The results are shown in fig. 4 and fig. 5, which compare the various waveforms while the subject types and while the subject walks on a treadmill at a speed of 1 m/s, respectively. Each figure includes a comparison of the recovered respiration waveform with the ground truth, and a comparison of the recovered heartbeat waveform with the ground truth and the baseline, where RF is the radio frequency waveform, rBreath and gBreath are the recovered respiration waveform and its ground truth, and rHeart, gHeart and bHeart are the recovered heartbeat waveform, its ground truth and the baseline, respectively. As can be seen from fig. 4 and fig. 5, whether the subject walks on the treadmill at 1 m/s or types, the waveform recovered by the proposed technique matches the true value, showing robustness to motion, while the baseline recovered by the MIT RF-SCG algorithm essentially fails to recover the correct waveform. Comparing the ground truth in fig. 4 and fig. 5 also shows that the heartbeat waveform obtained by the wearable device even loses some heartbeat cycles, indicating that heartbeat and respiration monitoring with a wearable device can itself be disturbed by motion. The waveforms in fig. 4 and fig. 5 intuitively demonstrate the effectiveness of waveform recovery, and the proposed technique performs well.
The overall performance of the waveform recovery technique can be measured by the relative error and the cosine similarity. The relative error represents the accuracy of the respiration and heartbeat frequencies; the relative error of the respiration and heartbeat frequencies with respect to the ground truth is shown in fig. 6. The cosine similarity, i.e. the normalized correlation coefficient, measures the similarity between a recovered waveform and its corresponding ground truth waveform; the cosine similarity of the respiration and heartbeat waveforms with respect to the ground truth is shown in fig. 7. As can be seen from fig. 6 and fig. 7, the proposed waveform recovery technique has a small relative error and a high similarity between the recovered waveform and the true value, and its overall performance is good.
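Both metrics are straightforward to compute; a small NumPy sketch is given below. Removing the mean in the cosine similarity reflects the "normalized correlation coefficient" reading and is an interpretation, not a detail stated in the patent.

import numpy as np

def relative_error(estimated_rate, true_rate):
    """Relative error of a recovered respiration/heartbeat frequency against ground truth."""
    return abs(estimated_rate - true_rate) / true_rate

def cosine_similarity(recovered, ground_truth):
    """Normalized correlation coefficient between a recovered waveform and its ground truth."""
    r = recovered - np.mean(recovered)
    g = ground_truth - np.mean(ground_truth)
    return float(np.dot(r, g) / (np.linalg.norm(r) * np.linalg.norm(g) + 1e-12))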
The invention provides a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing. The radar radio frequency echo signals containing vital sign information are processed by the waveform separation technique to obtain vital sign signals; the vectors obtained by preprocessing these signals are taken as the vital sign data to be processed, and a certain amount of this data is selected for processing. The processed samples are divided into training samples and test samples in a certain proportion: the training samples are used as the input of the whole model to update and adjust its weights and parameters, and the test samples are input into the trained model to complete waveform recovery.
In practicing the invention, the radar is preferably placed in front of the subject, since the blood volume pulse (BVP) associated with the heartbeat is likely to originate from the common carotid artery, while the respiration signal mainly depends on chest motion. It has been shown that aiming the radar at the side of the body largely misses the respiration signal, but not the heartbeat signal.
Example 2
The embodiment discloses a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing. The implementation of the invention will be described in detail below using IR-UWB radar X4M05 as an example.
To obtain enough sample data, a total of 12 healthy subjects, 6 men and 6 women, were recruited. The experiments in this example followed the protocol of our institute's IRB ethical review board. Each subject was asked to maintain a quasi-static sitting posture in a daily living environment, or to perform 7 common human actions: using a mobile phone, typing, rocking the body, shaking the legs, walking on a treadmill, standing up/sitting down, and turning over (while sleeping). The wearable device NeuLog was used to collect ground truth values in all scenarios. The radio frequency sensing radar was placed 0.5 to 2 m from the subject; the precise distance may vary from person to person. Data were collected over different time spans while keeping the total time for each subject approximately the same, including one minute of walking on a treadmill, one hour of typing, and one night of sleep monitoring; the 12 subjects contributed a total of 80 hours of RF and ground truth recordings, including about 330k heartbeat cycles and 68k respiration cycles. 30% of the data were used as training samples for offline training of the deep learning module, and the remaining 70% were used as test samples for online recovery of vital sign waveforms with the trained module. To keep the split reasonable, the training samples were collected from 4 subjects (2 female, 2 male) over 24 hours, with the body motions of 3 of these subjects including typing, rocking and standing/sitting; the test samples come from all subjects and all body motions.
Experiments were carried out with the training and test samples for the different body motions on a PC running Python 3.7 and TensorFlow 2.0, equipped with an i9-10900KF (3.7 GHz) CPU, 16 GB of DDR4 RAM and a GeForce RTX 2070 graphics card. The clocks of the hardware components are synchronized over Ethernet using the precision time protocol. Novelda's IR-UWB radar X4M05 operates at 7.3 or 8.7 GHz with a bandwidth of 1.5 GHz; it has a pair of tx-rx (transmitter, receiver) antennas with a field of view (FoV) of 120° in both azimuth and elevation.
The whole implementation flow is shown in fig. 1, and the specific implementation steps are as follows:
s1, acquiring vital sign data to be processed, wherein the vital sign data to be processed is obtained by processing radar radio frequency reflection signals through a waveform separation technology.
The radio frequency induction radar is placed in a range of 0.5 to 2m away from the testee, data acquisition is carried out on the testee for a preset time period according to different motion types, and vital sign data obtained after the acquired data are processed by a waveform separation technology are to-be-processed data.
In this embodiment, the collected data include, for each of the 12 subjects, one minute of walking on a treadmill, one hour of typing and one night of sleep monitoring, totaling 80 hours and including about 330k heartbeat cycles and 68k respiration cycles; the ground truth values were collected by the wearable device NeuLog.
S2, preparing a training sample set and a test sample set:
and processing the vital sign data to be processed to generate a training sample set and a testing sample set. Since the periodicity of respiration and heartbeat is much higher than that of other waveforms, we perform FFT transformation on each waveform in the data to be processed, calculate the ratio of peak to remainder, use an empirical threshold for hypothesis testing, in this embodiment, the empirical threshold is about 2.1-3.0, and select vital sign waveforms.
S3, setting a deep learning neural network:
a deep learning-based encoder-decoder model is constructed to recover vital sign signal waveforms, and the model adopts a parallel mode of three convolution neural network cores of 3×3, 7×7 and 11×11, and a Markov discriminator is added, so that the multi-resolution capability and higher robustness are provided. The radar radio frequency echo signals containing vital sign information are subjected to waveform separation technology to obtain vital sign signals, vectors obtained by preprocessing the vital sign signals are used as vital sign data to be processed, and a certain amount of vital sign data to be processed are selected for processing. The processed samples are divided into training samples and test samples, the training samples are used as the input of an encoder-decoder model, the weights and parameters of the model are updated and adjusted, the test samples are input into the trained model, and waveforms are output. And carrying out sliding window convolution between the waveform and the ground truth waveform acquired by NeuLog, and aggregating all feedback to generate discrimination, so as to complete waveform recovery and obtain fine-granularity vital sign signals.
S4, training and evaluating the deep learning neural network:
the training sample set is input into the constructed deep learning network for training; the training flow is shown in fig. 8. First, unsupervised pre-training is carried out using a whole-model feature extraction method and the neural network parameters and weights are initialized; then the Adam adaptive moment estimation algorithm is applied to minimize the loss function and update the network parameters and weights until the optimized values are reached. When training the neural network, forward propagation and backward propagation are interdependent: the parameters and weights of each layer are computed and stored in forward-propagation order, i.e. from the input layer to the output layer, while the parameter gradients are computed in backward-propagation order, i.e. from the output layer to the input layer. One batch of training data is fed at a time until all sample data have been input, at which point training is finished. In this embodiment, the number of samples selected for one training step is set to 512, and the learning rate, momentum, decay step and decay factor are set to 0.001, 0.9, 5e5 and 0.999, respectively.
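The reported hyperparameters map naturally onto a Keras optimizer configuration; a sketch under the assumptions that the decay step and decay factor describe an exponential learning-rate schedule and that "momentum" corresponds to Adam's beta_1 (both interpretations, not statements from the patent):

import tensorflow as tf

BATCH_SIZE = 512                                   # samples per training step
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,                    # learning rate 0.001
    decay_steps=int(5e5),                          # decay step 5e5
    decay_rate=0.999)                              # decay factor 0.999
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule, beta_1=0.9)  # momentum 0.9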
S5, applying a deep learning neural network to complete waveform recovery:
the test samples are passed through the trained encoder-decoder model to output the final waveforms, completing waveform recovery and recovering fine-grained vital sign signals; the vital signs include a respiration signal and/or a heart rate signal, and the respiration waveform and/or heart rate waveform can be output and displayed in real time as required.
Heartbeat and respiration can each be recovered with this model. Data of a preset duration are intercepted from the vital sign signal waveform, and two consecutive waveforms of equal duration are taken each time as the input of the model; the two waveforms partially overlap, for example, the last 25% of the first waveform is the first 25% of the second waveform. The continuous waveform output by the model is the desired heartbeat or respiration waveform; in this embodiment, the preset duration is 20 seconds.
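A small NumPy sketch of this 20-second, 25%-overlap windowing, together with one simple way of stitching the model outputs back into a continuous waveform; how the overlap region is merged is not specified in the patent, so dropping the duplicated head of each later window is an illustrative choice.

import numpy as np

def make_overlapping_windows(signal, fs=512, window_s=20, overlap=0.25):
    """Cut the separated vital-sign signal into 20 s windows where the last 25%
    of one window is the first 25% of the next."""
    win = int(window_s * fs)
    step = int(win * (1.0 - overlap))
    return [signal[start:start + win]
            for start in range(0, len(signal) - win + 1, step)]

def stitch_windows(windows, overlap=0.25):
    """Rebuild a continuous waveform by dropping the overlapping head of every
    window after the first (cross-fading would be an equally valid choice)."""
    if not windows:
        return np.array([])
    head = int(len(windows[0]) * overlap)
    pieces = [windows[0]] + [w[head:] for w in windows[1:]]
    return np.concatenate(pieces)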
In particular, the radar in the above embodiment may also be Infineon's FMCW radar Position2Go or TI's FMCW radar IWR1443BOOST. Infineon's Position2Go operates at 24 GHz with a 200 MHz bandwidth and has 1 tx antenna and 2 rx antennas covering 76° in azimuth and 19° in elevation. TI's IWR1443BOOST operates at 77 GHz with a maximum bandwidth of 4 GHz and has 3 tx antennas and 4 rx antennas covering 56° in azimuth and 28° in elevation. When TI's IWR1443BOOST is selected, the radar's USB serial port rate is too low to support high-rate transmission of radar data, so a DCA1000 module is added to achieve high-speed data acquisition. This module receives data from the IWR1443BOOST radar through an LVDS high-speed interface and sends them to the PC through a USB serial port; the module's driver can be developed on a Raspberry Pi in C/C++.
In particular, in the above embodiment, the deep contrastive learning neural network may also be constructed based on an MLP neural network model, the Xavier method may be applied to initialize the neural network parameters and weights, and the mean square error (MSE) may be used as the loss function.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.

Claims (7)

1. The method for recovering the vital sign monitoring waveform based on deep learning and radio frequency sensing is characterized by comprising the following steps of:
s1, acquiring vital sign data to be processed: the vital sign data to be processed are obtained after the radar radio frequency reflected signal is processed by a waveform separation technology;
s2, preparing a training sample set and a test sample set: preprocessing the vital sign data to be processed to generate a training sample set and a test sample set;
s3, setting a deep learning neural network: the deep learning neural network adopts an encoder-decoder model and consists of an encoder, a decoder and a discriminator, and the vital sign data to be processed pass through the encoder, the decoder and the discriminator in turn to complete waveform recovery; the kernel structure of the encoder is as follows: the encoder is composed of three convolutional neural network kernels in parallel, with convolution sizes of 3×3, 7×7 and 11×11 respectively; the outputs of the three convolution kernels are sent to a max pooling layer whose kernel size is 2; the decoder uses the same kernel structure as the encoder;
s4, training and evaluating the deep learning neural network: inputting the training sample set into a deep learning network, performing unsupervised learning by applying a feature extraction method, and initializing parameters and weights of a neural network; minimizing a loss function by applying an adam adaptive moment estimation algorithm, updating parameters and weights of the neural network, and finishing training;
s5, applying a deep learning neural network to complete waveform recovery: the test sample completes waveform recovery through a trained deep learning neural network, and a fine-granularity vital sign signal is recovered, wherein the vital sign signal comprises a respiratory signal and/or a heart rate signal.
2. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing according to claim 1, wherein the preprocessing in step S2 includes performing an FFT (fast Fourier transform) on the vital sign data to be processed, calculating the ratio of the spectral peak to the remaining spectrum, and performing a hypothesis test with an empirical threshold to obtain the vital sign waveform data.
3. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing according to claim 1, wherein the discriminator in step S3 is a Markov discriminator based on a conditional adversarial network and is composed of three convolution layers.
4. A method of recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing as claimed in any one of claims 1 to 3, wherein the clocks between the hardware components are synchronized using ethernet based on a precision time protocol.
5. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing according to any one of claims 1 to 3, wherein said step S5 further comprises: outputting and displaying the respiration waveform and/or the heart rate waveform in real time.
6. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing according to any one of claims 1 to 3, wherein the radar in step S2 is Novelda's IR-UWB radar, Infineon's FMCW radar or TI's FMCW radar.
7. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency sensing according to claim 6, wherein when the radar is a TI FMCW radar, a data acquisition interface is further added, and the data acquisition interface is composed of a DCA1000 module for capturing real-time data.
CN202111665367.1A 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing Active CN114098679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111665367.1A CN114098679B (en) 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111665367.1A CN114098679B (en) 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Publications (2)

Publication Number Publication Date
CN114098679A CN114098679A (en) 2022-03-01
CN114098679B true CN114098679B (en) 2024-03-29

Family

ID=80363648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111665367.1A Active CN114098679B (en) 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Country Status (1)

Country Link
CN (1) CN114098679B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116098602B (en) * 2023-01-16 2024-03-12 中国科学院软件研究所 Non-contact sleep respiration monitoring method and device based on IR-UWB radar

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180144241A1 (en) * 2016-11-22 2018-05-24 Mitsubishi Electric Research Laboratories, Inc. Active Learning Method for Training Artificial Neural Networks
EP3687392A4 (en) * 2017-10-17 2021-07-07 Whoop, Inc. Applied data quality metrics for physiological measurements
US11229404B2 (en) * 2017-11-28 2022-01-25 Stmicroelectronics S.R.L. Processing of electrophysiological signals
JP7106307B2 (en) * 2018-03-14 2022-07-26 キヤノンメディカルシステムズ株式会社 Medical image diagnostic apparatus, medical signal restoration method, medical signal restoration program, model learning method, model learning program, and magnetic resonance imaging apparatus
US20210093203A1 (en) * 2019-09-30 2021-04-01 DawnLight Technologies Systems and methods of determining heart-rate and respiratory rate from a radar signal using machine learning methods
US20210378597A1 (en) * 2020-06-04 2021-12-09 Biosense Webster (Israel) Ltd. Reducing noise of intracardiac electrocardiograms using an autoencoder and utilizing and refining intracardiac and body surface electrocardiograms using deep learning training loss functions

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090121450A (en) * 2008-05-22 2009-11-26 (주)유비즈플러스 Bio radar
CN104605831A (en) * 2015-02-03 2015-05-13 南京理工大学 Respiration and heartbeat signal separation algorithm of non-contact vital sign monitoring system
CN105816163A (en) * 2016-05-09 2016-08-03 安徽华米信息科技有限公司 Method, device and wearable equipment for detecting heart rate
CN108564611A (en) * 2018-03-09 2018-09-21 天津大学 A kind of monocular image depth estimation method generating confrontation network based on condition
CN109363652A (en) * 2018-09-29 2019-02-22 天津惊帆科技有限公司 PPG signal reconfiguring method and equipment based on deep learning
CN113439218A (en) * 2019-02-28 2021-09-24 谷歌有限责任公司 Smart device based radar system for detecting human vital signs in the presence of body motion
CN109965858A (en) * 2019-03-28 2019-07-05 北京邮电大学 Based on ULTRA-WIDEBAND RADAR human body vital sign detection method and device
CN109864714A (en) * 2019-04-04 2019-06-11 北京邮电大学 A kind of ECG Signal Analysis method based on deep learning
CN111046824A (en) * 2019-12-19 2020-04-21 上海交通大学 Time series signal efficient denoising and high-precision reconstruction modeling method and system
CN110974217A (en) * 2020-01-03 2020-04-10 苏州大学 Dual-stage electrocardiosignal noise reduction method based on convolution self-encoder
EP3885786A1 (en) * 2020-03-27 2021-09-29 Origin Wireless, Inc. Method, apparatus, and system for wireless vital monitoring using high frequency signals
CN111568396A (en) * 2020-04-13 2020-08-25 广西万云科技有限公司 V2iFi is based on vital sign monitoring technology in compact radio frequency induction's car
CN112508110A (en) * 2020-12-11 2021-03-16 哈尔滨理工大学 Deep learning-based electrocardiosignal graph classification method
CN112656395A (en) * 2020-12-16 2021-04-16 问境科技(上海)有限公司 Method and system for detecting change trend of vital signs of patient based on microwave radar
CN112754431A (en) * 2020-12-31 2021-05-07 杭州电子科技大学 Respiration and heartbeat monitoring system based on millimeter wave radar and lightweight neural network
CN112754441A (en) * 2021-01-08 2021-05-07 杭州环木信息科技有限责任公司 Millimeter wave-based non-contact heartbeat detection method
CN113126050A (en) * 2021-03-05 2021-07-16 沃尔夫曼消防装备有限公司 Life detection method based on neural network
CN112998701A (en) * 2021-03-27 2021-06-22 复旦大学 Vital sign detection and identity recognition system and method based on millimeter wave radar
CN113128772A (en) * 2021-04-24 2021-07-16 中新国际联合研究院 Crowd quantity prediction method and device based on sequence-to-sequence model
CN113317798A (en) * 2021-05-20 2021-08-31 郑州大学 Electrocardiogram compressed sensing reconstruction system based on deep learning
CN113729641A (en) * 2021-10-12 2021-12-03 南京润楠医疗电子研究院有限公司 Non-contact sleep staging system based on conditional countermeasure network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on human vital sign and multi-target detection algorithms based on convolutional neural networks; Li Jianhan; China Master's Theses Full-text Database; I136-633 *
Li Gongfa. Artificial Intelligence and Computational Intelligence and Their Applications. Wuhan: Huazhong University of Science and Technology Press, 2020, p. 132. *
Shen Jianfei, Chen Yiqiang, Gu Yang. A non-intrusive respiration detection method based on a time-frequency information fusion network. High Technology Letters. 2020, 998-1009. *

Also Published As

Publication number Publication date
CN114098679A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN109875529B (en) Vital sign detection method and system based on ultra-wideband radar
WO2022257187A1 (en) Non-contact fatigue detection method and system
WO2021066918A1 (en) Systems and methods of determining heart-rate and respiratory rate from a radar signal using machine learning methods
CN107580471A (en) Wearable pulse sensor device signal quality estimation
CN101689219A (en) The system and method that is used for monitoring cardiorespiratory parameters
CN104644143A (en) Non-contact life sign monitoring system
Yang et al. Unsupervised detection of apnea using commodity RFID tags with a recurrent variational autoencoder
KR102201371B1 (en) Real-time vital sign detection apparatus based on signal decomposition in noisy environment and method thereof
Khamis et al. Cardiofi: Enabling heart rate monitoring on unmodified COTS WiFi devices
CN115474901A (en) Non-contact living state monitoring method and system based on wireless radio frequency signals
CN110520935A (en) Learn sleep stage from radio signal
CN114098679B (en) Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing
CN114818910B (en) Non-contact blood pressure detection model training method, blood pressure detection method and device
KR20210001575A (en) Real-time cardiac rate detection apparatus in noisy environment and method thereof
CN104783799A (en) Short-distance non-contact type single objective breathing rate and breathing amplitude detection method
KR20210066332A (en) Method and apparatus for determining biometric information of target
CN111685760B (en) Human body respiratory frequency calculation method based on radar measurement
Gao et al. Contactless sensing of physiological signals using wideband RF probes
CN113456061A (en) Sleep posture monitoring method and system based on wireless signals
Giordano et al. Survey, analysis and comparison of radar technologies for embedded vital sign monitoring
CN114642409B (en) Human body pulse wave sensing method, heart rate monitoring method and blood pressure monitoring device
CN114847931A (en) Human motion tracking method, device and computer-readable storage medium
CN114947771A (en) Human body characteristic data acquisition method and device
Zhao et al. T-HSER: Transformer Network Enabling Heart Sound Envelope Signal Reconstruction Based on Low Sampling Rate Millimeter Wave Radar
Mongan et al. Data fusion of single-tag rfid measurements for respiratory rate monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant