CN114098679A - Vital sign monitoring waveform recovery method based on deep learning and radio frequency perception - Google Patents

Vital sign monitoring waveform recovery method based on deep learning and radio frequency perception

Info

Publication number
CN114098679A
Authority
CN
China
Prior art keywords
vital sign
waveform
deep learning
radio frequency
neural network
Prior art date
Legal status
Granted
Application number
CN202111665367.1A
Other languages
Chinese (zh)
Other versions
CN114098679B (en)
Inventor
陈哲
罗骏
Current Assignee
Sino Singapore International Joint Research Institute
Original Assignee
Sino Singapore International Joint Research Institute
Priority date
Filing date
Publication date
Application filed by Sino Singapore International Joint Research Institute
Priority to CN202111665367.1A
Publication of CN114098679A
Application granted
Publication of CN114098679B
Legal status: Active

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024Detecting, measuring or recording pulse rate or heart rate
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816Measuring devices for examining respiratory frequency

Abstract

The invention discloses a method for recovering vital sign monitoring waveforms based on deep learning and radio-frequency sensing. Built on radio-frequency sensing and deep contrastive learning, the method recovers vital sign monitoring waveforms in a non-contact manner and constructs the neural network from an encoder-decoder model. Compared with conventional models, it avoids the loss of waveform detail and can recover fine-grained vital sign waveforms more accurately while the subject is moving, further improving the robustness of vital sign monitoring against motion. In addition, because the proposed waveform recovery method uses a uniform data format, it can be deployed on almost any conventional commercial-grade radar and adapted to different application requirements; the method is therefore independent of the underlying hardware.

Description

Vital sign monitoring waveform recovery method based on deep learning and radio frequency perception
Technical Field
The invention relates to the technical field of artificial intelligence deep learning, in particular to a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception.
Background
Vital signs are among the most important indicators of human health; heartbeat- and respiration-related indices in particular are representative measures of a person's physiological and psychological state. Current vital sign monitoring methods can be divided into contact and non-contact approaches. Contact monitoring relies mainly on contact sensors, including smart wearables and medical sensors. Non-contact monitoring acquires the subject's vital sign information directly, without any device attached to the body, so it offers a better user experience and broader application prospects than contact monitoring. With the development of non-contact sensing technology, non-contact vital sign monitoring is gradually moving toward practical use. However, most non-contact monitoring at the present stage requires the subject to remain relatively still; staying still for a long time is uncomfortable and creates psychological pressure, making continuous monitoring hard to sustain. How to perform accurate vital sign monitoring while the subject is moving, and thereby improve the robustness of non-contact vital sign monitoring against motion, has therefore become a problem to be solved.
In radio-frequency-sensing-based vital sign monitoring, a radio-frequency source transmits a signal toward the subject, and the small body-surface motions caused by the vital signs modulate the amplitude and phase of the reflected signal; in other words, the subject's vital sign information is superimposed on the reflection, so the vital signs can be detected by analyzing and processing the reflected signal. When the subject remains relatively still, these components can be treated as linearly superimposed, and existing linear waveform separation techniques can separate them successfully. However, when the subject moves normally, the weak vital sign signals are severely interfered with, or even buried, by the much larger body motion; the radio-frequency reflections affected jointly by body motion and vital sign activity exhibit complex statistics, and such nonlinear mixtures cannot easily be separated by existing waveform separation techniques, so accurate vital sign parameters cannot be obtained. In its research on vital sign monitoring, the invention found that the difficulty of separating waveforms during motion can be addressed with deep contrastive learning: this self-supervised approach needs no ground truth during training and can distinguish vital signs from body motion using contrastive signal features. That result is set out in the inventors' patent "Vital sign monitoring action removing method based on deep learning and radio frequency perception". The action-removal technique suppresses the influence of body motion to a large extent, but residual noise remains in the vital sign signal, and the signal must be further recovered to obtain an accurate, fine-grained vital sign waveform. Most loss functions used by common neural network models are based on the L1 or L2 norm, which typically causes detail components to be lost when respiration and heartbeat waveforms are recovered: the waveform becomes so smooth that it approaches a sine wave and differs too much from the true respiration waveform. How to lose as little detail as possible during waveform recovery therefore becomes an urgent problem.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art by providing a method for recovering vital sign monitoring waveforms based on deep learning and radio-frequency sensing that can eliminate the influence of motion on the recovery of fine-grained vital sign waveforms. In current waveform recovery techniques, the commonly used neural network loss functions are based on the L1 or L2 norm, which typically causes waveform detail components to be lost when respiration and heartbeat waveforms are recovered: the waveform becomes so smooth that it approaches a sine wave and differs too much from the true respiration waveform. The vital sign monitoring waveform recovery method provided by the invention overcomes these defects, improves the robustness of vital sign monitoring against motion, and has practical value.
The purpose of the invention can be achieved by adopting the following technical scheme:
a method for restoring vital sign monitoring waveforms based on deep learning and radio frequency perception comprises the following steps:
S1, obtaining vital sign data to be processed: the vital sign data to be processed are obtained by processing the radar radio-frequency reflection signal with a waveform separation technique; the specific separation technique is described in the inventors' patent "Vital sign monitoring action removing method based on deep learning and radio frequency perception".
The radio-frequency sensing radar transmits a signal toward the subject, and the small body-surface motions caused by the vital signs modulate the amplitude and phase of the reflected signal; the subject's vital sign information is thus superimposed on the reflection, and the vital signs can be detected by analyzing and processing the reflected signal. For each motion type, the radar echo of each subject is sampled in the slow-time dimension at 512 Hz for a preset duration, and the collected data are processed with the waveform separation technique to obtain the vital sign data, i.e., the vital sign data to be processed.
The radar may be a Novelda IR-UWB radar, an Infineon FMCW radar, or a TI FMCW radar. When a TI FMCW radar is used to acquire data, a data acquisition interface must be added: the data collected by the radar are processed by this interface before being used in subsequent applications. The interface is built around a DCA1000 module and captures real-time data.
Ground truth is obtained by collecting the vital sign waveforms in all scenarios with the wearable device NeuLog, and is used for subsequent training and for discrimination and comparison.
S2, preparing a training sample set and a testing sample set:
Preprocess the data to be processed: apply an FFT (fast Fourier transform) to the vital sign data obtained in step S1, compute the ratio of the spectral peak to the remaining components, and perform a hypothesis test against an empirical threshold to obtain the vital sign waveform data. Then divide the obtained vital sign waveform data in a given proportion to construct training and test samples for the neural network. For example, 30% of the vital sign waveform data are used as training samples for off-line training of the deep learning module, and the remaining 70% are used as test samples for on-line recovery of the vital sign waveform by the trained module.
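The following is a minimal Python/NumPy sketch of this preprocessing and split. The function names, the dominance test on the FFT magnitude and the concrete threshold value are illustrative assumptions (Embodiment 2 suggests an empirical threshold of roughly 2.1-3.0), not a literal reproduction of the patented implementation:

import numpy as np

def select_vital_sign_waveforms(candidates, empirical_threshold=2.5):
    # Keep the separated waveforms whose spectrum is dominated by a single peak.
    # candidates: list of 1-D NumPy arrays from the waveform separation step.
    # empirical_threshold: peak-to-remainder ratio above which the hypothesis
    # "this waveform is a vital sign" is accepted (illustrative value, ~2.1-3.0).
    selected = []
    for x in candidates:
        spectrum = np.abs(np.fft.rfft(x - np.mean(x)))   # FFT magnitude, DC removed
        peak = spectrum.max()
        remainder = spectrum.sum() - peak                 # spectral energy outside the peak bin
        if peak / (remainder + 1e-12) > empirical_threshold:
            selected.append(x)
    return selected

def split_train_test(waveforms, train_fraction=0.3, seed=0):
    # Split the selected waveforms, e.g. 30% for off-line training and 70% for testing.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(waveforms))
    n_train = int(train_fraction * len(waveforms))
    return ([waveforms[i] for i in idx[:n_train]],
            [waveforms[i] for i in idx[n_train:]])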
S3, setting a deep learning neural network:
The deep learning neural network adopts an encoder-decoder model and consists of an encoder, a decoder and a discriminator; the vital sign data to be processed pass through the encoder, the decoder and the discriminator in turn to complete waveform recovery. The encoder and decoder use the same kernel structure: three convolutional neural network kernels in parallel, with kernel sizes of 3 × 3, 7 × 7 and 11 × 11 respectively, stride 1, padding 0 and dilation 1. The outputs of the three kernels of different sizes are fed as input to a max-pooling layer with kernel size 2. The discriminator consists of three convolutional layers whose input size matches the waveform length; discrimination is performed by a Markov discriminator based on a conditional adversarial network. The ground truth acquired by the wearable device NeuLog and the output of the encoder-decoder serve as the two inputs; a sliding-window convolution is performed between the two input waveforms, and the convolution outputs are aggregated to produce the discrimination, completing waveform recovery.
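For concreteness, the following TensorFlow/Keras sketch illustrates this structure. It is an assumption-laden illustration rather than the patented implementation: the convolutions are written in 1-D form (kernel sizes 3, 7, 11) because the inputs are time-series waveforms, padding is set to "same" so the three parallel branches can be concatenated (the text specifies padding 0), the decoder restores the length with upsampling, and the layer widths, depths and names are invented for the example. The discriminator stacks the recovered waveform and the NeuLog ground truth as two channels and applies three convolutional layers in the spirit of a Markov (PatchGAN-style) conditional discriminator. The 10240-sample input length corresponds to the 20-second windows at 512 Hz used in Embodiment 2.

import tensorflow as tf
from tensorflow.keras import layers, Model

def parallel_kernel_block(x, filters=16):
    # Three parallel convolution kernels (patent: 3x3, 7x7, 11x11; here 1-D sizes 3, 7, 11),
    # stride 1, dilation 1, outputs concatenated and max-pooled with kernel size 2.
    branches = [layers.Conv1D(filters, k, strides=1, dilation_rate=1,
                              padding="same", activation="relu")(x)
                for k in (3, 7, 11)]
    return layers.MaxPooling1D(pool_size=2)(layers.Concatenate()(branches))

def decoder_block(x, filters=16):
    # Same parallel-kernel structure as the encoder, with upsampling to restore length.
    branches = [layers.Conv1D(filters, k, strides=1, dilation_rate=1,
                              padding="same", activation="relu")(x)
                for k in (3, 7, 11)]
    return layers.UpSampling1D(2)(layers.Concatenate()(branches))

def build_encoder_decoder(length=10240):       # 20 s at 512 Hz = 10240 samples
    inp = layers.Input(shape=(length, 1))
    x = parallel_kernel_block(inp)             # encoder
    x = parallel_kernel_block(x)
    x = decoder_block(x)                       # decoder
    x = decoder_block(x)
    out = layers.Conv1D(1, 3, padding="same")(x)
    return Model(inp, out, name="encoder_decoder")

def build_discriminator(length=10240):
    # Conditional Markov (PatchGAN-style) discriminator with three convolutional layers:
    # each output element judges a local window; the outputs are aggregated into one decision.
    recovered = layers.Input(shape=(length, 1))
    ground_truth = layers.Input(shape=(length, 1))
    x = layers.Concatenate(axis=-1)([recovered, ground_truth])
    for filters in (16, 32, 1):
        x = layers.Conv1D(filters, 7, strides=2, padding="same")(x)  # sliding-window convolution
    return Model([recovered, ground_truth], x, name="markov_discriminator")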
S4, training and evaluating the deep learning neural network:
Input the training sample set into the constructed neural network model. First, pre-train in an unsupervised manner with a feature-extraction method and initialize the parameters and weights of the neural network; then update the parameters and weights with the Adam adaptive moment estimation algorithm (a first-order gradient method for stochastic objective functions) to minimize the loss function. During training, one batch of training data is fed at a time, and training ends when all sample data have been input, yielding the optimal parameters and weights. The parameters to be set include: the batch size (number of samples selected per training step), learning rate, momentum, decay step and decay factor.
S5, applying the trained encoder-decoder model generated based on deep learning to complete waveform recovery:
The test samples pass through the trained deep-learning encoder-decoder model and the Markov discriminator to complete waveform recovery and obtain fine-grained vital sign signals. The vital sign signals include a respiration signal and/or a heart-rate signal; real-time output can be provided and the respiration and/or heart-rate waveform displayed as required.
Further, in step S3, the deep contrastive learning neural network may also be constructed on the basis of an MLP neural network model.
Further, in step S4, the minimum mean square error (MSE) may be used as the loss function.
Further, in step S4, the Xavier method may also be applied to initialize the neural network parameters and weights.
Further, in step S4, a mini-batch gradient descent method may be applied to minimize the loss function and update the neural network parameters and weights.
Compared with the prior art, the invention has the following advantages and effects:
the invention provides a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception, which extracts the vital sign waveforms through a coder-decoder model, avoids the loss of waveform details compared with a traditional comparison model, and can recover the vital sign waveforms with fine granularity more accurately by using a condition-based antagonistic network Markov discriminator to discriminate, thereby further improving the robustness of the vital sign monitoring on motion. In addition, because the vital sign waveform recovery method provided by the invention adopts a uniform data format, the method can be almost deployed on any type of conventional commercial grade radar and can adapt to different application requirements, so that the waveform recovery method provided by the invention is independent of bottom hardware, and test results on 3 mainstream radar platforms show that: the vital sign waveform recovery method provided by the invention can accurately recover the fine-grained vital sign waveform in a motion state.
Drawings
Fig. 1 is a general flowchart of the vital sign monitoring waveform recovery based on deep learning and radio-frequency sensing disclosed in this embodiment;
FIG. 2 is a comparison of the heartbeat waveform recovered while the subject walks on a treadmill at 1 m/s with the ground truth, as disclosed in this embodiment;
FIG. 3 is a comparison of the heartbeat waveform recovered while the subject stands still with the ground truth, as disclosed in this embodiment;
FIG. 4 is a comparison of the various waveforms while the subject is typing, as disclosed in this embodiment;
FIG. 5 is a comparison of the various waveforms while the subject walks on a treadmill at 1 m/s, as disclosed in this embodiment;
FIG. 6 shows the relative error of the respiration and heartbeat frequencies with respect to the ground truth, as disclosed in this embodiment;
FIG. 7 shows the cosine similarity of the respiration and heartbeat waveforms with respect to the ground truth, as disclosed in this embodiment;
fig. 8 is a flowchart of neural network training disclosed in the present embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
By applying deep contrastive learning to non-contact vital sign monitoring, the invention provides a method for recovering vital sign monitoring waveforms based on deep learning and radio-frequency sensing. To aid understanding, the destructive influence of body motion on extracted vital signs and an evaluation of the effectiveness of the proposed waveform recovery technique are first introduced below. It should be noted that this material is provided only for a better understanding of the invention and does not limit it.
(1) Destructive effects of body motion on extracting vital signs
To better understand the destructive influence of body motion on the extraction of vital signs, the invention studied the recovery of the heartbeat waveform of a subject walking on a treadmill at 1 m/s and of the same subject standing still, using the same radar to acquire the radio-frequency data and the same waveform separation and recovery technique to process the signals. The results are shown in FIG. 2 and FIG. 3: FIG. 2 compares the heartbeat waveform recovered while walking on the treadmill at 1 m/s with the ground truth, and FIG. 3 compares the heartbeat waveform recovered while standing still with the ground truth. As the figures show, the recovered waveform is essentially consistent with the ground truth when the subject is still, but differs markedly from it when the subject walks at 1 m/s; body motion therefore interferes with waveform recovery. In FIG. 2 and FIG. 3, the ground truth is the heartbeat waveform obtained by applying the waveform recovery technique to heartbeat data acquired with the wearable device and is regarded as the true value; the RF waveform is the waveform of the radio-frequency signal acquired by the IR-UWB (impulse-radio ultra-wideband) radar; and the recovered waveform is the heartbeat waveform obtained by applying the waveform recovery technique to the radio-frequency signal acquired by the IR-UWB radar.
(2) Effectiveness evaluation of waveform recovery techniques proposed by the present invention
To evaluate the effectiveness of the proposed waveform recovery technique during motion, the invention takes a subject typing and walking on a treadmill as examples and gives the recovered heartbeat and respiration waveforms. Ground truth is obtained with the wearable device NeuLog by applying photoplethysmography (PPG) at the earlobe or fingertip and is regarded as the true heartbeat and respiration; radio-frequency data are collected with an IR-UWB (X4M05) radar; and as a baseline the waveforms are recovered with MIT's recent RF-SCG (radio-frequency seismocardiography) algorithm. The results are shown in FIG. 4 and FIG. 5, which compare the various waveforms while the subject is typing and while walking on a treadmill at 1 m/s, respectively. Each figure includes a comparison of the recovered respiration waveform with the ground truth, and a comparison of the recovered heartbeat waveform with the ground truth and the baseline, where RF is the radio-frequency waveform, rBreath and gBreath are the recovered respiration waveform and its ground truth, and rHeart, gHeart and bHeart are the recovered heartbeat waveform, its ground truth and the baseline. As FIG. 4 and FIG. 5 show, whether the subject is typing or walking on a treadmill at 1 m/s, the waveform recovered with the proposed technique matches the true value, demonstrating robustness to motion, whereas the baseline recovered with the RF-SCG algorithm essentially fails to recover the correct waveform. Comparing the ground truths of FIG. 4 and FIG. 5 also shows that the heartbeat waveform obtained with the wearable device loses some heartbeat cycles, meaning that heartbeat and respiration monitoring with a wearable device is itself disturbed by motion. The waveforms in FIG. 4 and FIG. 5 thus give an intuitive view of the recovery effectiveness, and the proposed waveform recovery technique performs comparatively well.
The overall performance of the waveform recovery technique can be measured with the relative error and the cosine similarity. The relative error characterizes the accuracy of the recovered respiration and heartbeat frequencies with respect to the ground truth, as shown in FIG. 6. The cosine similarity, i.e., a normalized correlation coefficient, measures the similarity between a recovered waveform and the corresponding ground-truth waveform; the cosine similarity of the respiration and heartbeat waveforms with respect to the ground truth is shown in FIG. 7. As FIG. 6 and FIG. 7 show, the proposed waveform recovery technique has a smaller relative error and a higher similarity between the recovered waveforms and the true values, i.e., better overall performance.
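The two metrics can be written compactly as follows. This is a small NumPy sketch; the assumption that the cosine similarity is computed on mean-removed waveforms is what makes it the normalized correlation coefficient mentioned above, and the function names are illustrative:

import numpy as np

def relative_error(estimated_freq, true_freq):
    # Relative error of an estimated respiration/heartbeat frequency vs. ground truth (FIG. 6).
    return abs(estimated_freq - true_freq) / true_freq

def cosine_similarity(recovered, ground_truth):
    # Normalized correlation coefficient between a recovered waveform and its
    # ground-truth waveform (FIG. 7); both are 1-D arrays of equal length.
    r = recovered - recovered.mean()
    g = ground_truth - ground_truth.mean()
    return float(np.dot(r, g) / (np.linalg.norm(r) * np.linalg.norm(g) + 1e-12))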
The invention provides a method for recovering vital sign monitoring waveforms based on deep learning and radio-frequency sensing that constructs a deep-learning encoder-decoder model to recover the vital sign signal waveform. The encoder and decoder of the model use three parallel convolutional neural network kernels, giving the model a multi-resolution character, and discrimination is performed with a Markov discriminator based on a conditional adversarial network, further improving the robustness of vital sign monitoring against motion. The radar radio-frequency echo containing the vital sign information is passed through the waveform separation technique to obtain the vital sign signal; the vector obtained by preprocessing this signal is the vital sign data to be processed, and a certain amount of such data is selected for processing. The processed samples are divided into training and test samples in a given proportion; the training samples serve as the overall input of the model to update and adjust its weights and parameters, and the test samples are then input into the trained model to complete waveform recovery.
In practicing the invention, the radar is preferably placed in front of the subject, because the blood volume pulses (BVP) associated with the heartbeat are likely produced by the common carotid artery, while the respiration signal depends mainly on the vibration of the chest. It has been shown that aiming the radar at the side of the body largely misses the respiration signal, but not the heartbeat signal.
Example 2
The embodiment discloses a method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception. The following describes the implementation of the present invention in detail by taking IR-UWB radar X4M05 as an example.
To obtain sufficient sample data, a total of 12 subjects in good physical health, 6 men and 6 women, were recruited. The experiments of this embodiment essentially followed the IRB (ethical review board) protocol of our institute. Each subject was asked to maintain a quasi-static sitting position or to perform 7 common actions of daily life: using a mobile phone, typing, swaying the body, shaking the legs, walking on a treadmill, standing/sitting, and turning over (while sleeping). The wearable device NeuLog was used to collect ground truth in all scenarios. The radio-frequency sensing radar was placed 0.5 to 2 m from the subject; the exact range may vary from person to person. Data were collected over different time spans while keeping the total time for each subject roughly the same, including one minute of walking on a treadmill, one hour of typing and one night of sleep monitoring; the 12 subjects yielded 80 hours of RF and ground-truth recordings in total, containing about 330k heartbeat cycles and 68k respiration cycles. Of these data, 30% are used as training samples for off-line training of the deep learning module, and the remaining 70% are used as test samples for on-line recovery of the vital sign waveform with the trained module. To keep the data reasonable, the training samples were collected from 4 subjects, 2 female and 2 male, over 24 hours, with 3 types of body motion: typing, swaying, and standing/sitting. The test samples come from all subjects and all body motions.
Experiments were carried out separately with the training and test sample data for the different body motions, on a PC running Python 3.7 and TensorFlow 2.0, equipped with an i9-10900KF (3.7 GHz) CPU, 16 GB of DDR4 RAM and a GeForce RTX 2070 graphics card. The clocks of the hardware components are synchronized over Ethernet using the Precision Time Protocol. Novelda's IR-UWB radar X4M05 operates at 7.3 or 8.7 GHz with a bandwidth of 1.5 GHz; it has one tx-rx (transmitter-receiver) antenna pair with a field of view (FoV) of 120° in both azimuth and elevation.
The overall implementation flow is shown in fig. 1, and the specific implementation steps are as follows:
S1, obtaining vital sign data to be processed: the vital sign data to be processed are obtained by processing the radar radio-frequency reflection signal with a waveform separation technique.
The radio-frequency sensing radar is placed 0.5 to 2 m from the subject, data of a preset duration are collected for each motion type, and the vital sign data obtained by processing the collected data with the waveform separation technique are the data to be processed.
In this embodiment, the collected data comprise one minute of walking on a treadmill, one hour of typing and one night of sleep monitoring for each of the 12 subjects, for a total duration of 80 hours, containing about 330k heartbeat cycles and 68k respiration cycles; ground truth was collected with the wearable device NeuLog.
S2, preparing a training sample set and a testing sample set:
Process the vital sign data to be processed to generate the training and test sample sets. Because respiration and heartbeat are far more periodic than the other waveforms, each waveform in the data to be processed is transformed with an FFT (fast Fourier transform), the ratio of the spectral peak to the remaining components is computed, and a hypothesis test is performed against an empirical threshold; in this embodiment the empirical threshold is about 2.1-3.0, and the vital sign waveforms are selected accordingly.
S3, setting a deep learning neural network:
An encoder-decoder model based on deep learning is constructed to recover the vital sign signal waveform. The model uses 3 × 3, 7 × 7 and 11 × 11 convolutional neural network kernels in parallel and adds a Markov discriminator, giving it multi-resolution capability and higher robustness. The radar radio-frequency echo containing the vital sign information is passed through the waveform separation technique to obtain the vital sign signal; the vector obtained by preprocessing this signal is the vital sign data to be processed, and a certain amount of such data is selected for processing. The processed samples are divided into training and test samples; the training samples serve as the input of the encoder-decoder model to update and adjust its weights and parameters, the test samples are then input into the trained model, and the model outputs a waveform. A sliding-window convolution is performed between this waveform and the ground-truth waveform collected by NeuLog, all feedback is aggregated to produce the discrimination, and waveform recovery is completed, yielding a fine-grained vital sign signal.
S4, training and evaluating the deep learning neural network:
The training sample set is input into the constructed deep learning network for training; the specific training procedure is shown in FIG. 8. First, pre-training is performed with unsupervised learning using a whole-model feature-extraction method, and the neural network parameters and weights are initialized; then the Adam adaptive moment estimation algorithm is applied to minimize the loss function and update the network parameters and weights until they are optimized. When training the neural network, forward propagation and backward propagation depend on each other: the parameters and weights of each layer are computed and stored in the forward order, from the input layer to the output layer, and the parameter gradients are computed in the backward order, from the output layer to the input layer. One batch of training data is fed at a time, and training ends when all sample data have been input. In this embodiment, the batch size (number of samples per training step) is set to 512, and the learning rate, momentum, decay step and decay factor are set to 0.001, 0.9, 5e5 and 0.999, respectively.
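The following TensorFlow sketch shows one possible reading of these hyperparameters; it is an assumption-based illustration, not the patented training code. In particular, mapping "momentum" to Adam's beta_1 and mapping the decay step and decay factor to an exponential learning-rate decay schedule are assumptions, and a plain MSE loss is used here (the patent lists MSE only as an optional loss; the adversarial term from the Markov discriminator is omitted for brevity):

import tensorflow as tf

BATCH_SIZE = 512                                   # samples per training step
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=int(5e5), decay_rate=0.999)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule, beta_1=0.9)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function
def train_step(model, x_batch, y_batch):
    # One step: forward pass, loss, back-propagation of gradients, parameter update.
    with tf.GradientTape() as tape:
        y_pred = model(x_batch, training=True)
        loss = loss_fn(y_batch, y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss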
S5, applying a deep learning neural network to complete waveform recovery:
The test samples pass through the trained encoder-decoder model, which outputs the final waveform, completing waveform recovery and recovering the fine-grained vital sign signals. The vital sign signals include a respiration signal and/or a heart-rate signal, and real-time output and display of the respiration and/or heart-rate waveform can be provided as required.
Both the heartbeat and the respiration waveform can be recovered separately with this model. Data of a preset duration are cut from the vital sign signal waveform, and two consecutive segments of the same duration are used as the model input each time, with the two segments partially overlapping; for example, the last 25% of the first segment is the first 25% of the second segment. The continuous waveform output by the model is the desired heartbeat or respiration waveform. In this embodiment, the preset duration is 20 seconds.
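A minimal NumPy sketch of this segmentation, assuming 20-second windows at the 512 Hz slow-time sampling rate and a 25% overlap between consecutive windows (function and parameter names are illustrative):

import numpy as np

def sliding_windows(signal, fs=512.0, window_seconds=20.0, overlap_fraction=0.25):
    # Cut a long vital sign signal into consecutive model inputs: each window is
    # window_seconds long, and the last 25% of one window is the first 25% of the next.
    window = int(window_seconds * fs)              # 20 s x 512 Hz = 10240 samples
    hop = int(window * (1.0 - overlap_fraction))   # advance by 75% of the window
    segments = [signal[s:s + window] for s in range(0, len(signal) - window + 1, hop)]
    return np.stack(segments) if segments else np.empty((0, window))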
In particular, the radar in the above embodiment may also be Infineon's FMCW radar Position2Go or TI's FMCW radar IWR1443BOOST. The Position2Go operates at 24 GHz with a 200 MHz bandwidth and has 1 tx antenna and 2 rx antennas, with an antenna azimuth of 76° and an elevation of 19°. The IWR1443BOOST operates at 77 GHz with a maximum bandwidth of 4 GHz and has 3 tx antennas and 4 rx antennas, with an antenna azimuth of 56° and an elevation of 28°. When the IWR1443BOOST is selected, the rate of the radar's USB serial port is too low to support high-speed transmission of the radar data, so a DCA1000 module is added to enable high-speed data acquisition. The module receives data from the IWR1443BOOST over an LVDS high-speed interface and sends them to the PC over a USB port; a driver for the module can be developed on a Raspberry Pi in C/C++.
In particular, in the above embodiment, the deep contrastive learning neural network may also be constructed from an MLP neural network model, the Xavier method may be used to initialize the neural network parameters and weights, and the minimum mean square error (MSE) may be used as the loss function.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (9)

1. A method for restoring vital sign monitoring waveforms based on deep learning and radio frequency perception is characterized by comprising the following steps:
s1, acquiring vital sign data to be processed: the vital sign data to be processed is obtained by processing radar radio frequency reflection signals through a waveform separation technology;
s2, preparing a training sample set and a testing sample set: preprocessing the vital sign data to be processed to generate a training sample set and a testing sample set;
s3, setting a deep learning neural network: the deep learning neural network adopts an encoder-decoder model and consists of an encoder, a decoder and a discriminator, and the vital sign data to be processed sequentially passes through the encoder, the decoder and the discriminator to complete waveform recovery;
s4, training and evaluating the deep learning neural network: inputting the training sample set into a deep learning network, performing unsupervised learning by using a feature extraction method, and initializing the parameters and weights of the neural network; applying the Adam adaptive moment estimation algorithm to minimize a loss function and update the parameters and weights of the neural network, thereby completing the training;
s5, applying a deep learning neural network to complete waveform recovery: the test sample completes waveform recovery through a trained deep learning neural network, and recovers fine-grained vital sign signals, wherein the vital sign signals comprise breathing signals and/or heart rate signals.
2. The method for restoring vital sign monitoring waveforms based on deep learning and radio frequency perception as claimed in claim 1, wherein the preprocessing in step S2 comprises performing an FFT (fast Fourier transform) on the vital sign data to be processed, calculating the ratio of the spectral peak to the remaining components, and performing a hypothesis test using an empirical threshold to obtain the vital sign waveform data.
3. The method for restoring vital sign monitoring waveforms based on deep learning and radio frequency perception according to claim 1, wherein the kernel structure of the encoder in step S3 is as follows: the encoder is composed of three convolutional neural network kernels in parallel, with convolution kernel sizes of 3 × 3, 7 × 7 and 11 × 11 respectively; the outputs of the three convolutional neural network kernels are sent to a max pooling layer with a kernel size of 2.
4. The method for recovering vital signs monitoring waveform based on deep learning and radio frequency perception according to claim 1, wherein the decoder in step S3 uses the same kernel structure as the encoder.
5. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception according to claim 1, wherein the discriminator in step S3 is a Markov discriminator based on a conditional adversarial network and is composed of three convolutional layers.
6. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception according to any one of claims 1 to 5, wherein the clocks of the hardware components are synchronized over Ethernet using the Precision Time Protocol.
7. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception according to any one of claims 1 to 5, wherein the step S5 further comprises: outputting and displaying the respiration waveform and/or the heart-rate waveform in real time.
8. The method for recovering vital sign monitoring waveforms according to any one of claims 1 to 5, wherein the radar in step S2 is Novelda's IR-UWB radar, Infineon's FMCW radar or TI's FMCW radar.
9. The method for recovering vital sign monitoring waveforms based on deep learning and radio frequency perception as claimed in claim 8, wherein, when the radar is TI's FMCW radar, a data acquisition interface is further added, the data acquisition interface being formed by a DCA1000 module and used for capturing real-time data.
CN202111665367.1A 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing Active CN114098679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111665367.1A CN114098679B (en) 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111665367.1A CN114098679B (en) 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Publications (2)

Publication Number Publication Date
CN114098679A (en) 2022-03-01
CN114098679B (en) 2024-03-29

Family

ID=80363648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111665367.1A Active CN114098679B (en) 2021-12-30 2021-12-30 Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing

Country Status (1)

Country Link
CN (1) CN114098679B (en)

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090121450A (en) * 2008-05-22 2009-11-26 (주)유비즈플러스 Bio radar
CN104605831A (en) * 2015-02-03 2015-05-13 南京理工大学 Respiration and heartbeat signal separation algorithm of non-contact vital sign monitoring system
CN105816163A (en) * 2016-05-09 2016-08-03 安徽华米信息科技有限公司 Method, device and wearable equipment for detecting heart rate
US20180144241A1 (en) * 2016-11-22 2018-05-24 Mitsubishi Electric Research Laboratories, Inc. Active Learning Method for Training Artificial Neural Networks
US20190110755A1 (en) * 2017-10-17 2019-04-18 Whoop, Inc. Applied data quality metrics for physiological measurements
US20190159735A1 (en) * 2017-11-28 2019-05-30 Stmicroelectronics S.R.L. Processing of electrophysiological signals
CN108564611A (en) * 2018-03-09 2018-09-21 天津大学 A kind of monocular image depth estimation method generating confrontation network based on condition
US20190282120A1 (en) * 2018-03-14 2019-09-19 Canon Medical Systems Corporation Medical image diagnostic apparatus, medical signal restoration method, and model training method
CN109363652A (en) * 2018-09-29 2019-02-22 天津惊帆科技有限公司 PPG signal reconfiguring method and equipment based on deep learning
US20200397310A1 (en) * 2019-02-28 2020-12-24 Google Llc Smart-Device-Based Radar System Detecting Human Vital Signs in the Presence of Body Motion
CN113439218A (en) * 2019-02-28 2021-09-24 谷歌有限责任公司 Smart device based radar system for detecting human vital signs in the presence of body motion
CN109965858A (en) * 2019-03-28 2019-07-05 北京邮电大学 Based on ULTRA-WIDEBAND RADAR human body vital sign detection method and device
CN109864714A (en) * 2019-04-04 2019-06-11 北京邮电大学 A kind of ECG Signal Analysis method based on deep learning
US20210093203A1 (en) * 2019-09-30 2021-04-01 DawnLight Technologies Systems and methods of determining heart-rate and respiratory rate from a radar signal using machine learning methods
CN111046824A (en) * 2019-12-19 2020-04-21 上海交通大学 Time series signal efficient denoising and high-precision reconstruction modeling method and system
CN110974217A (en) * 2020-01-03 2020-04-10 苏州大学 Dual-stage electrocardiosignal noise reduction method based on convolution self-encoder
EP3885786A1 (en) * 2020-03-27 2021-09-29 Origin Wireless, Inc. Method, apparatus, and system for wireless vital monitoring using high frequency signals
CN111568396A (en) * 2020-04-13 2020-08-25 广西万云科技有限公司 V2iFi is based on vital sign monitoring technology in compact radio frequency induction's car
US20210378597A1 (en) * 2020-06-04 2021-12-09 Biosense Webster (Israel) Ltd. Reducing noise of intracardiac electrocardiograms using an autoencoder and utilizing and refining intracardiac and body surface electrocardiograms using deep learning training loss functions
CN112508110A (en) * 2020-12-11 2021-03-16 哈尔滨理工大学 Deep learning-based electrocardiosignal graph classification method
CN112656395A (en) * 2020-12-16 2021-04-16 问境科技(上海)有限公司 Method and system for detecting change trend of vital signs of patient based on microwave radar
CN112754431A (en) * 2020-12-31 2021-05-07 杭州电子科技大学 Respiration and heartbeat monitoring system based on millimeter wave radar and lightweight neural network
CN112754441A (en) * 2021-01-08 2021-05-07 杭州环木信息科技有限责任公司 Millimeter wave-based non-contact heartbeat detection method
CN113126050A (en) * 2021-03-05 2021-07-16 沃尔夫曼消防装备有限公司 Life detection method based on neural network
CN112998701A (en) * 2021-03-27 2021-06-22 复旦大学 Vital sign detection and identity recognition system and method based on millimeter wave radar
CN113128772A (en) * 2021-04-24 2021-07-16 中新国际联合研究院 Crowd quantity prediction method and device based on sequence-to-sequence model
CN113317798A (en) * 2021-05-20 2021-08-31 郑州大学 Electrocardiogram compressed sensing reconstruction system based on deep learning
CN113729641A (en) * 2021-10-12 2021-12-03 南京润楠医疗电子研究院有限公司 Non-contact sleep staging system based on conditional countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李剑菡 (Li Jianhan): "Research on human vital sign and multi-target detection algorithms based on convolutional neural networks", China Master's Theses Full-text Database, pages 136-633 *
沈建飞 (Shen Jianfei), 陈益强 (Chen Yiqiang), 谷洋 (Gu Yang): "Non-intrusive respiration detection method based on a time-frequency information fusion network", High Technology Letters, pages 998-1009 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116098602A (en) * 2023-01-16 2023-05-12 中国科学院软件研究所 Non-contact sleep respiration monitoring method and device based on IR-UWB radar
CN116098602B (en) * 2023-01-16 2024-03-12 中国科学院软件研究所 Non-contact sleep respiration monitoring method and device based on IR-UWB radar

Also Published As

Publication number Publication date
CN114098679B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
Schires et al. Vital sign monitoring through the back using an UWB impulse radar with body coupled antennas
WO2022257187A1 (en) Non-contact fatigue detection method and system
EP3277162B1 (en) Wearable pulse sensing device signal quality estimation
WO2018013192A2 (en) Extraction of features from physiological signals
CN110353649B (en) Heart rate detection method
CN113261932B (en) Heart rate measurement method and device based on PPG signal and one-dimensional convolutional neural network
CN112686094B (en) Non-contact identity recognition method and system based on millimeter wave radar
CN113435283B (en) Ultra-wideband radar identity recognition method based on breath sample space
CN110520935A (en) Learn sleep stage from radio signal
CN114818910B (en) Non-contact blood pressure detection model training method, blood pressure detection method and device
WO2023093770A1 (en) Millimeter-wave radar-based noncontact electrocardiogram monitoring method
CN114098679B (en) Vital sign monitoring waveform recovery method based on deep learning and radio frequency sensing
CN104783799B (en) A kind of contactless single goal respiratory rate of short distance and amplitude of respiration detection method
Gao et al. Contactless sensing of physiological signals using wideband RF probes
Xie et al. Signal quality detection towards practical non-touch vital sign monitoring
CN113907727B (en) Beat-by-beat blood pressure measurement system and method based on photoplethysmography
Phinyomark et al. Applications of variance fractal dimension: A survey
Wan et al. Combining parallel adaptive filtering and wavelet threshold denoising for photoplethysmography-based pulse rate monitoring during intensive physical exercise
Wang et al. Ppg signal reconstruction using deep convolutional generative adversarial network
JP7438617B2 (en) Signal restoration system, signal restoration method, and program for causing a computer to execute the signal restoration method
CN115474901A (en) Non-contact living state monitoring method and system based on wireless radio frequency signals
CN111685760A (en) Human body respiratory frequency calculation method based on radar measurement
CN116269413A (en) Continuous electrocardiographic waveform reconstruction system and method using smart wristband motion sensor
Čuljak et al. A data-fusion algorithm for respiration rate extraction based on UWB transversal propagation method
CN114847931A (en) Human motion tracking method, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant