CN114767074B - Vital sign measuring method, equipment and storage medium - Google Patents


Info

Publication number
CN114767074B
Authority
CN
China
Prior art keywords
target
signal
vibration
vibrating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210282020.7A
Other languages
Chinese (zh)
Other versions
CN114767074A
Inventor
韩晶
蔡珍妮
童志军
丁小羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yuemian Technology Co ltd
Shenzhen Yixin Vision Technology Co ltd
Original Assignee
Nanjing Yuemian Technology Co ltd
Shenzhen Yixin Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yuemian Technology Co ltd and Shenzhen Yixin Vision Technology Co ltd
Priority to CN202210282020.7A
Publication of CN114767074A
Application granted
Publication of CN114767074B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205: Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Pulmonology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application provides a vital sign measurement method, device and storage medium, and relates to the field of medical technology. The method comprises the following steps: determining the position coordinates of M vibrating targets in a target area in a three-dimensional radar coordinate system through an FMCW radar, where M ≥ 1, and acquiring an image of the target area through a camera; for each vibrating target, mapping its position coordinates with a trained MLP model to obtain the two-dimensional coordinates of the vibrating target in the image; screening out a target to be measured from the M vibrating targets according to a first motion value of each vibrating target detected by the FMCW radar and a second motion value of each vibrating target in the image; and obtaining an echo signal of the target to be measured through the FMCW radar, separating a respiration signal and a heartbeat signal from the echo signal, and determining the respiration frequency and heartbeat frequency of the target to be measured from the respiration signal and the heartbeat signal respectively.

Description

Vital sign measuring method, equipment and storage medium
Technical Field
The present application relates to the field of medical technology, and in particular, to a vital sign measurement method, device, and storage medium.
Background
With the progress and development of society, people pay more and more attention to health management. Monitoring vital signs such as cardiopulmonary function and respiratory function in real time makes it possible to find abnormalities in time and thus give effective early warning of disease. In the prior art, a single sensor is mainly used to collect vital sign signals of the human body, and vital sign parameters are determined from those signals to monitor the human body's health state. However, signals such as respiration and heartbeat are weak, and vital sign signals acquired by a single sensor are easily interfered with by other vibration signals in the environment, so the measurement accuracy is low.
Disclosure of Invention
The embodiment of the application provides a vital sign measuring method, equipment and a storage medium, which can solve the technical problem that the accuracy of the existing vital sign measuring method is low.
In a first aspect, an embodiment of the present application provides a vital sign measurement method, which is applied to a vital sign measurement device, where the device includes a camera and an FMCW radar, and the method includes:
acquiring position coordinates of M vibrating targets in a target area in a three-dimensional radar coordinate system through an FMCW radar, where M ≥ 1, and acquiring an image of the target area through a camera; for each of the M vibrating targets, processing the position coordinates of the vibrating target with a trained multilayer perceptron network model to obtain the two-dimensional coordinates of the vibrating target in the image, where the multilayer perceptron network model indicates the mapping relationship between the three-dimensional radar coordinate system and the two-dimensional image coordinate system corresponding to the image; screening out a target to be measured from the M vibrating targets according to a first motion value of each of the M vibrating targets detected by the FMCW radar and a second motion value of each vibrating target in the image; and obtaining an echo signal of the target to be measured through the FMCW radar, separating a respiration signal and a heartbeat signal from the echo signal, determining the respiration frequency of the target to be measured from the respiration signal, and determining the heartbeat frequency of the target to be measured from the heartbeat signal.
Based on the vital sign measurement method provided by the application, after the position coordinates of the M vibrating targets in the target area in the three-dimensional radar coordinate system are determined through the FMCW radar, the position coordinates of each vibrating target are processed through the trained multilayer perceptron network model, so that the two-dimensional coordinates of each vibrating target in the image acquired by the camera can be determined. For the same vibrating target, the target to be measured can be screened out from the M vibrating targets according to the first motion value detected by the FMCW radar and the second motion value of the vibrating target in the image, and the echo signal of the target to be measured is then obtained from the FMCW radar. A respiration signal and a heartbeat signal of the target to be measured can be separated from the echo signal, and the respiration frequency and heartbeat frequency of the target to be measured are determined from the respiration signal and the heartbeat signal respectively. Because the echo signal of the target to be measured is determined based on multiple sensors, interference from other vibrating targets on that echo signal is eliminated, and the accuracy of vital sign measurement is improved.
Optionally, acquiring position coordinates of M vibrating targets in a target area in a three-dimensional radar coordinate system by an FMCW radar, including:
receiving an echo reflected by a target area through an FMCW radar to obtain an initial signal;
removing a background signal in the initial signal to obtain a vibration signal;
determining the position coordinates of M vibrating targets in a target area in a three-dimensional radar coordinate system according to the vibration signals, wherein the position coordinates of each vibrating target in the three-dimensional radar coordinate system comprise the horizontal distance, the horizontal azimuth angle and the pitch angle of the vibrating target relative to the FMCW radar;
obtaining an echo signal of a target to be measured through an FMCW radar, comprising:
and carrying out signal analysis on the vibration signal according to the position coordinate of the target to be detected to obtain an echo signal of the target to be detected.
Optionally, processing the position coordinates of the vibrating target with the trained multilayer perceptron network model to obtain the two-dimensional coordinates of the vibrating target in the image includes:
inputting the horizontal distance, horizontal azimuth angle and pitch angle of the vibrating target relative to the FMCW radar into an input layer; respectively extracting features of the horizontal distance, the horizontal azimuth angle and the pitch angle through the first hidden layer and the second hidden layer to obtain a first feature corresponding to the horizontal distance, a second feature corresponding to the horizontal azimuth angle and a third feature corresponding to the pitch angle; performing fusion processing on the first feature, the second feature and the third feature through a third hidden layer to obtain a fusion feature; and processing the fusion characteristics through an output layer to obtain a two-dimensional coordinate of the vibration target in the image.
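The layer-by-layer mapping described above can be sketched as a plain feed-forward pass. The following minimal numpy version uses hidden-layer sizes and random weights that are illustrative assumptions only; a real model would be trained on the calibration samples described later:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Layer sizes and weights are illustrative assumptions, not taken from the patent.
H = 16
W1 = rng.normal(size=(3, H)); b1 = np.zeros(H)   # first hidden layer
W2 = rng.normal(size=(H, H)); b2 = np.zeros(H)   # second hidden layer
W3 = rng.normal(size=(H, H)); b3 = np.zeros(H)   # third (fusion) hidden layer
W4 = rng.normal(size=(H, 2)); b4 = np.zeros(2)   # output layer -> (x, y)

def radar_to_image(d, a, theta):
    """Map a radar coordinate (distance, azimuth, pitch) to 2-D image coordinates."""
    z = np.array([d, a, theta], dtype=float)
    h1 = relu(z @ W1 + b1)        # feature extraction from the three inputs
    h2 = relu(h1 @ W2 + b2)
    h3 = relu(h2 @ W3 + b3)       # feature fusion
    return h3 @ W4 + b4           # (x, y) pixel coordinates

xy = radar_to_image(2.5, 0.3, -0.1)
```

With trained weights, `radar_to_image` would realise the radar-to-image mapping relationship; here it only demonstrates the data flow through the four layers.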
Optionally, screening the target to be measured from the M vibrating targets according to the first motion value of each vibrating target in the M vibrating targets detected by the FMCW radar and the second motion value of each vibrating target in the image, includes:
for each of the M vibrating targets, processing the image through the trained first neural network model to determine a second motion value of the vibrating target in the image, and determining a first motion value of the vibrating target from the echo signal of the vibrating target detected by the FMCW radar; if the first motion value is smaller than a preset first motion threshold and the second motion value is smaller than a preset second motion threshold, the vibrating target is determined to be the target to be measured.
Optionally, processing the image through the trained first neural network model to determine a second motion value of the vibration target, including:
determining the area of the vibration target in the image by using the trained detection model; and inputting the area into the trained optical flow network model for processing, and determining a second motion value of the vibration target.
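The screening rule above reduces to a pair of threshold tests per target. A sketch, in which the motion values and thresholds are made-up numbers and the dictionaries stand in for detected targets:

```python
def screen_targets(targets, t1, t2):
    """Keep only targets whose radar motion value and image motion value are
    both below their thresholds, i.e. targets at rest whose only movement
    should be respiration and heartbeat."""
    return [t for t in targets
            if t["radar_motion"] < t1 and t["image_motion"] < t2]

# Hypothetical detections: target 0 is resting, target 1 is moving its limbs.
candidates = [
    {"id": 0, "radar_motion": 0.02, "image_motion": 0.10},
    {"id": 1, "radar_motion": 0.90, "image_motion": 2.40},
]
kept = screen_targets(candidates, t1=0.1, t2=0.5)
```

Requiring both sensors to agree is what removes strongly moving targets that either sensor alone might misjudge.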
Optionally, separating the respiration signal and the heartbeat signal in the echo signal comprises: filtering the echo signal of the target to be detected through a band-pass filter to obtain a respiration initial signal; determining an in-phase component and a quadrature component of a respiration initiation signal; inputting the respiration initial signal, the in-phase component and the quadrature component into a trained second neural network model for processing to obtain a respiration signal; the difference signal between the echo signal and the respiration signal is a heartbeat signal.
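A minimal sketch of this separation on simulated data. The 20 Hz sampling rate and the 0.1 to 0.5 Hz respiration band are assumed values not fixed by the patent; the in-phase/quadrature pair is derived here from the analytic signal, and the trained second neural network model is omitted (the band-passed signal is used directly as the respiration signal):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 20.0                      # assumed sampling rate of the radar phase signal, Hz
t = np.arange(0, 60, 1 / fs)
# Simulated echo phase: respiration (~0.25 Hz) plus a weaker heartbeat (~1.2 Hz)
echo = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)

# Band-pass filter in an assumed human respiration band (0.1-0.5 Hz)
b, a = butter(4, [0.1, 0.5], btype="bandpass", fs=fs)
resp_init = filtfilt(b, a, echo)          # respiration initial signal

# In-phase and quadrature components via the analytic signal
analytic = hilbert(resp_init)
i_comp, q_comp = analytic.real, analytic.imag

# Heartbeat signal as the difference between the echo and the respiration signal
heartbeat = echo - resp_init
```

The spectrum of `resp_init` peaks at the respiration rate and the spectrum of `heartbeat` at the heart rate, which is the property the later frequency-estimation steps rely on.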
Optionally, the second neural network model comprises: a gating cycle module and a fusion module; inputting the respiration initial signal, the in-phase component and the quadrature component into a trained second neural network model for processing to obtain a respiration signal, wherein the method comprises the following steps:
inputting the frequency corresponding to time i-2, the frequency corresponding to time i-1 and the frequency corresponding to time i in the respiration initial signal into the gating cycle module for processing, to obtain a first parameter and a second parameter corresponding to time i, where the respiration initial signal comprises n times, 1 ≤ i ≤ n, and n is a positive integer greater than 2;
and performing fusion processing on the first parameters at the n moments, the second parameters at the n moments, the in-phase component and the orthogonal component through a fusion module to obtain the respiratory signal.
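As a sketch of the gating cycle (GRU) module, the following runs a generic, randomly initialised GRU cell over a respiration initial signal and reads out two parameters per moment through a linear head. The hidden size, weights and read-out head are assumptions for illustration, not the patent's trained model; the recurrent state is what carries information from moments i-2 and i-1 into moment i:

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8  # assumed hidden size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gate weights of a single GRU cell (randomly initialised for this sketch)
Wz, Uz = rng.normal(size=(1, H)), rng.normal(size=(H, H))
Wr, Ur = rng.normal(size=(1, H)), rng.normal(size=(H, H))
Wh, Uh = rng.normal(size=(1, H)), rng.normal(size=(H, H))
Wo = rng.normal(size=(H, 2))   # head emitting the two parameters per moment

def gru_params(signal):
    """Run a GRU over the respiration initial signal and emit, for each
    moment, a first and second parameter derived from the hidden state."""
    h = np.zeros(H)
    out = []
    for x in signal:
        xv = np.array([x])
        z = sigmoid(xv @ Wz + h @ Uz)          # update gate
        r = sigmoid(xv @ Wr + h @ Ur)          # reset gate
        h_tilde = np.tanh(xv @ Wh + (r * h) @ Uh)
        h = (1 - z) * h + z * h_tilde          # new hidden state
        out.append(h @ Wo)
    return np.array(out)                        # shape (n, 2)

params = gru_params(np.sin(np.linspace(0, 6, 50)))
```

In the patent these per-moment parameter pairs are then fused with the in-phase and quadrature components to produce the respiration signal.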
Optionally, determining the respiratory frequency of the target to be measured according to the respiratory signal includes: segmenting the respiratory signal based on a preset time length to obtain a plurality of subsequence signals; performing fast Fourier transform on the plurality of subsequence signals to obtain a two-dimensional spectrogram; and determining a main frequency value of the two-dimensional frequency spectrogram by using the trained third neural network model, wherein the main frequency value is the respiratory frequency of the target to be detected.
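These three steps can be sketched on a simulated respiration signal. The 20-second segment length is an assumed value, and the trained third neural network model is replaced here by a simple peak pick on the segment-averaged spectrum:

```python
import numpy as np

fs = 20.0                                # assumed sampling rate, Hz
t = np.arange(0, 120, 1 / fs)
resp = np.sin(2 * np.pi * 0.3 * t)       # simulated respiration at 0.3 Hz (18 breaths/min)

# Segment the respiration signal into equal-length subsequences
seg_len = int(20 * fs)                   # assumed 20-second segments
segments = [resp[i:i + seg_len] for i in range(0, len(resp) - seg_len + 1, seg_len)]

# Two-dimensional spectrogram: one FFT magnitude row per segment
spec = np.abs(np.fft.rfft(segments, axis=1))
freqs = np.fft.rfftfreq(seg_len, 1 / fs)

# Main frequency value, here taken as the peak of the segment-averaged spectrum
# (the patent determines it with a trained third neural network model instead)
main_freq = freqs[np.argmax(spec.mean(axis=0))]
```

`main_freq` recovers the simulated 0.3 Hz respiration frequency; the network-based peak selection in the patent serves the same purpose but is more robust to noisy, multi-peaked spectra.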
In a second aspect, embodiments of the present application provide a vital sign measurement device, which includes a camera, an FMCW radar, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the camera and the FMCW radar are respectively connected to the processor, and the processor, when executing the computer program, implements the method according to any one of the first aspect.
In a third aspect, an embodiment of the present application provides a vital sign measurement apparatus, including:
the acquisition unit is used for acquiring position coordinates of M vibration targets in a target area in a three-dimensional radar coordinate system through an FMCW radar, wherein M is more than or equal to 1, and acquiring an image of the target area through a camera;
the system comprises a mapping unit, a multi-layer sensing network model and a control unit, wherein the mapping unit is used for processing the position coordinates of the vibrating targets by using a trained multi-layer sensing network model aiming at each vibrating target in M vibrating targets to obtain two-dimensional coordinates of the vibrating targets in an image, and the multi-layer sensing network model is used for indicating the mapping relation between a three-dimensional radar coordinate system and a two-dimensional image coordinate system corresponding to the image;
the screening unit is used for screening the target to be tested from the M vibration targets according to the first motion value of each vibration target in the M vibration targets detected by the FMCW radar and the second motion value of each vibration target in the image;
the measuring unit is used for acquiring an echo signal of the target to be measured through the FMCW radar, separating a respiratory signal and a heartbeat signal in the echo signal, determining the respiratory frequency of the target to be measured according to the respiratory signal, and determining the heartbeat frequency of the target to be measured according to the heartbeat signal.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when executed by a processor, the computer program implements the method according to any one of the above first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the method described in any one of the above first aspects.
It is to be understood that, for the beneficial effects of the second aspect to the fifth aspect, reference may be made to the relevant description in the first aspect, and details are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic structural diagram of a vital sign measuring device according to an embodiment of the present application;
fig. 2 is a flowchart of a vital sign measurement method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a first neural network model according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a mapping relationship between a three-dimensional radar coordinate system and a two-dimensional image coordinate system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an MLP network model according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a second neural network model provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a two-dimensional spectrogram according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a vital sign measurement apparatus according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
With the progress and development of society, people pay more and more attention to health management. Monitoring vital signs such as cardiopulmonary function and respiratory function in real time makes it possible to find abnormalities in time and thus give effective early warning of disease. In the prior art, a single sensor is mainly used to collect vital sign signals of the human body, and vital sign parameters are determined from those signals to monitor the human body's health state. However, signals such as respiration and heartbeat are weak, and vital sign signals acquired by a single sensor are easily interfered with by other vibration signals in the environment, so the measurement accuracy is low.
In order to solve the above technical problem, embodiments of the present application provide a vital sign measurement method, device and storage medium. An image containing at least one vibrating target is acquired through a camera; according to the degree of motion of each vibrating target detected by the radar and the motion intensity of each vibrating target in the image, targets with large-scale motion are removed from the at least one vibrating target; the echo signal of the target to be measured is then obtained through the FMCW radar, so that interference from strongly moving targets on the echo signal of the target to be measured is avoided. The respiration signal can be separated from the echo signal of the target to be measured, and the respiration frequency of the target to be measured is determined based on the respiration signal. Determining the vital sign parameters of the human body through multiple sensors can improve the accuracy of vital sign detection.
The technical scheme of the application is described in detail in the following with reference to the accompanying drawings. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In one possible implementation manner, the embodiment of the application provides a vital sign measurement device. As shown in fig. 1, the vital sign measurement device 1 comprises a camera 11, an FMCW radar 12, a memory 13 and a processor 14. The memory 13 stores a computer program 15 that is executable on the processor 14. The camera 11 and the FMCW (Frequency Modulated Continuous Wave) radar 12 are respectively connected to the processor 14, and the processor 14 executes the computer program 15 to implement the vital sign measurement method provided in the embodiments of the present application.
It will be understood by the person skilled in the art that fig. 1 is only an example of a vital signs measurement device 1 and does not constitute a limitation of the vital signs measurement device 1, and that the vital signs measurement device 1 may comprise more or less components than shown, or combine certain components, or different components. The Processor 14 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 13 may be an internal storage unit of the vital signs measurement device 1, such as a hard disk or a memory of the vital signs measurement device 1. The memory 13 may also be an external storage device of the vital sign measurement device 1, such as a plug-in hard disk provided on the vital sign measurement device 1, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like.
The vital sign measurement method provided by the present application is exemplarily described below with reference to a flowchart shown in fig. 2. In a possible implementation manner, the vital sign measurement method provided by the embodiment of the present application includes the following steps:
s201, acquiring position coordinates of M vibration targets in a target area in a three-dimensional radar coordinate system through an FMCW radar, wherein M is larger than or equal to 1, and acquiring an image of the target area through a camera.
It should be noted that the camera provided in the embodiments of the present application has an infrared imaging function and a visible light imaging function. The camera can automatically switch the infrared imaging function and the visible light imaging function according to the brightness degree of the target area, so that the image containing the vibration target in the target area can be clearly acquired through the camera.
Illustratively, when the target area is bright, the camera may acquire an image of the target area through a visible light imaging function, where the acquired image is a visible light image; when the target area is dark, the camera can acquire an image of the target area through an infrared imaging function, and the acquired image is an infrared image. Therefore, the vital sign measurement method and device provided by the application can measure the vital signs of the target in a resting state (such as sleeping) in the day and can also measure the vital signs of the target in a resting state at night.
In addition, a human body at rest generally does not move vigorously; only respiration and heartbeat cause fine vibration of the thoracic cavity. At the same time, there may be targets with large motion in the target area, i.e., targets whose limbs produce large movements. Therefore, the vibrating targets described in the embodiments of the present application include both targets in a resting state and targets with large motion.
In one possible implementation, the method for acquiring the position coordinates of M vibrating targets in a three-dimensional radar coordinate system in a target area through an FMCW radar comprises: and receiving the echo reflected by the target area through an FMCW radar to obtain an initial signal. And removing the background signal in the initial signal to obtain a vibration signal. And determining the position coordinates of each of the M vibrating targets in the target area in the three-dimensional radar coordinate system according to the vibration signals, wherein the position coordinates of each vibrating target in the three-dimensional radar coordinate system comprise the horizontal distance, the horizontal azimuth angle and the pitch angle of the vibrating target relative to the FMCW radar.
Based on the FMCW radar in the vital sign measurement device provided by the application, the horizontal distance, horizontal azimuth angle and pitch angle of all targets in the target area relative to the FMCW radar can be determined. The FMCW radar includes at least one transmitting antenna and at least two receiving antennas, and the horizontal distance between any two adjacent receiving antennas is the same. The transmitting antenna is used to transmit frequency-modulated continuous-wave pulse signals to the target area, the receiving antennas receive the echoes reflected by targets, and an initial signal is determined based on the echoes received by each receiving antenna, where the initial signal is a set of initial signals detected by the FMCW radar along a plurality of azimuth angles in the target area.
In one embodiment, the initial signal received by the FMCW radar includes a background signal generated by reflection of a stationary background object in the target area, such as a wall, a wardrobe, or the like. The background signal interferes with the echo signal generated by the vibrating target, and therefore, the background signal in the initial signal corresponding to each azimuth angle needs to be removed.
Illustratively, the vibration signal can be obtained by removing the background signal from the initial signal by a differential method. Specifically, let the initial signal be rp and its value at the i-th time be rp(i). A moving average of the initial signal over the times up to i-1 is calculated, for example

avg(i-1) = (rp(1) + rp(2) + … + rp(i-1)) / (i-1),

and the value at the i-th time in the vibration signal can then be represented as

v(i) = rp(i) - avg(i-1).
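A minimal numpy sketch of this differential background removal, under the assumption that the moving average is a running mean of all past samples:

```python
import numpy as np

def remove_background(rp):
    """Subtract a running mean of the past samples from each sample, so that
    static background reflections (walls, furniture) cancel out and only the
    vibration component remains."""
    rp = np.asarray(rp, dtype=float)
    v = np.zeros_like(rp)
    mean = 0.0
    for i in range(1, len(rp)):
        mean += (rp[i - 1] - mean) / i     # incremental mean of rp[0..i-1]
        v[i] = rp[i] - mean                # differential: current minus average
    return v

t = np.arange(0, 30, 0.05)
# Static background reflection (constant 5.0) plus a small chest vibration
signal = 5.0 + 0.2 * np.sin(2 * np.pi * 0.3 * t)
vib = remove_background(signal)
```

After the transient settles, `vib` retains the 0.2-amplitude vibration while the constant background level is removed.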
In another embodiment, after removing the background signal in the initial signal based on the method provided in the foregoing embodiment, the obtained vibration signal is the set of vibration signals corresponding to each azimuth angle and pitch angle in the target area acquired by the multiple receiving antennas. The vibration signals can be processed by a signal analysis algorithm for the radar's multiple-transmit, multiple-receive mode to determine the position coordinates of each of the M vibrating targets in the three-dimensional radar coordinate system, and the echo signal of each vibrating target is then obtained.
In one embodiment, the position coordinates of each vibrating target in the three-dimensional radar coordinate system include the horizontal distance, horizontal azimuth angle and pitch angle of the vibrating target relative to the FMCW radar. The position coordinates of each of the M vibrating targets in the target area are determined from the vibration signals as follows. The FMCW radar can detect the horizontal distance of each vibrating target relative to the FMCW radar. For the horizontal azimuth angles of the vibrating targets in the target area relative to the FMCW radar, a beam-forming algorithm can be adopted to analyze the vibration signals. The horizontal azimuth angle of each target relative to the FMCW radar may also be determined based on the horizontal distance between two adjacent receiving antennas and the response values obtained by multiple (i.e., two or more) receiving antennas. For example, assume that the FMCW radar includes multiple transmitting antennas and multiple receiving antennas, each transmitting antenna transmits a chirp signal, and the range-bin signals received by the receiving antennas are arranged in columns to form a matrix sequence; that is, each column of the matrix sequence represents an initial signal reflected by a target and received by one receiving antenna for one chirp signal. The phase signals of the range bin corresponding to the target's distance are formed into a vector, a fast Fourier transform is performed on the vector to obtain a main frequency value f, and the azimuth angle of the target relative to the FMCW radar is obtained according to the formula θ = sin⁻¹(λω / (2πD)), where λ is the wavelength, D is the horizontal distance between two adjacent receiving antennas, and ω = 2πf.
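The azimuth formula can be checked numerically: simulate the per-antenna phase vector for a known angle, take its FFT to get the main frequency f, and invert θ = sin⁻¹(λω/(2πD)). The carrier frequency, antenna count and half-wavelength spacing below are assumptions for the sketch:

```python
import numpy as np

wavelength = 3e8 / 60e9            # assumed 60 GHz FMCW carrier, metres
D = wavelength / 2                 # assumed spacing between adjacent receiving antennas
N = 128                            # assumed number of receiving channels
theta_true = np.deg2rad(20.0)      # true azimuth of the simulated target

# Phase across the receiving antennas: each antenna adds a phase step of
# 2*pi*D*sin(theta)/wavelength relative to its neighbour
n = np.arange(N)
phase_vec = np.exp(1j * 2 * np.pi * (D / wavelength) * np.sin(theta_true) * n)

# FFT over the antenna dimension gives the main frequency f (cycles per antenna)
spectrum = np.abs(np.fft.fft(phase_vec))
k = int(np.argmax(spectrum[: N // 2]))
f = k / N
omega = 2 * np.pi * f

# theta = arcsin(lambda * omega / (2 * pi * D))
theta_est = np.arcsin(wavelength * omega / (2 * np.pi * D))
```

With half-wavelength spacing the formula reduces to sin θ = 2f, and `theta_est` recovers the simulated 20° azimuth to within the FFT bin resolution.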
Based on the above principle, the horizontal distance and horizontal azimuth angle of each vibrating target relative to the FMCW radar can be determined by an FMCW radar provided with multiple receiving antennas and multiple transmitting antennas, thereby constructing a corresponding distance-azimuth distribution heat map. Further, a constant false alarm rate detection algorithm may be used to process the acquired distance-azimuth distribution heat map to determine the cluster-center coordinates (d_i, a_i) of each vibrating target in the target region, where d_i represents the distance between the i-th vibrating target in the target area and the FMCW radar, and a_i represents the horizontal azimuth angle of the i-th vibrating target in the target area relative to the FMCW radar.
Further, with reference to the above principle, the vibration signals may be processed by a fast Fourier transform method or a beam-forming algorithm, utilizing the equidistant arrangement of the multiple receiving antennas in the vertical direction, so as to determine the pitch angle θ_i of each vibrating target in the target area relative to the FMCW radar; the coordinate position (d_i, a_i, θ_i) of each vibrating target in the three-dimensional coordinate system (i.e., distance-azimuth angle-pitch angle) can then be determined.
S202, for each of the M vibrating targets, processing the position coordinates of the vibrating target with a trained multilayer perceptron network model to obtain the two-dimensional coordinates of the vibrating target in the image, where the multilayer perceptron network model indicates the mapping relationship between the three-dimensional radar coordinate system and the two-dimensional image coordinate system corresponding to the image.
In the embodiment of the application, a corresponding mapping relationship exists between the three-dimensional radar coordinate system of the FMCW radar in the vital sign measuring device and the two-dimensional image coordinate system of the image of the target area acquired by the camera. In order to construct the mapping relationship between the three-dimensional radar coordinate system and the two-dimensional image coordinate system, initial calibration work needs to be carried out by means of a preset vibration source.
Exemplarily, referring to (a) in fig. 4, the image plane is divided into the n × n regions shown in (a) in fig. 4, where exemplarily n = 7. A vibration source with a fixed vibration frequency f is placed at a certain distance in front of the FMCW radar, an image containing the vibration source is acquired by the camera, and the two-dimensional coordinates (x, y) of the center of the vibration source in the image are recorded. Meanwhile, an echo signal is acquired by the FMCW radar, and the horizontal distance, horizontal azimuth angle and pitch angle of the center of the vibration source relative to the FMCW radar, i.e. the coordinate position (d_i, a_i, θ_i) of the center of the vibration source in the three-dimensional coordinate system of the FMCW radar, are determined by the method described in the above embodiment, so that a group of samples is constructed from the two-dimensional coordinates (x, y) and (d_i, a_i, θ_i). By changing the horizontal distance, horizontal azimuth angle and/or pitch angle of the vibration source relative to the FMCW radar, the two-dimensional coordinates of the vibration source in the image and the corresponding coordinate positions in the three-dimensional coordinate system are collected under a plurality of configurations, forming a plurality of groups of samples, so that the various radar and image imaging conditions encountered in practical applications can be covered.
In one embodiment, after acquiring a plurality of sets of samples, the neural network initial model may be trained based on a Stochastic Gradient Descent (SGD) method using the plurality of sets of samples to obtain a trained neural network model, so as to determine a mapping relationship between a coordinate position of each vibration target in the target region in the three-dimensional radar coordinate system and a two-dimensional coordinate in the two-dimensional image coordinate system using the trained neural network model.
Illustratively, as shown in fig. 5, the trained neural network model may be an MLP (Multi-Layer Perception) network model. Specifically, the MLP network model includes an input layer (i.e., input in fig. 5), three hidden layers (i.e., lay1, lay2 and lay3 in fig. 5) and an output layer (i.e., output in fig. 5) connected in sequence. The input layer comprises three input branches, into which the horizontal distance, the horizontal azimuth angle and the pitch angle of a vibrating target in the target area relative to the FMCW radar are respectively input. The first two hidden layers (i.e., lay1 and lay2 in fig. 5) sequentially extract the features of the horizontal distance, the horizontal azimuth angle and the pitch angle through convolution processing, obtaining a first feature corresponding to the horizontal distance, a second feature corresponding to the horizontal azimuth angle and a third feature corresponding to the pitch angle. The third hidden layer (i.e., lay3 in fig. 5) performs fusion processing on the first feature, the second feature and the third feature to obtain a fused feature. The output layer outputs the two-dimensional coordinates of the vibrating target in the two-dimensional image coordinate system according to the fused feature. For the i-th vibrating target in the target area, the mapping relation between the coordinate position radar(d_i, a_i, θ_i) in the three-dimensional radar coordinate system and the two-dimensional coordinates image(x, y) in the two-dimensional image coordinate system may be expressed as image(x, y) = F(radar(d_i, a_i, θ_i)).
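As a rough illustration of calibrating such a radar-to-image mapping, the sketch below trains a small network with plain SGD on synthetic (d, a, θ) → (x, y) samples. It is a simplification under stated assumptions: the per-branch convolutional layers of the patent's model are replaced by dense layers, and the mapping used to generate the samples is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic calibration samples: radar (d, a, theta) mapped to image-grid
# coordinates (x, y); this generating function is a made-up stand-in for
# real samples collected with a vibration source
d = rng.uniform(0.5, 3.0, 500)
a = rng.uniform(-0.6, 0.6, 500)
th = rng.uniform(-0.6, 0.6, 500)
X = np.column_stack([d, a, th])
Y = np.column_stack([3.5 + 3.0 * np.sin(a), 3.5 + 3.0 * np.sin(th)])

# tiny fully connected network (3 -> 16 -> 16 -> 2) trained with plain SGD
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 16)); b2 = np.zeros(16)
W3 = rng.normal(0, 0.5, (16, 2)); b3 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    pred = h2 @ W3 + b3
    err = (pred - Y) / len(X)            # gradient of 0.5 * mean sq. error
    dh2 = (err @ W3.T) * (1 - h2 ** 2)   # backprop through tanh
    dh1 = (dh2 @ W2.T) * (1 - h1 ** 2)
    W3 -= lr * (h2.T @ err); b3 -= lr * err.sum(0)
    W2 -= lr * (h1.T @ dh2); b2 -= lr * dh2.sum(0)
    W1 -= lr * (X.T @ dh1);  b1 -= lr * dh1.sum(0)

h1 = np.tanh(X @ W1 + b1)
h2 = np.tanh(h1 @ W2 + b2)
mse = float(np.mean((h2 @ W3 + b3 - Y) ** 2))
```

After training, the network reproduces the synthetic projection with a small mean squared error over the 7 × 7 image grid.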
In practical use, the multilayer perception network model obtained by the above calibration can map the coordinate position of a vibrating target in the three-dimensional radar coordinate system to the two-dimensional coordinates of the same vibrating target in the two-dimensional image coordinate system, so that the motion information of the vibrating target in the image acquired by the camera can be judged in the subsequent steps, thereby helping to exclude targets with violent motion. Exemplarily, a description is given with reference to the mapping relationship, shown in fig. 4, between the three-dimensional radar coordinate system (range-azimuth-pitch) corresponding to the FMCW radar and the two-dimensional image coordinate system corresponding to the image acquired by the camera. The horizontal distance, horizontal azimuth angle and pitch angle of a vibrating target relative to the FMCW radar can be detected by the FMCW radar, thereby determining the coordinate position of the vibrating target in the three-dimensional radar coordinate system, i.e., the black region in (b) of fig. 4. The image of the target area captured by the camera is uniformly divided into 7 × 7 grids; if the vibrating target in the target area displayed in the image is the chest cavity of a child, the two-dimensional coordinates of the vibrating target in the image captured by the camera, i.e., the black area in (a) of fig. 4, can be determined based on the above mapping relationship.
S203, screening the target to be measured from the M vibration targets according to the first motion value of each vibration target in the M vibration targets detected by the FMCW radar and the second motion value of each vibration target in the image.
In a possible implementation manner, for each of the M vibrating targets, the image is processed by the trained first neural network model to determine a second motion value of the vibrating target in the image, and a first motion value of the vibrating target is determined from the echo signal of the vibrating target detected by the FMCW radar. If the first motion value is smaller than a preset first motion threshold and the second motion value is smaller than a preset second motion threshold, the vibrating target is determined to be a target to be measured.
In the embodiment of the application, the camera can acquire images of two different types, namely visible light and infrared light, according to the brightness of the target area. Different types of images can be processed by different neural network models, and the same neural network model can also be used for processing different types of images, so that the second motion value of each vibration target in the images is determined.
In one embodiment, assuming that the first neural network model provided herein can process different types of images, the images can be processed through the trained first neural network model to determine a second motion value of each vibration target in the images, and the motion value is used to indicate the intensity of motion of the vibration target. Specifically, the trained detection model may be used to determine the area in the image where each vibration object is located, the area corresponding to the vibration object may be input into the trained optical flow network model for processing, and the second motion value of each vibration object may be determined. The first neural network model includes a detection model and an optical flow network model.
In one example, the region in which the vibration target is located in the image may be determined based on a method of human pose detection. Thus, the detection model may be a pose detection model. The method for determining the area of the vibration target in the image by using the trained detection model comprises the following steps: and determining the region of the chest cavity of the target in the image by using the trained posture detection model. The vibration target may be a chest cavity of a human body.
By way of example and not limitation, the pose detection model is structured as shown in FIG. 3. The gesture detection model comprises an input module, at least two feature extraction modules and an output module which are sequentially connected. Specifically, the input modules include a convolutional layer and an active layer, i.e., the Conv + ReLU module in fig. 3. The feature extraction module includes a convolution layer, a BN (batch norm) layer, and an active layer, which are connected in sequence, i.e., the Conv + BN + ReLU module in fig. 3. The output module comprises a convolutional layer, i.e. the Conv module in fig. 3. As shown in fig. 3, the image with the baby is input into the trained posture detection model for processing, and an output map marked with the position of the chest of the baby (i.e. the area in the black frame in the output map) can be obtained, so as to determine the area of the vibration target in the image in the target area.
In an embodiment, if the image includes a plurality of vibration targets, a plurality of regions corresponding to the plurality of vibration targets in the image may be determined by using the method for acquiring the region where the vibration target is located, provided by the above embodiment. And processing the plurality of areas by using the trained optical flow network model to obtain motion estimation parameters, wherein the motion estimation parameters comprise a second motion value of the vibration target corresponding to each area, and the second motion value is used for indicating the motion intensity of the corresponding vibration target.
In an alternative implementation, processing the plurality of regions using the trained optical flow network model includes: and preprocessing the plurality of areas, and inputting the plurality of preprocessed areas into the trained optical flow network model for processing. The pretreatment method comprises the following steps: when the acquired size of each region is inconsistent, the size of each region can be adjusted to a preset fixed size by an upsampling method.
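The fixed-size preprocessing step might look like the following nearest-neighbour upsampling sketch; the output size of 64 × 64 is an arbitrary choice, and a real pipeline would likely use a library resize:

```python
import numpy as np

def resize_nearest(region, out_h, out_w):
    # nearest-neighbour resampling of a single-channel region to a fixed
    # size, a minimal stand-in for the upsampling preprocessing step
    in_h, in_w = region.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return region[rows[:, None], cols]

patch = np.arange(12, dtype=float).reshape(3, 4)   # a small detected region
fixed = resize_nearest(patch, 64, 64)              # preset fixed size
```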
Illustratively, the optical flow network model may be a UNet network.
In one embodiment, the method for acquiring the echo signal corresponding to each vibration target in the target area through the FMCW radar comprises the following steps: and performing signal analysis on the vibration signals according to the position coordinates of each vibration target in the three-dimensional radar coordinate system to obtain the echo signals of each vibration target.
Specifically, each vibrating target is located in a different area of the three-dimensional space, and the coordinate position of each vibrating target in the target area in the radar three-dimensional coordinate system can be determined by the method provided in step S201 above. Suppose the FMCW radar is provided with N receiving antennas and a plurality of transmitting antennas, N ≥ 2, where the n-th receiving antenna (1 ≤ n ≤ N) receives the vibration signals reflected by a plurality of vibrating targets based on the chirp signals transmitted by the transmitting antennas. The phase signals of the range bin corresponding to the distance d_i form a vector x_n, and the matrix corresponding to the N receiving antennas can then be expressed as X = {x_1, x_2, ..., x_N}^T.
Further, the echo signal of each vibrating target can be separated from the matrix X based on the beam weight corresponding to the azimuth angle a_i and the beam weight corresponding to the pitch angle θ_i. The beam weight corresponding to the azimuth angle a_i is

w_h(a_i) = [1, e^(−j2πD·sin(a_i)/λ), ..., e^(−j2π(N−1)D·sin(a_i)/λ)]^T,

where D is the spacing of the receiving antennas in the horizontal direction, i denotes the i-th vibrating target, and h denotes the horizontal direction. Similarly, the beam weight corresponding to the pitch angle θ_i is

w_v(θ_i) = [1, e^(−j2πD′·sin(θ_i)/λ), ..., e^(−j2π(N−1)D′·sin(θ_i)/λ)]^T,

where v denotes the vertical direction and D′ is the spacing of the receiving antennas in the vertical direction. Finally, the echo signal of the i-th vibrating target can be expressed as Signal(i) = w_h(a_i) · w_v(θ_i) · X. This echo signal can be used for the subsequent separation of the vital sign signals and the measurement of the vital signs; the echo signal of a vibrating target represents the corresponding first motion value of the vibrating target in the radar domain.
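The separation of one target's echo by steering weights can be illustrated as follows. The conventional-beamformer weight form and all array parameters (wavelength, spacing, antenna count, target angles, vibration waveforms) are assumptions for the sketch:

```python
import numpy as np

lam = 0.0039          # assumed wavelength (~77 GHz)
d = lam / 2           # horizontal antenna spacing D
N = 8                 # number of receiving antennas
t = np.arange(400)    # slow-time sample index

def response(angle):
    # array response vector of a uniform linear array for a given azimuth
    return np.exp(1j * 2 * np.pi * np.arange(N) * d * np.sin(angle) / lam)

def beam_weights(angle):
    # conventional beamformer weights: normalized conjugate steering vector
    return response(angle).conj() / N

# matrix X: rows are antennas, columns are slow-time samples; two vibrating
# targets at different azimuths are superimposed
a1, a2 = np.deg2rad(-25.0), np.deg2rad(30.0)
s1 = np.sin(2 * np.pi * 0.02 * t)      # vibration waveform of target 1
s2 = np.sin(2 * np.pi * 0.07 * t)      # vibration waveform of target 2
X = np.outer(response(a1), s1) + np.outer(response(a2), s2)

echo1 = beam_weights(a1) @ X           # beam steered at target 1
```

Steering the beam at a1 recovers target 1's waveform with only a small leakage of target 2's contribution.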
The second motion value of each vibrating target in the image is determined based on the method provided in the above embodiment, and the first motion value of each vibrating target is determined based on its echo signal. For the same vibrating target, if the first motion value is smaller than the preset first motion threshold and the second motion value is smaller than the preset second motion threshold, the vibrating target may be determined to be a target to be measured in a resting state; the number of targets to be measured may be one or more.
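The two-threshold screening rule reduces to a simple filter, sketched here with made-up motion values and thresholds:

```python
def screen_targets(radar_motion, image_motion, thr_radar, thr_image):
    # keep only vibrating targets whose motion is below both thresholds,
    # i.e. targets judged to be at rest in both the radar and image domains
    return [i for i, (m1, m2) in enumerate(zip(radar_motion, image_motion))
            if m1 < thr_radar and m2 < thr_image]

# three vibrating targets: only target 0 is at rest in both domains
first = [0.2, 3.1, 0.4]     # first motion values (radar domain)
second = [0.1, 0.2, 2.5]    # second motion values (image domain)
kept = screen_targets(first, second, thr_radar=1.0, thr_image=1.0)
```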
S204, acquiring an echo signal of the target to be detected through the FMCW radar, separating a respiration signal and a heartbeat signal in the echo signal, determining the respiration frequency of the target to be detected according to the respiration signal, and determining the heartbeat frequency of the target to be detected according to the heartbeat signal.
After at least one target to be measured in a resting state is screened out from the M vibrating targets in the target area based on the step S203, the position coordinates of each target to be measured in the three-dimensional radar coordinate system can be determined, and then the echo signal corresponding to each target to be measured is determined. And the position coordinates of each target to be measured in the three-dimensional radar coordinate system comprise the horizontal distance, the horizontal azimuth angle and the pitch angle of the target to be measured relative to the FMCW radar.
The echo signal of the target to be measured is used for determining the vital sign parameters of the target to be measured. In one possible implementation, the vital sign parameters of the target to be measured include the respiratory frequency. Correspondingly, for the echo signal of each target to be measured, the method for separating the respiration signal from the echo signal comprises the following steps: filtering the echo signal by a band-pass filter to obtain an initial respiration signal; determining the in-phase component and the quadrature component of the initial respiration signal; and inputting the initial respiration signal, the in-phase component and the quadrature component into a trained second neural network model for processing to obtain the respiration signal.
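A crude stand-in for the band-pass step: the patent does not specify the filter design, so this sketch masks an FFT spectrum over an assumed 0.1–0.6 Hz respiration band:

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    # FFT-domain band-pass: zero every bin outside [low, high] Hz and
    # transform back; a crude stand-in for a proper band-pass filter
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 20.0
t = np.arange(0, 30, 1 / fs)
# echo phase: respiration at 0.3 Hz plus a faster heartbeat-like component
echo = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 1.2 * t)
breath0 = bandpass_fft(echo, fs, low=0.1, high=0.6)
```

On this synthetic echo the mask removes the 1.2 Hz component and leaves the 0.3 Hz respiration component intact.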
It will be appreciated that the I (in-phase) vector, consisting of the in-phase components of the initial respiration signal, can be expressed as

I_k(i) = cos(φ_k(i)),

and the Q (quadrature-phase) vector, consisting of the quadrature components of the initial respiration signal, can be expressed as

Q_k(i) = sin(φ_k(i)),

where the phase of the k-th harmonic is

φ_k(i) = k · ω · i · Δ_t.

The sampling interval is Δ_t = 1.0/F_s, where F_s denotes the sampling frequency; the angular velocity is ω = 2πf_b, where the main frequency value f_b of the fundamental wave is obtained by performing a fast Fourier transform on the initial respiration signal.
In one embodiment, the respiration signal may be expressed as the following equation:

s_breath(i) = Σ_{k=1}^{K} [ a_k(i) · I_k(i) + b_k(i) · Q_k(i) ],

where K is the number of harmonics retained, s_breath(i) represents the respiration signal at the i-th instant, a_k(i) represents the first parameter of the k-th in-phase harmonic component corresponding to the i-th instant, and b_k(i) represents the second parameter of the k-th quadrature harmonic component corresponding to the i-th instant.
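The harmonic model above can be synthesized directly. In this sketch the sampling rate, fundamental frequency, harmonic count K and the (here constant) coefficients a_k, b_k are all illustrative assumptions:

```python
import numpy as np

fs = 20.0                  # F_s, sampling frequency (assumed)
dt = 1.0 / fs              # delta_t
f_b = 0.3                  # fundamental respiration frequency (assumed)
w = 2 * np.pi * f_b        # angular velocity
n = int(10 * fs)           # ten seconds of samples
i = np.arange(n)
K = 3                      # number of harmonics kept (assumed)

# coefficients a_k(i), b_k(i); constants here for brevity, though the model
# allows them to vary with time
a_k = np.array([1.0, 0.3, 0.1])
b_k = np.array([0.5, 0.2, 0.05])
k = np.arange(1, K + 1)
phase = np.outer(k, w * dt * i)        # phi_k(i) = k * w * i * delta_t
s_breath = a_k @ np.cos(phase) + b_k @ np.sin(phase)
```

The dominant spectral line of the synthesized signal sits at the fundamental f_b, with weaker lines at its harmonics.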
Referring to fig. 6, a second neural network model is provided in the embodiments of the present application. The second neural network model comprises a gated recurrent module and a fusion module. The value s_breath(i-2) corresponding to the (i-2)-th instant, the value s_breath(i-1) corresponding to the (i-1)-th instant and the value s_breath(i) corresponding to the i-th instant of the initial respiration signal s_breath are input into the gated recurrent module for processing to obtain the first parameter and the second parameter corresponding to the i-th instant, where the initial respiration signal comprises n instants, i ranges over [1, n], and n is a positive integer greater than or equal to 2. The first parameters at the n instants, the second parameters at the n instants, the in-phase component and the quadrature component are fused by the fusion module to obtain the respiration signal breath.
The gated recurrent module comprises m hidden layer units, m ≥ 2. The l-th hidden layer unit is s_l(i) = f(W · s_l(i-1) + U · s_{l-1}(i)), 1 ≤ l ≤ m, where W is the feature transfer parameter from the (i-1)-th instant to the i-th instant, and U is the transfer parameter from the (l-1)-th hidden layer to the l-th hidden layer at the i-th instant. Only two hidden layer units are shown in fig. 6.
The method for training the second neural network model comprises the following steps: iteratively training the initial second neural network model on the training set until the loss function reaches a stable state. The training set includes respiration signal samples together with the in-phase component and quadrature component of each respiration signal sample. The loss function is

L = (1/n) · Σ_{i=1}^{n} ( s_breath(i) - ŝ_breath(i) )²,

where s_breath(i) is the value at the i-th instant in the respiration signal sample, and ŝ_breath(i) is the estimated respiration signal at the i-th instant obtained by inputting the respiration signal sample, the in-phase component sample and the quadrature component sample at the i-th instant into the initial second neural network model. The parameter update learning rate may be set to λ_k = (1/k) · λ_0, and the initial second neural network model is iteratively trained by the gradient descent method to obtain the second neural network model.
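The decaying learning rate λ_k = (1/k) · λ_0 can be exercised on a toy convex loss standing in for the network's squared loss; the loss function and starting point are made up for the sketch:

```python
def gradient_descent_decayed(grad, x0, lam0, steps):
    # gradient descent with the decaying learning rate
    # lambda_k = (1 / k) * lambda_0 described for the second model
    x = x0
    for k in range(1, steps + 1):
        x -= (lam0 / k) * grad(x)
    return x

# minimize (x - 4)^2; its gradient is 2 * (x - 4)
x_min = gradient_descent_decayed(lambda x: 2.0 * (x - 4.0), 0.0, 0.3, 2000)
```

The 1/k decay satisfies the classic step-size conditions (steps sum to infinity while their squares stay finite), so the iterate settles at the minimizer.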
After the respiratory signal in the echo signal of each target to be measured is determined based on the method provided in the above embodiment, the respiratory frequency of the corresponding target to be measured may be determined based on the respiratory signal.
In one embodiment, the method for determining the respiratory frequency of the object to be measured according to the respiratory signal comprises the following steps: segmenting the respiratory signal based on a preset time length to obtain a plurality of subsequence signals; performing fast Fourier transform on the plurality of subsequence signals to obtain a two-dimensional spectrogram; and determining a main frequency value of the two-dimensional frequency spectrogram by using the trained third neural network model, wherein the main frequency value is the respiratory frequency of the target to be detected.
Illustratively, assume that the total length of the respiration signal is H. Starting from the head of the respiration signal, a plurality of subsequence signals of length h (h << H) are extracted based on a sliding window of step size s and size h (i.e., the preset duration). A fast Fourier transform is performed on each subsequence signal to obtain a frequency spectrum, and the frequency spectra corresponding to all subsequence signals are arranged in sequence to form a two-dimensional spectrogram as shown in fig. 7. The third neural network model processes the two-dimensional spectrogram to determine the main frequency value corresponding to the highlighted area (i.e., the white area) in the two-dimensional spectrogram, and thus the respiratory frequency of the target to be measured. The third neural network model may be any image convolutional neural network model.
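The sliding-window segmentation and per-window FFT can be sketched as follows; the window and step sizes are assumptions:

```python
import numpy as np

def sliding_spectrogram(signal, win, step):
    # slide a window of length `win` (step `step`) over the signal and FFT
    # each slice; stacking the spectra row by row yields the spectrogram
    starts = range(0, len(signal) - win + 1, step)
    frames = np.array([signal[s:s + win] for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 20.0
t = np.arange(0, 60, 1 / fs)               # one minute of respiration signal
breath = np.sin(2 * np.pi * 0.25 * t)      # 0.25 Hz = 15 breaths per minute
win = int(16 * fs)                         # h: 16-second window (assumed)
spec = sliding_spectrogram(breath, win, step=int(2 * fs))
freqs = np.fft.rfftfreq(win, 1 / fs)
main_per_row = freqs[np.argmax(spec, axis=1)]   # dominant frequency per row
```

Every row of the spectrogram peaks at the same bin, which is the bright column a downstream network would read off as the respiratory frequency.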
Illustratively, the third neural network model may be a VGG (Visual Geometry Group) network model or a ResNet (deep residual network) model.
In another possible implementation manner, the vital sign parameters of the target to be measured further include the heartbeat frequency. The difference signal between the echo signal of the target to be measured and the respiration signal is the heartbeat signal. The method for determining the heartbeat frequency of the target to be measured based on the heartbeat signal is the same as the method, provided in the above embodiment, for determining the respiratory frequency based on the respiration signal, and specifically includes: segmenting the heartbeat signal based on the preset duration to obtain a plurality of subsequence signals; performing a fast Fourier transform on the plurality of subsequence signals to obtain a two-dimensional spectrogram; and determining the main frequency value of the two-dimensional spectrogram by using the trained third neural network model, the main frequency value being the heartbeat frequency of the target to be measured. For the specific implementation, reference may be made to the above detailed description of the process of determining the respiratory frequency based on the respiration signal, which is not repeated here.
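Subtracting the respiration signal from the echo to isolate the heartbeat, and reading off its dominant frequency, can be sketched as follows with fully synthetic signals:

```python
import numpy as np

fs = 20.0
t = np.arange(0, 30, 1 / fs)
breath = np.sin(2 * np.pi * 0.25 * t)                # separated respiration
echo = breath + 0.1 * np.sin(2 * np.pi * 1.3 * t)    # echo = breath + heart
heartbeat = echo - breath                            # the difference signal

freqs = np.fft.rfftfreq(len(t), 1 / fs)
heart_hz = freqs[np.argmax(np.abs(np.fft.rfft(heartbeat)))]
beats_per_minute = heart_hz * 60.0
```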
According to the vital sign measuring method provided by the embodiment of the application, vital sign parameters are detected based on two types of sensors, namely a camera and an FMCW radar. Firstly, a trained multilayer perception network model is determined through a calibration process, and the trained multilayer perception network model is used for representing the mapping relation between a three-dimensional radar coordinate system of an FMCW radar and a two-dimensional image coordinate system of an image acquired by a camera. The position of each vibration target in the target area in the three-dimensional radar coordinate system and the position of each vibration target in the two-dimensional image coordinate system can be determined by utilizing the mapping relation. And screening the target to be detected from the M vibrating targets according to the second motion value of the same vibrating target in the image domain and the first motion value of the same vibrating target in the radar domain, so as to reduce the interference of the target with larger motion amplitude on the echo signal of the target to be detected, and determine the echo signal of the target to be detected through the FMCW radar. In the process of separating the respiratory signal from the echo signal, a filter is used for extracting a respiratory initial signal, then correlation characteristics between signals at adjacent moments are fully considered, frequency values corresponding to the adjacent three moments are used as input of a second neural network model to accurately estimate the respiratory signal, and then a third neural network model is used for processing the respiratory signal to determine the respiratory frequency of the target to be detected, so that the robustness of the algorithm is improved. 
And determining a heartbeat signal through the difference value between the echo signal and the respiration signal, and further determining the heartbeat frequency of the target to be detected. The vital sign measuring method provided by the application can eliminate the interference of the interference target in the target area on the echo signal of the target to be measured, and improves the accuracy of measuring the vital sign.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Based on the same inventive concept, as shown in fig. 8, the present application provides a vital sign measuring apparatus 3, where the apparatus 3 includes:
the acquiring unit 31 is configured to acquire position coordinates of M vibration targets in a target area in a three-dimensional radar coordinate system through an FMCW radar, where M is greater than or equal to 1, and acquire an image of the target area through a camera.
And the mapping unit 32 is configured to, for each of the M vibration targets, process the position coordinates of the vibration target by using a trained multilayer sensing network model to obtain two-dimensional coordinates of the vibration target in the image, where the multilayer sensing network model is used to indicate a mapping relationship between a three-dimensional radar coordinate system and a two-dimensional image coordinate system corresponding to the image.
The screening unit 33 is configured to screen out a target to be tested from the M vibration targets according to a first motion value of each vibration target in the M vibration targets detected by the FMCW radar and a second motion value of each vibration target in the image;
the measuring unit 34 is configured to acquire an echo signal of the target to be measured by using the FMCW radar, separate a respiratory signal and a heartbeat signal from the echo signal, determine a respiratory frequency of the target to be measured according to the respiratory signal, and determine a heartbeat frequency of the target to be measured according to the heartbeat signal.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method described in the foregoing method embodiments.
The embodiment of the present application further provides a computer program product, which when running on a terminal device, enables the terminal device to implement the method described in the above method embodiment when executed.
Reference throughout this application to "one embodiment" or "some embodiments," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one of the feature. It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
In addition, in the present application, unless explicitly stated or limited otherwise, the terms "connected" and the like are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection; the terms may be directly connected or indirectly connected through an intermediate medium, and may be used for communicating between two elements or for interacting between two elements, unless otherwise specifically defined, and the specific meaning of the terms in the present application may be understood by those skilled in the art according to specific situations.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A vital sign measurement method, characterized in that it is applied to a vital sign measurement device comprising a camera and an FMCW radar, the method comprising:
receiving echoes reflected by a target area through the FMCW radar to obtain an initial signal;
removing a background signal in the initial signal to obtain a vibration signal;
determining the position coordinate of each vibrating target in M vibrating targets in the target area in a three-dimensional radar coordinate system according to the vibrating signal, and acquiring an image of the target area through the camera, wherein the position coordinate of each vibrating target in the three-dimensional radar coordinate system comprises the horizontal distance, the horizontal azimuth angle and the pitch angle of the vibrating target relative to the FMCW radar, and M is more than or equal to 1;
for each vibration target in the M vibration targets, processing the position coordinates of the vibration target by using a trained multilayer perception network model to obtain two-dimensional coordinates of the vibration target in the image, wherein the multilayer perception network model is used for indicating a mapping relation between a three-dimensional radar coordinate system and a two-dimensional image coordinate system corresponding to the image;
screening a target to be measured from the M vibrating targets according to a first motion value of each vibrating target in the M vibrating targets detected by the FMCW radar and a second motion value of each vibrating target in the image;
acquiring an echo signal of the target to be detected through the FMCW radar, separating a breathing signal and a heartbeat signal in the echo signal, determining the breathing frequency of the target to be detected according to the breathing signal, and determining the heartbeat frequency of the target to be detected according to the heartbeat signal.
2. The method of claim 1, wherein acquiring the echo signal of the target to be measured through the FMCW radar comprises:
performing signal analysis on the vibration signal according to the position coordinates of the target to be measured in the three-dimensional radar coordinate system to obtain the echo signal of the target to be measured.
3. The method of claim 2, wherein the multilayer perceptron network model comprises an input layer, a first hidden layer, a second hidden layer, a third hidden layer and an output layer that are connected in sequence;
and wherein processing the position coordinates of the vibrating target by using the trained multilayer perceptron network model to obtain the two-dimensional coordinates of the vibrating target in the image comprises:
inputting the horizontal distance, the horizontal azimuth angle and the pitch angle of the vibrating target relative to the FMCW radar into the input layer;
extracting features from the horizontal distance, the horizontal azimuth angle and the pitch angle respectively through the first hidden layer and the second hidden layer, to obtain a first feature corresponding to the horizontal distance, a second feature corresponding to the horizontal azimuth angle and a third feature corresponding to the pitch angle;
performing fusion processing on the first feature, the second feature and the third feature through the third hidden layer to obtain a fusion feature;
and processing the fusion feature through the output layer to obtain the two-dimensional coordinates of the vibrating target in the image.
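The branch-then-fuse topology of claim 3 can be sketched with untrained random weights. This is only a shape-level illustration: the layer widths are assumptions, and a real model would be trained on paired radar coordinates and pixel coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# One branch per coordinate through the first and second hidden layers
# (widths 8 are illustrative; the weights are random, i.e. untrained).
W1 = [rng.standard_normal((1, 8)) for _ in range(3)]
W2 = [rng.standard_normal((8, 8)) for _ in range(3)]
W3 = rng.standard_normal((24, 16))   # third hidden layer: fuses the three features
W4 = rng.standard_normal((16, 2))    # output layer: (u, v) pixel coordinate

def radar_to_pixel(distance, azimuth, pitch):
    feats = [relu(relu(np.array([[c]]) @ w1) @ w2)
             for c, w1, w2 in zip((distance, azimuth, pitch), W1, W2)]
    fused = relu(np.concatenate(feats, axis=1) @ W3)   # fusion feature
    return (fused @ W4).ravel()                        # 2-D image coordinate

print(radar_to_pixel(2.5, 0.1, -0.05).shape)  # → (2,)
```

Learning this mapping end-to-end stands in for an explicit extrinsic/intrinsic calibration between the radar and the camera.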
4. The method of claim 1, wherein screening out the target to be measured from the M vibrating targets according to the first motion value of each of the M vibrating targets detected by the FMCW radar and the second motion value of each vibrating target in the image comprises:
for each of the M vibrating targets: processing the image through a trained first neural network model to determine a second motion value of the vibrating target in the image; determining a first motion value of the vibrating target from an echo signal of the vibrating target detected by the FMCW radar; and determining the vibrating target as the target to be measured if the first motion value is smaller than a preset first motion threshold and the second motion value is smaller than a preset second motion threshold.
5. The method of claim 4, wherein processing the image through the trained first neural network model to determine the second motion value of the vibrating target comprises:
determining a region of the vibrating target in the image by using a trained detection model;
and inputting the region into a trained optical flow network model for processing to determine the second motion value of the vibrating target.
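As a crude stand-in for the claim-5 pipeline, the trained detection model and optical-flow network can be replaced by a fixed region of interest and a mean absolute frame difference. This only illustrates the data flow (detect a region, then score motion inside it); the function and box format are hypothetical.

```python
import numpy as np

def region_motion_value(frame_prev, frame_curr, box):
    """box = (row0, row1, col0, col1): region where the detector located the target."""
    r0, r1, c0, c1 = box
    a = frame_prev[r0:r1, c0:c1].astype(float)
    b = frame_curr[r0:r1, c0:c1].astype(float)
    return float(np.mean(np.abs(b - a)))   # crude per-region motion score

still = np.zeros((64, 64))
moved = still.copy()
moved[10:20, 10:20] = 1.0                  # brightness change inside the region

print(region_motion_value(still, moved, (0, 32, 0, 32)) > 0)  # → True
```

A real optical-flow model would output a dense displacement field; its mean magnitude over the detected region would play the same role as this scalar.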
6. The method of claim 1, wherein separating the respiration signal and the heartbeat signal from the echo signal comprises:
filtering the echo signal of the target to be measured through a band-pass filter to obtain a respiration initial signal;
determining an in-phase component and a quadrature component of the respiration initiation signal;
inputting the respiration initial signal, the in-phase component and the quadrature component into a trained second neural network model for processing to obtain the respiration signal;
and determining a difference signal between the echo signal and the respiration signal as the heartbeat signal.
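The separation in claim 6 can be imitated on synthetic data: an FFT-domain band-pass stands in for the band-pass filter, an analytic-signal (Hilbert-transform) construction yields the in-phase and quadrature components, and the heartbeat is taken as the difference signal. The trained second neural network model is omitted here, and all rates and amplitudes are illustrative.

```python
import numpy as np

fs = 20.0                                  # sample rate in Hz (illustrative)
t = np.arange(0, 30, 1 / fs)
# Synthetic chest echo: 0.25 Hz "respiration" plus a weaker 1.2 Hz "heartbeat".
echo = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)

def bandpass_fft(x, fs, lo, hi):
    """Crude FFT-domain band-pass: zero every bin outside [lo, hi] Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

resp = bandpass_fft(echo, fs, 0.1, 0.5)    # respiration initial signal
heart = echo - resp                        # claim 6: difference signal = heartbeat

# In-phase / quadrature components via the analytic signal.
n = len(resp)
h = np.zeros(n); h[0] = 1.0; h[1:n // 2] = 2.0; h[n // 2] = 1.0
analytic = np.fft.ifft(np.fft.fft(resp) * h)
i_comp, q_comp = analytic.real, analytic.imag
```

After subtraction, the dominant component left in `heart` sits at the heartbeat frequency, which is why the difference signal is usable for heart-rate estimation.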
7. The method of claim 6, wherein the second neural network model comprises a gated recurrent module and a fusion module;
and wherein inputting the respiration initial signal, the in-phase component and the quadrature component into the trained second neural network model for processing to obtain the respiration signal comprises:
inputting a frequency corresponding to an (i-2)-th moment, a frequency corresponding to an (i-1)-th moment and a frequency corresponding to an i-th moment of the respiration initial signal into the gated recurrent module for processing, to obtain a first parameter and a second parameter corresponding to the i-th moment, wherein the respiration initial signal comprises n moments, i is greater than or equal to 1 and less than or equal to n, and n is a positive integer greater than 2;
and performing fusion processing on the first parameters at the n moments, the second parameters at the n moments, the in-phase component and the quadrature component through the fusion module to obtain the respiration signal.
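One possible reading of claim 7 (an interpretation, not the patent's disclosed model) is that each three-sample window feeds a GRU-style gated step whose gate activation and candidate state serve as the "first" and "second" parameters for that moment. A toy, untrained sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random (untrained) weights for a 3-sample input window; sizes are illustrative.
Wz = rng.standard_normal((3,))
Wh = rng.standard_normal((3,))

def gated_step(window):
    """window: samples at moments i-2, i-1, i. Returns (first_param, second_param)."""
    x = np.asarray(window, dtype=float)
    z = sigmoid(x @ Wz)          # gate activation  -> "first parameter"
    h = np.tanh(x @ Wh)          # candidate state  -> "second parameter"
    return float(z), float(h)

z, h = gated_step([0.2, 0.3, 0.1])
print(0.0 < z < 1.0, -1.0 < h < 1.0)  # → True True
```

The fusion module of the claim would then combine these per-moment parameters with the in-phase and quadrature components to reconstruct the respiration waveform.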
8. The method of any one of claims 1 to 7, wherein determining the respiration frequency of the target to be measured according to the respiration signal comprises:
segmenting the respiration signal based on a preset duration to obtain a plurality of subsequence signals;
performing a fast Fourier transform on the plurality of subsequence signals to obtain a two-dimensional spectrogram;
and determining a main frequency value of the two-dimensional spectrogram by using a trained third neural network model, wherein the main frequency value is the respiration frequency of the target to be measured.
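The segment-then-FFT procedure of claim 8 can be sketched as follows, with simple spectral peak-picking standing in for the claim's trained third neural network model (segment length and sample rate are illustrative):

```python
import numpy as np

fs = 20.0                                  # sample rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.3 * t)         # synthetic 0.3 Hz signal (18 breaths/min)

def breathing_rate(signal, fs, seg_seconds=10.0):
    """Segment, FFT each segment into a 2-D spectrogram, return its main frequency."""
    seg = int(seg_seconds * fs)
    n_seg = len(signal) // seg
    segments = np.reshape(signal[: n_seg * seg], (n_seg, seg))
    spectrogram = np.abs(np.fft.rfft(segments, axis=1))   # rows: segments, cols: bins
    freqs = np.fft.rfftfreq(seg, 1 / fs)
    return freqs[np.argmax(spectrogram.mean(axis=0))]     # peak-picking stand-in

print(breathing_rate(resp, fs))  # → 0.3
```

Averaging the spectra across segments before picking the peak suppresses transient motion artifacts, which is presumably what motivates using a 2-D spectrogram rather than a single FFT.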
9. A vital sign measurement device, comprising a camera, an FMCW radar, a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the camera and the FMCW radar are each connected to the processor, and the processor implements the method according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202210282020.7A 2022-03-22 2022-03-22 Vital sign measuring method, equipment and storage medium Active CN114767074B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210282020.7A CN114767074B (en) 2022-03-22 2022-03-22 Vital sign measuring method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114767074A CN114767074A (en) 2022-07-22
CN114767074B true CN114767074B (en) 2023-03-03

Family

ID=82424760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210282020.7A Active CN114767074B (en) 2022-03-22 2022-03-22 Vital sign measuring method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114767074B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110200607A (en) * 2019-05-14 2019-09-06 南京理工大学 Method for eliminating body motion influence in vital sign detection based on optical flow method and LMS algorithm
CN112684424A (en) * 2020-12-30 2021-04-20 同济大学 Automatic calibration method for millimeter wave radar and camera
CN113017590A (en) * 2021-02-26 2021-06-25 清华大学 Physiological data monitoring method and device, computer equipment and storage medium
CN114041767A (en) * 2021-10-11 2022-02-15 宁波春建电子科技有限公司 Heart rate detection method based on depth camera and millimeter wave radar


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant