CN113197558B - Heart rate and respiratory rate detection method and system and computer storage medium - Google Patents
- Publication number: CN113197558B
- Application number: CN202110325026.3A
- Authority: CN (China)
- Prior art keywords: heart rate, data sequence, grid, rate, data
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for; Heart catheters for measuring blood pressure
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The invention relates to the technical field of image processing and discloses a heart rate and respiration rate detection method, a detection system, and a computer storage medium, all aimed at improving robustness. The method extracts an image data set and trains a model on four regions of interest: the forehead, the left cheek, the right cheek, and the side face. First, each of the four regions of interest is analyzed for movement, so that image noise caused by facial expression, motion, posture, and similar factors is eliminated from the set. Second, because the illumination captured in the video is non-uniform but varies only at a global, low frequency, the illumination within a local area can be treated as a constant; removing the mean of the grid data therefore also removes this local constant, so the signal data sequences obtained after de-meaning are less affected by uneven illumination. Third, the highly correlated data sequences screened from all regions of interest are collected into an effective data sequence set, which further ensures the multidimensionality and accuracy of the data.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a heart rate and respiration rate detection method and system and a computer storage medium.
Background
Heart rate and respiratory rate are important physiological parameters of the human body and are closely related to cardiovascular health. Sudden onset and high mortality are key features of cardiovascular disease. Accurate long-term monitoring of changes in heart rate and respiratory rate can provide effective early warning of sudden cardiovascular events, enabling timely intervention and treatment and effectively reducing mortality. Non-contact, remote detection of heart rate and respiratory rate is a current research hotspot. It is mainly realized by photoplethysmography based on video images and offers remote real-time detection, no need to wear a sensing device, little influence on the subject, and a wide range of applications.
Using the principle of photoplethysmography, heart rate and respiratory rate can be detected remotely with good accuracy. However, the method is sensitive to the test environment and susceptible to interference. In particular, detection typically relies on video images of facial skin: the face must be kept still, without large changes in expression or head movement. The illumination requirements are also demanding, since uneven lighting can introduce interference signals. In practice, a long acquisition time is needed to collect enough image frames, during which expression changes, mouth opening, blinking, and similar actions occur frequently, and the lighting may change at any moment. These strong interference factors cause considerable measurement error and make it difficult to meet the high-robustness requirement of remote real-time physiological measurement.
In summary, existing remote detection methods lack robustness and adaptability: accurate results are obtained only under relatively ideal conditions, and the interfering factors present in actual detection reduce the methods' practical value.
Disclosure of Invention
The invention aims to disclose a heart rate and respiration rate detection method, a heart rate and respiration rate detection system and a computer storage medium, so as to improve robustness.
In order to achieve the above object, the present invention discloses a heart rate and respiration rate detecting method, which comprises:
acquiring m frames of face video images, and measuring heart rate and respiration rate values synchronous with the face video images by using a measuring device;
uniformly dividing the collected video images into n subsets in time order; extracting four regions of interest (forehead, left cheek, right cheek, and side face) from each frame image in each subset using square selection boxes; comparing the upper-left coordinates, heights, and widths of the selection boxes corresponding to the same region of interest across the frames of the same subset to judge whether any selection box has shifted, and, if so, deleting all image data extracted from the shifted region of interest in that subset; dividing the image data retained within each unshifted square selection box into p × q grids, each grid containing k × k pixels;
calculating the gray mean of each grid of the same region of interest in the same subset, subtracting the corresponding gray mean from each pixel of the corresponding grid, then calculating the mean of the de-meaned grid pixel data, and combining the means for the same region of interest and the same grid position in the same subset into the signal data sequence for that grid position;
calculating the correlation among the signal data sequences corresponding to the p × q grid positions of the same region of interest of the same subset, to obtain H highly correlated grid data sequences;
collecting the data sequences with high correlation screened out from all the regions of interest into an effective data sequence set; establishing a prediction model according to the effective data sequence set and the actually measured synchronous heart rate and respiration rate values;
and, for a newly acquired face video image, extracting an effective data sequence set according to the above steps and inputting it into the prediction model to obtain the corresponding heart rate and respiration rate prediction values.
Optionally, the position of the square selection box is determined with a localization neural network such as Faster R-CNN.
Optionally, the method of obtaining the H highly correlated grid data sequences includes: calculating pairwise correlation coefficients of the data sequences of the p × q grids to obtain a correlation coefficient matrix; comparing each element of the correlation coefficient matrix with a set threshold and setting correlation coefficients smaller than the threshold to 0, so that the matrix is divided into at least one nonzero region; and extracting the H grids corresponding to the largest nonzero region of the correlation coefficient matrix as the set of highly correlated data sequences.
Optionally, the prediction model of the present invention employs a convolutional network architecture such as a residual network (ResNet), GoogLeNet, or DenseNet, or a recurrent neural network such as an LSTM.
In order to achieve the above object, the present invention further discloses a heart rate and respiration rate detection system, which includes a memory, a processor and a computer program stored in the memory and operable on the processor, wherein the processor implements the steps of the corresponding method when executing the computer program.
To achieve the above object, the present invention also discloses a computer storage medium having a computer program stored thereon, which when executed by a processor, implements the steps corresponding to the above method.
The invention has the following beneficial effects:
According to the method, image data set extraction and model training are performed on four regions: the forehead, the left cheek, the right cheek, and the side face. To ensure robust image data processing, first, each of the 4 regions of interest is analyzed for movement, eliminating from the set the image noise caused by facial expression, motion, posture, and similar factors; second, because the illumination captured in the video is non-uniform but varies only at a global, low frequency, the illumination within a local area can be treated as a constant, so removing the mean of the grid data also removes this local constant, and the signal data sequences obtained after de-meaning are less affected by uneven illumination; third, the highly correlated data sequences screened from all regions of interest are collected into an effective data sequence set, further ensuring the multidimensionality and accuracy of the data. Overall robustness is thus ensured through multiple rounds of screening and the corresponding separation and combination processing.
The present invention will be described in further detail below with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a main flow chart of a high-robustness remote heart rate and respiration rate detection method in an embodiment of the present invention;
FIG. 2 is a region of interest for acquiring heart rate and respiration rate;
FIG. 3 is a schematic diagram of meshing;
FIG. 4 is a schematic diagram of a shift analysis of a square selection box;
fig. 5 is a schematic diagram of heart rate and respiratory rate prediction by a recurrent neural network and final result averaging.
Detailed Description
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
Example 1
The embodiment discloses a heart rate and respiration rate detection method, as shown in fig. 1 to 5, including the following steps:
(1) Video and heart/respiratory rate data acquisition. A face video is recorded with a camera, and a video data set consisting of multiple frame images is obtained from the video stream. With a recording frame rate of x frames/second and a recording duration of t, a total of m = x × t frame images is obtained. While the video is being collected, heart rate and respiration rate values synchronized with the video are acquired in real time using a heart rate and respiration rate measuring device.
(2) Division of the video images into subsets. The video acquisition time is long, so changes in face position and illumination conditions are hard to avoid. The m frame images are divided in time order into n subsets, denoted D_1, D_2, D_3, ..., D_n, each containing m/n frame images. The i-th frame image in the j-th subset is denoted I_ji, with i = 1, 2, ..., m/n. Subsequent analysis is carried out on the images within each subset.
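The uniform subset division in step (2) can be sketched as follows; the frame rate, duration, and subset count are illustrative values, not taken from the patent.

```python
def split_into_subsets(frames, n):
    """Split m frames evenly into n time-ordered subsets D_1..D_n.

    Assumes m is divisible by n, as the patent's uniform division implies.
    """
    m = len(frames)
    assert m % n == 0, "m must be divisible by n for a uniform split"
    size = m // n
    return [frames[j * size:(j + 1) * size] for j in range(n)]

# Example: x = 30 frames/s recorded for t = 4 s gives m = 120 frames,
# divided here into n = 6 subsets of 20 frames each.
frames = list(range(30 * 4))
subsets = split_into_subsets(frames, 6)
```

Each subset then carries the same span of acquisition time, which is what lets the later per-subset shift and illumination analyses assume roughly stable conditions.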
(3) For any video image subset D_j, the regions of interest in each frame image are analyzed. For each frame, only four regions of interest are analyzed (see FIG. 2): the forehead, left cheek, right cheek, and side face, denoted A_ji1, A_ji2, A_ji3, A_ji4, where j is the subset index, i is the frame index within the subset, and h = 1, 2, 3, 4 indexes the region. The position of each region of interest is represented by a square selection box, given by the coordinates of the upper-left corner of A_jih on the image together with its height and width: X_jih, Y_jih, H_jih, W_jih.
Preferably, the position of the square selection box is determined using a localization neural network, including but not limited to Faster R-CNN. The square boxes annotated by a human on each frame image serve as the data set, and the localization network is trained to locate the square selection boxes in new images.
Preferably, the data set is made by collecting face data and manually outlining the forehead, left cheek, right cheek, and side face regions, with 70% of the data as the training set, 10% as the validation set, and the remaining 20% as the test set.
Preferably, the data is fed into the network for training: ResNet is used as the feature extractor, and the extracted features are fed into the RPN (region proposal network) to generate candidate boxes.
(4) Shift analysis of the square selection boxes. Shift analysis determines whether a square selection box has been affected by facial expression, facial movement, or posture, judged by whether the upper-left corner, height, and width of the selection boxes A_jih for the same region shift significantly across the m/n frame images of subset D_j. The boxes for region h on the frames of D_j are denoted A_j1h, A_j2h, ..., A_j(m/n)h. The coordinates, heights, and widths of these selection boxes are extracted from each frame image and their variation is compared. If the variation does not exceed a preset threshold, region h in the subset is judged unaffected by face motion and its boxes are kept for subsequent analysis; otherwise, all image data inside the selection boxes for that region is deleted.
Preferably, a threshold K is set and the variance of the upper-left coordinates, heights, and widths of the selection boxes is calculated. If any variance exceeds the threshold, the selection box is judged to have been affected by changes in facial expression or posture and its image data is discarded; if all variances are less than or equal to the threshold, the data is kept for subsequent processing.
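The shift test of step (4) can be sketched as follows, reading the comparison as the variance of each box parameter across the frames of a subset; the box tuples and the threshold K = 2.0 are illustrative assumptions.

```python
import numpy as np

def box_is_stable(boxes, K):
    """boxes: array of shape (frames, 4) holding (X, Y, H, W) per frame.

    Returns True when the variance of every box parameter stays within
    the threshold K, i.e. the region of interest is judged unshifted.
    """
    boxes = np.asarray(boxes, dtype=float)
    return bool(np.all(boxes.var(axis=0) <= K))

# A nearly static box passes; a box that jumps between frames is rejected.
static = [(100, 50, 64, 64), (101, 50, 64, 64), (100, 51, 64, 64)]
moving = [(100, 50, 64, 64), (140, 90, 64, 64), (100, 50, 64, 64)]
```

When `box_is_stable` returns False for a region, all image data extracted from that region in the subset would be dropped, as the description specifies.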
(5) Local analysis of the square selection boxes. Consider any selection box unaffected by facial expression or posture; it belongs to the j-th subset and the h-th region and covers m/n frame images. Since the illumination captured in the video is non-uniform but varies only globally at low frequency, the illumination of a local area can be treated as a constant: the data in a local area of a frame is equivalent to the true data with a constant illumination value superimposed. Local analysis of the square selection box therefore weakens the influence of uneven illumination.
(6) Extraction of the grid data sequences. Each square selection box is divided into p × q grids, each containing k × k pixels (see FIG. 3). For each frame image, the RGB channels of each pixel in grid w of the box for region h are combined by weighted summation to obtain a brightness intensity (gray) value, and a signal data sequence is extracted for each pixel (r, s) of the grid, where r and s are the pixel coordinates within grid w.
(7) De-meaning. For the unshifted frame images of the same region of interest in the same subset, the gray mean of each grid is calculated as the average of the gray values over the k × k pixels (r, s) of grid w. This mean is then subtracted from the grid data; the local constant value of the illumination is removed at the same time.
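Steps (6) and (7) can be sketched for a single frame as follows: convert RGB to gray by a weighted sum, split the box into p × q grids of k × k pixels, and subtract each grid's mean so that a locally constant illumination offset cancels. The BT.601 luma weights are an illustrative choice; the patent only specifies "weighted summation".

```python
import numpy as np

def demeaned_grids(rgb_box, p, q, k):
    """rgb_box: (q*k, p*k, 3) array. Returns (q, p, k, k) de-meaned gray grids."""
    # Weighted RGB-to-gray summation; BT.601 weights are an assumption here.
    gray = rgb_box @ np.array([0.299, 0.587, 0.114])
    # Reshape the box into a q x p lattice of k x k grids.
    grids = gray.reshape(q, k, p, k).transpose(0, 2, 1, 3)
    # Removing each grid's mean also removes a locally constant
    # illumination offset, per step (7).
    return grids - grids.mean(axis=(2, 3), keepdims=True)

# Adding a constant illumination offset to the whole box leaves the
# de-meaned grid data unchanged, which is the point of step (7).
box = np.random.default_rng(0).random((8, 12, 3))   # q=2, p=3, k=4
g0 = demeaned_grids(box, p=3, q=2, k=4)
g1 = demeaned_grids(box + 0.25, p=3, q=2, k=4)
```

Because the chosen weights sum to 1, a uniform offset added to all three channels shifts every gray value by the same constant, and the per-grid mean subtraction cancels it exactly.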
(8) Correlation analysis. The average of the de-meaned data of grid w is computed on each frame, giving a signal data sequence for grid w with m/n data points. Pairwise correlations are calculated for the data sequences of all p × q grids of region h in the same subset. Grid data whose correlation with the other sequences is low is removed, leaving the data sequences of H grids with high correlation, where H < p × q. These H grid data sequences are the subset of the p × q grid sequences of the square selection box that is least affected by uneven illumination and most highly correlated.
Preferably, the correlation coefficients of the data sequences of the p × q grids are calculated pairwise to obtain a correlation coefficient matrix M ∈ R^{pq×pq}, where each element m_ij is the correlation coefficient between the data sequences of the i-th and j-th grids. A threshold is set and correlation coefficients smaller than the threshold are set to 0, dividing M into several nonzero regions. The grids belonging to the largest such region, H grids in total, are taken out and form the set of highly correlated data sequences.
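The correlation screening of step (8) can be sketched as: compute the pq × pq correlation coefficient matrix of the grid sequences, zero out entries below a threshold, and keep the largest connected group of grids. Treating the nonzero pattern as a graph and taking its largest connected component is one concrete reading of "the largest nonzero region"; the threshold value and the synthetic signals are illustrative.

```python
import numpy as np

def high_correlation_grids(sequences, threshold):
    """sequences: (pq, T) array, one row per grid. Returns sorted grid
    indices of the largest mutually connected high-correlation group."""
    M = np.corrcoef(sequences)
    M = np.where(M < threshold, 0.0, M)          # zero out weak correlations
    pq = M.shape[0]
    seen, best = set(), []
    for start in range(pq):                      # flood-fill over nonzero links
        if start in seen:
            continue
        comp, queue = [], [start]
        seen.add(start)
        while queue:
            u = queue.pop()
            comp.append(u)
            for v in range(pq):
                if v not in seen and M[u, v] > 0:
                    seen.add(v)
                    queue.append(v)
        if len(comp) > len(best):
            best = comp
    return sorted(best)

# Three grids share a common pulse-like component; one is pure noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
pulse = np.sin(2 * np.pi * 1.2 * t)              # ~72 bpm component
seqs = np.stack([pulse + 0.05 * rng.standard_normal(200) for _ in range(3)]
                + [rng.standard_normal(200)])
picked = high_correlation_grids(seqs, threshold=0.8)
```

The noise grid ends up isolated after thresholding, so only the three pulse-carrying grids survive as the H high-correlation sequences.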
(9) Obtaining the effective data sequence set. After localization, shift analysis, and local analysis of the square selection boxes, the grids that fail the shift or local analysis are eliminated, yielding the preprocessed set of data sequences, where H is the number of highly correlated grids obtained by local analysis of the corresponding square box. These data sequences are unaffected by face motion and uneven illumination and form the effective data sequence set used for predicting heart rate and respiratory rate.
(10) Establishing the prediction model. The effective data sequence set and the corresponding measured heart rate and respiration rate values are used to build a training set. A prediction model is trained to establish the relationship between the data sequences and the measured heart rate and respiratory rate values; the model is then used to predict heart rate and respiration rate for new data sequences.
Preferably, the prediction model uses a recurrent neural network that takes a data sequence as input and outputs heart rate and respiratory rate values. The recurrent network may use structures such as a multilayer neural network or a long short-term memory (LSTM) network; the data sequence is embedded and processed, and after transformation by the hidden layers, the heart rate and respiratory rate values are output at the network's output layer.
Preferably, the prediction model uses a multilayer convolutional neural network, applying convolution, pooling, and related processing steps with multilayer convolution kernels to convert the input data sequence into heart rate and respiration rate values. The convolutional architecture may be a common practical design such as a residual network (ResNet), GoogLeNet, or DenseNet.
Preferably, the prediction model is trained with the effective data sequence set and the corresponding heart rate and respiration rate values, with 70% of the data as the training set, 10% as the validation set, and the remaining 20% as the test set. The weights are adjusted using machine learning and neural network training methods so that the trained model's predictions approach the true values.
(11) Robust use of the prediction model. For a new video image, after subset division, square-region extraction, shift analysis, and correlation analysis, a set containing several data sequences is obtained. Each data sequence in the set is input to the prediction model to obtain heart rate and respiration rate values, and the values over all sequences are averaged to give the heart rate and respiration rate of the video image.
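Step (11)'s averaging can be sketched with a stand-in predictor: each data sequence in the effective set yields one (heart rate, respiration rate) estimate, and the final result is the mean over the set. `predict` is a placeholder for the trained network, and the example outputs are fabricated for illustration.

```python
import numpy as np

def average_predictions(sequences, predict):
    """Run the trained model over every effective data sequence and average.

    predict: callable mapping one sequence to (heart_rate, respiration_rate).
    """
    preds = np.array([predict(s) for s in sequences])
    hr, rr = preds.mean(axis=0)
    return hr, rr

# Stand-in predictor: pretend the model produced these per-sequence values.
fake_outputs = iter([(70.0, 15.0), (72.0, 16.0), (74.0, 17.0)])
hr, rr = average_predictions([None, None, None], lambda s: next(fake_outputs))
```

Averaging over many independently screened grid sequences is what gives the final estimate its robustness to any single noisy sequence.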
Example 2
Corresponding to the above method embodiments, this embodiment also discloses a heart rate and respiration rate detection system, which includes a memory, a processor, and a computer program stored on the memory and operable on the processor, and when the processor executes the computer program, the following steps of the corresponding method are implemented:
collecting m frames of face video images, and measuring heart rate and respiration rate values synchronous with the face video images by using a measuring device;
uniformly dividing the collected video images into n subsets in time order; extracting four regions of interest (forehead, left cheek, right cheek, and side face) from each frame image in each subset using square selection boxes; comparing the upper-left coordinates, heights, and widths of the selection boxes corresponding to the same region of interest across the frames of the same subset to judge whether any selection box has shifted, and, if so, deleting all image data extracted from the shifted region of interest in that subset; dividing the image data retained within each unshifted square selection box into p × q grids, each grid containing k × k pixels;
calculating the gray mean of each grid of the same region of interest in the same subset, subtracting the corresponding gray mean from each pixel of the corresponding grid, then calculating the mean of the de-meaned grid pixel data, and combining the means for the same region of interest and the same grid position in the same subset into the signal data sequence for that grid position;
calculating the correlation among the signal data sequences corresponding to the p × q grid positions of the same region of interest of the same subset, to obtain H highly correlated grid data sequences;
collecting the data sequences with high correlation screened out from all the regions of interest into an effective data sequence set; establishing a prediction model according to the effective data sequence set and the actually measured synchronous heart rate and respiration rate values;
and, for a newly acquired face video image, extracting an effective data sequence set according to the above steps and inputting it into the prediction model to obtain the corresponding heart rate and respiration rate prediction values.
Optionally, the method of obtaining the H highly correlated grid data sequences includes: calculating pairwise correlation coefficients of the data sequences of the p × q grids to obtain a correlation coefficient matrix; comparing each element of the correlation coefficient matrix with a set threshold and setting correlation coefficients smaller than the threshold to 0, so that the matrix is divided into at least one nonzero region; and extracting the H grids corresponding to the largest nonzero region of the correlation coefficient matrix as the set of highly correlated data sequences. The specific implementation in this embodiment follows Embodiment 1 and will not be described in detail.
Example 3
The present embodiment discloses a computer storage medium having a computer program stored thereon, which when executed by a processor, performs the steps corresponding to the above-described method and system.
In summary, the method, the system and the computer storage medium for detecting the heart rate and the respiratory rate disclosed by the embodiment of the invention have the following beneficial effects:
According to the method, image data set extraction and model training are performed on four regions: the forehead, the left cheek, the right cheek, and the side face. To ensure robust image data processing, first, each of the 4 regions of interest is analyzed for movement, eliminating from the set the image noise caused by facial expression, motion, posture, and similar factors; second, because the illumination captured in the video is non-uniform but varies only at a global, low frequency, the illumination within a local area can be treated as a constant, so removing the mean of the grid data also removes this local constant, and the signal data sequences obtained after de-meaning are less affected by uneven illumination; third, the highly correlated data sequences screened from all regions of interest are collected into an effective data sequence set, further ensuring the multidimensionality and accuracy of the data. Overall robustness is thus ensured through multiple rounds of screening and the corresponding separation and combination processing.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A method for detecting heart rate and respiratory rate, comprising:
s1: collectingFraming a face video image, and measuring heart rate and respiration rate values synchronous with the face video image by using a measuring device;
s2: uniformly dividing the collected video images into time sequencesA subset of the groups; extracting four interesting regions of forehead, left cheek, right cheek and side face from each frame image in each subset by square selection frame, comparing the coordinates, height and width of the upper left corner of the serial selection frame corresponding to the same interesting region in the same subset to judge whether each selection frame is shifted, if yes, deleting all the image data extracted from the interesting region in the same subset which is shifted; the image data within the remaining unshifted square frame is then partitionedA grid, each grid having a number of pixels;
S3: calculating the gray average value of each grid in the same region of interest corresponding to the same subset, performing mean value removing processing on each pixel point of the corresponding grid according to the corresponding gray average value, then calculating the mean value of grid pixel data after mean value removing, and combining the mean values of the grid pixel data after mean value removing of the same region of interest and the same grid position corresponding to the same subset into a signal data sequence corresponding to the grid position;
s4: computing the same region of interest for the same subsetThe correlation between the signal data sequences corresponding to the grid positions is obtained to obtain high correlationA data sequence of a grid;
s5: collecting the data sequences with high correlation screened out from all the regions of interest into an effective data sequence set;
s6: establishing a prediction model according to the effective data sequence set and the actually measured synchronous heart rate and respiration rate values;
and (4) extracting effective data sequence sets from the newly acquired human face video images according to the steps S2-S5, and inputting the effective data sequence sets into the prediction model to obtain corresponding heart rate and respiratory rate prediction values.
2. The heart rate and respiration rate detection method of claim 1, wherein the positions of the square selection boxes are determined with a localization neural network.
3. The method of claim 1 or 2, wherein the method of obtaining the H highly correlated grid data sequences comprises:
calculating pairwise correlation coefficients of the data sequences of the p × q grids to obtain a correlation coefficient matrix;
comparing each element of the correlation coefficient matrix with a set threshold, setting correlation coefficients smaller than the threshold to 0, so that the matrix is divided into at least one nonzero region; and extracting the H grids corresponding to the largest nonzero region of the correlation coefficient matrix as the set of highly correlated data sequences.
4. The heart rate and respiratory rate detection method of claim 3, wherein the prediction model employs a residual network, GoogLeNet, or DenseNet convolutional network architecture, or an LSTM recurrent neural network structure.
5. The heart rate and respiration rate detection method according to any one of claims 1, 2 and 4, wherein the heart rate and respiration rate prediction value calculation method comprises:
calculating a predicted value corresponding to each effective data sequence in the effective data sequence set on the basis of the prediction model;
and taking the average of the predicted values over the effective data sequence set as the final heart rate and respiration rate prediction result.
6. The heart rate and respiratory rate detection method according to claim 3, wherein the heart rate and respiratory rate prediction value calculation method comprises:
calculating a predicted value corresponding to each effective data sequence in the effective data sequence set on the basis of the prediction model;
and taking the average of the predicted values over the effective data sequence set as the final heart rate and respiration rate prediction result.
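The averaging step of claims 5 and 6 can be sketched as follows; `model` is a hypothetical stand-in for the trained prediction model, any callable returning a (heart rate, respiration rate) pair for one effective data sequence.

```python
def final_prediction(model, effective_sequences):
    """Average the per-sequence predictions of a hypothetical `model`
    over the effective data sequence set, per claims 5 and 6."""
    # One (heart_rate, respiration_rate) prediction per effective sequence.
    predictions = [model(seq) for seq in effective_sequences]
    n = len(predictions)
    mean_hr = sum(hr for hr, _ in predictions) / n
    mean_rr = sum(rr for _, rr in predictions) / n
    return mean_hr, mean_rr
```

Averaging over many grids' sequences damps per-grid noise in the video signal, which is presumably why the claims take the mean rather than a single grid's prediction.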
7. A heart rate and respiration rate detection system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 6 are carried out when the computer program is executed by the processor.
8. A computer storage medium having a computer program stored thereon, wherein the program is adapted to perform the steps of any of the methods of claims 1 to 6 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110325026.3A CN113197558B (en) | 2021-03-26 | 2021-03-26 | Heart rate and respiratory rate detection method and system and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113197558A CN113197558A (en) | 2021-08-03 |
CN113197558B true CN113197558B (en) | 2022-06-17 |
Family
ID=77025739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110325026.3A Active CN113197558B (en) | 2021-03-26 | 2021-03-26 | Heart rate and respiratory rate detection method and system and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113197558B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114343612B (en) * | 2022-03-10 | 2022-05-24 | 中国科学院自动化研究所 | Non-contact respiration rate measuring method based on Transformer |
WO2023184832A1 (en) * | 2022-03-31 | 2023-10-05 | 上海商汤智能科技有限公司 | Physiological state detection method and apparatus, electronic device, storage medium, and program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012239661A (en) * | 2011-05-20 | 2012-12-10 | Fujitsu Ltd | Heart rate/respiration rate detection apparatus, method and program |
CN110647815A (en) * | 2019-08-25 | 2020-01-03 | 上海贝瑞电子科技有限公司 | Non-contact heart rate measurement method and system based on face video image |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050122409A1 (en) * | 2003-12-08 | 2005-06-09 | Nikon Corporation | Electronic camera having color adjustment function and program therefor |
CN104036278B (en) * | 2014-06-11 | 2017-10-24 | 杭州巨峰科技有限公司 | The extracting method of face algorithm standard rules face image |
TWI646941B (en) * | 2017-08-09 | 2019-01-11 | 緯創資通股份有限公司 | Physiological signal measurement system and method for measuring physiological signal |
CN107692997B (en) * | 2017-11-08 | 2020-04-21 | 清华大学 | Heart rate detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||