WO2023005402A1 - Thermal imaging-based respiration rate detection method, apparatus and electronic device

Thermal imaging-based respiration rate detection method, apparatus and electronic device

Info

Publication number
WO2023005402A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
information
scene
feature
sample
Prior art date
Application number
PCT/CN2022/096186
Other languages
English (en)
French (fr)
Inventor
覃德智
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023005402A1


Classifications

    • A: HUMAN NECESSITIES
      • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
            • A61B 5/01: Measuring temperature of body parts; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
            • A61B 5/08: Detecting, measuring or recording devices for evaluating the respiratory organs
              • A61B 5/0816: Measuring devices for examining respiratory frequency
            • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
              • A61B 5/7203: Signal processing for noise prevention, reduction or removal
              • A61B 5/7235: Details of waveform analysis
                • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
                  • A61B 5/7267: Classification involving training the classification device
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00: Pattern recognition
            • G06F 18/20: Analysing
              • G06F 18/25: Fusion techniques
                • G06F 18/253: Fusion techniques of extracted features
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00: Computing arrangements based on biological models
            • G06N 3/02: Neural networks
              • G06N 3/08: Learning methods
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00: Image analysis
            • G06T 7/0002: Inspection of images, e.g. flaw detection
              • G06T 7/0012: Biomedical image inspection

Definitions

  • The present disclosure relates to the technical field of computer vision, and in particular to a thermal imaging-based respiration rate detection method, apparatus, and electronic device.
  • Respiration rate is important physiological data used to analyze information such as human health status and emotion.
  • Respiration rate measurement methods in the related art are usually contact-based; a commonly used example is the respiration belt, which must be attached to the subject. The applicable scenarios of such contact measurement are limited: it cannot be used in outdoor scenes, scenes with isolation requirements, or other scenes requiring non-contact measurement. It is therefore difficult for the related art to meet the demand for non-contact respiration rate measurement.
  • the disclosure proposes a thermal imaging-based respiration rate detection method, device and electronic equipment.
  • A thermal imaging-based respiration rate detection method is provided, which includes: acquiring at least two thermal images, each of the at least two thermal images including the outline of a target object rendered based on the temperature information of the target object; for each of the at least two thermal images, extracting a target area in the thermal image based on a neural network, and extracting temperature information corresponding to the target area, wherein the temperature information of the target object in the at least two thermal images changes periodically with the respiration of the target object; and determining the respiration rate of the target object according to the extracted temperature information.
  • Based on the above configuration, the respiration rate of the target object can be determined by analyzing the thermal images, so that a respiration rate detection result can be obtained without touching the target object. This realizes non-contact detection, fills a gap in non-contact detection scenarios, and offers good detection speed and accuracy.
  • The neural network is obtained based on the following method: obtain a sample thermal image set and labels corresponding to multiple sample thermal images in the sample thermal image set, where each sample thermal image includes the outline of a sample target object rendered based on the temperature information of the sample target object, the label corresponding to the sample thermal image represents the target area of the sample target object, and the target area is the mouth and nose area or the mask area of the sample target object; perform feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information; predict the target area according to the sample feature information to obtain a target area prediction result; and train the neural network according to the target area prediction result and the labels.
  • the neural network can be equipped with the ability to directly determine the target area, and by quickly and accurately determining the position for detecting the respiration rate, the accuracy and speed of respiration rate detection can be improved.
  • The performing feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information includes: for each sample thermal image, performing initial feature extraction on the sample thermal image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, where the composite feature extraction includes channel feature extraction; filtering the first feature map based on the salient features in the first feature information to obtain a filtering result; extracting second feature information from the filtering result; and fusing the first feature information and the second feature information to obtain the sample feature information of the sample thermal image.
  • the validity and discriminative power of the second feature information can be improved, thereby improving the richness of information in the final sample feature information.
  • The neural network includes a first neural network and a second neural network. For each of the at least two thermal images, extracting the target area in the thermal image based on the neural network includes: extracting a face target in the thermal image based on the first neural network; and extracting a target area in the face target based on the second neural network, the target area being the mask area in the face target.
  • the mask area can be determined on the basis of determining the face, avoiding the analysis of the breathing rate of the mask not worn on the face, and improving the accuracy and speed of the breathing rate detection.
  • Extracting the target area in the thermal image based on the neural network includes: extracting a key area in the thermal image based on the neural network, the key area being a mask area or a mouth and nose area; determining a respiration rate detection scene; determining, according to the respiration rate detection scene, a target mapping relationship between the key area and the target area; and determining the target area according to the target mapping relationship and the key area.
  • the target area can be determined according to the key area directly identified from the neural network, thereby indirectly and automatically determining the target area, which further expands the application scenarios of the embodiments of the present disclosure.
  • The determining the respiration rate detection scene includes: acquiring scene mapping information, the scene mapping information characterizing the correspondence between scene feature information and scene categories; performing scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information; and obtaining, according to the target scene feature information and the scene mapping information, a target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene.
  • the respiration rate detection scene can be automatically determined according to the preset scene mapping information and the extracted target scene feature information, and the respiration rate detection scene can be determined efficiently and accurately without manual intervention.
  • The determining, according to the respiration rate detection scene, the target mapping relationship between the key area and the target area includes: acquiring mapping relationship management information, the mapping relationship management information representing the correspondence between mapping relationships and scene categories; and obtaining the target mapping relationship according to the target scene category and the mapping relationship management information. Based on the above configuration, the target mapping relationship can be determined automatically, efficiently, and accurately from the preset mapping relationship management information and the respiration rate detection scene, without manual intervention.
  • The performing scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information includes: performing multi-scale feature extraction on at least one of the at least two thermal images to obtain feature extraction results of multiple levels; fusing the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels; and fusing the feature fusion results in order of decreasing level to obtain the target scene feature information.
  • the feature information of the target scene not only contains relatively rich feature information, but also contains sufficient context information.
  • The extracting the temperature information corresponding to the target area includes: determining the temperature information corresponding to the pixels in the target area; and calculating the temperature information corresponding to the target area according to the temperature information corresponding to the pixels. Based on the above configuration, by calculating the temperature information of each target area, the respiration rate of the target object can be further determined.
  • The determining the respiration rate of the target object according to the extracted temperature information includes: sorting the temperature information in time order to obtain a temperature sequence; performing noise reduction processing on the temperature sequence to obtain a target temperature sequence; and determining the respiration rate of the target object based on the target temperature sequence. Based on the above configuration, by determining the temperature sequence and performing noise reduction on it, noise that affects the respiration rate calculation can be filtered out, making the obtained respiration rate more accurate.
  • The performing noise reduction processing on the temperature sequence to obtain the target temperature sequence includes: determining a noise reduction processing strategy and a noise reduction processing method; and processing the temperature sequence based on the noise reduction processing method according to the noise reduction processing strategy to obtain the target temperature sequence. The noise reduction processing strategy includes at least one of the following: noise reduction based on a high-frequency threshold, noise reduction based on a low-frequency threshold, random noise filtering, and posterior noise reduction; the noise reduction processing is implemented based on at least one of the following: independent component analysis, Laplacian pyramid, bandpass filter, wavelet, and Hamming window. Based on the above configuration, the obtained target temperature sequence is smoother, contains less noise, and has clear peaks and valleys, so that the respiration rate determined from it is more accurate.
  • The determining the respiration rate of the target object based on the target temperature sequence includes: determining multiple key points in the target temperature sequence, where the key points are all peak points or all valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; and determining the respiration rate according to the time interval. Based on the above configuration, the respiration rate of the target object can be accurately determined from the obtained time intervals.
  • A thermal imaging-based respiration rate detection device is provided, comprising: a thermal image acquisition module, configured to acquire at least two thermal images, the at least two thermal images including the outline of a target object rendered based on the temperature information of the target object; a target area extraction module, configured to extract, for each of the at least two thermal images, the target area in the thermal image based on a neural network; a temperature information extraction module, configured to extract the temperature information corresponding to the target area, where the temperature information of the target object in the at least two thermal images changes periodically with the respiration of the target object; and a respiration rate determination module, configured to determine the respiration rate of the target object according to the extracted temperature information.
  • The neural network is obtained based on the following method: obtain a sample thermal image set and labels corresponding to multiple sample thermal images in the sample thermal image set, where each sample thermal image includes the outline of a sample target object rendered based on the temperature information of the sample target object, the label corresponding to the sample thermal image represents the target area of the sample target object, and the target area is the mouth and nose area or the mask area of the sample target object; perform feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information; predict the target area according to the sample feature information to obtain a target area prediction result; and train the neural network according to the target area prediction result and the labels.
  • The device further includes a sample feature information extraction module, configured to: for each sample thermal image, perform initial feature extraction on the sample thermal image to obtain a first feature map; perform composite feature extraction on the first feature map to obtain first feature information, where the composite feature extraction includes channel feature extraction; filter the first feature map based on the salient features in the first feature information to obtain a filtering result; extract second feature information from the filtering result; and fuse the first feature information and the second feature information to obtain the sample feature information of the sample thermal image.
  • The neural network includes a first neural network and a second neural network; the target area extraction module extracts the face target in the thermal image based on the first neural network, and extracts the target area in the face target based on the second neural network, the target area being the mask area in the face target.
  • The target area extraction module is configured to extract a key area in the thermal image based on the neural network, the key area being a mask area or a mouth and nose area; determine a respiration rate detection scene; determine, according to the respiration rate detection scene, a target mapping relationship between the key area and the target area; and determine the target area according to the target mapping relationship and the key area.
  • The target area extraction module is further configured to acquire scene mapping information, the scene mapping information representing the correspondence between scene feature information and scene categories; perform scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information; and obtain, according to the target scene feature information and the scene mapping information, the target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene.
  • The target area extraction module is further configured to acquire mapping relationship management information, the mapping relationship management information characterizing the correspondence between mapping relationships and scene categories, and to obtain the target mapping relationship according to the target scene category and the mapping relationship management information.
  • The target area extraction module is further configured to perform multi-scale feature extraction on at least one of the at least two thermal images to obtain feature extraction results of multiple levels; fuse the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels; and fuse the feature fusion results in order of decreasing level to obtain the target scene feature information.
  • the temperature information extraction module is configured to determine the temperature information corresponding to the pixels in the target area; and calculate the temperature information corresponding to the target area according to the temperature information corresponding to the pixels.
  • The respiration rate determination module is configured to sort the temperature information in time order to obtain a temperature sequence; perform noise reduction processing on the temperature sequence to obtain a target temperature sequence; and determine the respiration rate of the target object based on the target temperature sequence.
  • The respiration rate determination module is further configured to determine a noise reduction processing strategy and a noise reduction processing method, and to process the temperature sequence based on the noise reduction processing method according to the noise reduction processing strategy to obtain the target temperature sequence;
  • the noise reduction processing strategy includes at least one of the following: noise reduction based on high-frequency thresholds, noise reduction based on low-frequency thresholds, random noise filtering, and posterior noise reduction;
  • the noise reduction processing is implemented based on at least one of the following methods: independent component analysis, Laplacian pyramid, bandpass filter, wavelet, and Hamming window.
  • The respiration rate determination module is further configured to determine multiple key points in the target temperature sequence, where the key points are all peak points or all valley points; for any two adjacent key points, determine the time interval between the two adjacent key points; and determine the respiration rate according to the time interval.
  • An electronic device is provided, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the at least one processor implements the thermal imaging-based respiration rate detection method according to any one of the first aspect by executing the instructions stored in the memory.
  • A computer-readable storage medium is provided, in which at least one instruction or at least one program is stored, and the at least one instruction or at least one program is loaded and executed by a processor to implement the thermal imaging-based respiration rate detection method according to any one of the first aspect.
  • FIG. 1 shows a schematic flowchart of a method for detecting respiration rate based on thermal imaging according to an embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of a thermal image according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of extracting a target region based on a neural network according to an embodiment of the present disclosure
  • FIG. 4 shows a schematic flow diagram of a feature extraction method according to an embodiment of the present disclosure
  • FIG. 5 shows a schematic flowchart of a method for determining a target area according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic flowchart of a method for determining a breathing rate detection scene according to an embodiment of the present disclosure
  • FIG. 7 shows a schematic flowchart of a scene feature extraction method according to an embodiment of the present disclosure
  • Fig. 8 shows a schematic diagram of a feature extraction network according to an embodiment of the present disclosure
  • Fig. 9 shows a schematic flow chart of a method for determining the respiration rate of a target object according to the extracted temperature information according to an embodiment of the present disclosure
  • Fig. 10 shows a schematic flowchart of a method for determining the respiration rate of a target subject according to the extracted target temperature information sequence according to an embodiment of the present disclosure
  • Fig. 11 shows a block diagram of a device for detecting respiration rate based on thermal imaging according to an embodiment of the present disclosure
  • Fig. 12 shows a block diagram of an electronic device according to an embodiment of the present disclosure
  • FIG. 13 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides a thermal imaging-based respiration rate detection method, which can analyze a subject's respiration rate from thermal images captured by a thermal imaging camera and obtain the subject's respiration rate without direct contact with the subject, meeting the objective need for non-contact respiration rate measurement.
  • The embodiments of the present disclosure may be used in various specific scenarios that require non-contact measurement of the respiration rate; the embodiments of the present disclosure do not limit the specific scenario.
  • the method provided by the embodiments of the present disclosure can be used to detect the non-contact breathing rate in scenes requiring isolation, in crowded scenes, in some public places with special requirements, and the like.
  • The thermal imaging-based respiration rate detection method can be executed by a terminal device, a server, or another type of electronic device, where the terminal device can be a user equipment (User Equipment, UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an automotive device, a wearable device, etc.
  • the method for detecting respiration rate based on thermal imaging may be implemented by a processor invoking computer-readable instructions stored in a memory. The method for detecting respiration rate based on thermal imaging according to the embodiment of the present disclosure will be described below by taking an electronic device as an execution subject.
  • Fig. 1 shows a schematic flow chart of a method for detecting respiration rate based on thermal imaging according to an embodiment of the present disclosure. As shown in Fig. 1, the above method includes:
  • S101 Acquire at least two thermal images, where the thermal images include the outline of the target object rendered based on the temperature information of the target object.
  • The temperature information of the above target object can be obtained by shooting the target object with a thermal imaging camera.
  • the outline of the target object is rendered based on the temperature information of each position of the target object, and a thermal image including the outline is obtained.
  • the embodiment of the present disclosure detects the respiration rate according to the periodic variation of temperature in the thermal image, and at least two thermal images are required.
  • FIG. 2 shows a schematic diagram of a thermal image according to an embodiment of the present disclosure.
  • the temperature information of the face can be obtained by pointing the thermal imaging camera at the face for shooting, and based on the temperature information, the outline of the face in Figure 2 can be rendered.
  • Embodiments of the present disclosure do not limit the thermal imaging camera, which may be a fixed thermal imaging camera or a rotatable thermal imaging camera.
  • The control method of the thermal imaging camera is also not limited: it may be triggered in response to a preset command, for example, when a controller or a related sensor triggers a related control, the thermal imaging camera starts shooting.
  • the thermal imaging camera can also be triggered in response to sensing information, for example, when the ambient temperature rises to a preset threshold, the thermal imaging camera can automatically start taking pictures.
  • the thermal imaging camera can also be triggered periodically.
  • the embodiment of the present disclosure does not limit the photographing mode of the thermal imaging camera, for example, its photographing frame rate, photographing resolution mode, photographing result output mode, etc. can be set according to actual conditions.
  • the thermal imaging camera can output the captured thermal image in the form of a video stream, or output multiple thermal images in the form of a picture.
  • the embodiments of the present disclosure may analyze the video stream to obtain the above multiple thermal images.
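  • As an illustration, the following is a minimal sketch of parsing a video stream from a thermal imaging camera into individual thermal images, assuming the camera exposes its output as a file or stream URL readable by OpenCV; the stream URL and frame count are hypothetical, not part of the disclosure:

```python
# Minimal sketch: pull individual thermal frames from a video stream,
# assuming the thermal camera output is readable by OpenCV.
import cv2

def read_thermal_frames(source: str, max_frames: int = 200):
    """Yield rendered thermal frames from a video source (hypothetical URL)."""
    cap = cv2.VideoCapture(source)
    try:
        count = 0
        while count < max_frames:
            ok, frame = cap.read()  # frame: rendered thermal image (H, W, 3)
            if not ok:
                break
            yield frame
            count += 1
    finally:
        cap.release()

frames = list(read_thermal_frames("rtsp://camera/thermal"))  # hypothetical URL
```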
  • Embodiments of the present disclosure are intended to measure respiration rate, which is a physiological parameter, and the above-mentioned target object is a living body, such as a human being.
  • S102 Extract target areas in each of the aforementioned thermal images based on a neural network.
  • The present disclosure extracts the target area in each of the above thermal images based on a neural network. The embodiments of the present disclosure use thermal images that render the outline of the target object based on temperature information; such thermal images have emerged with the advancement of thermal imaging and rendering technology. At present, there are relatively few image processing methods for this kind of thermal image, and it is difficult to automatically extract sufficient information from it, so most analysis relies on manual work. Embodiments of the present disclosure can process such thermal images and automatically analyze the target areas therein.
  • a neural network in the field of machine learning is a deep learning model that imitates the structure and function of biological neural networks.
  • Machine learning (Machine Learning, ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and its application pervades all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
  • Deep learning (Deep Learning, DL) is a branch of machine learning, which is an algorithm that attempts to perform high-level abstraction on data using multiple processing layers that contain complex structures or consist of multiple nonlinear transformations.
  • the embodiment of the present disclosure does not limit the target area, which may point to the breathing position of the target object.
  • the target area may be the mouth and nose area of the face or the area where the face wears a mask.
  • The target area may be an area that changes with the breathing of the target object. It can be determined from a key area, which can be identified based on a neural network. The target mapping relationship between the key area and the target area can be determined according to the actual situation.
  • Exemplarily, the key area can be the position of the mouth and nose or the position of the mask determined based on the neural network; when the person is lying on the left side, the exhaled gas moves downward and to the left, so the target area can be located in the lower-left portion of this key area.
  • each thermal image may have one or more target areas, and a single target area will be used as an example for description below.
  • the target area is the nose and mouth area or the mask area, which can be directly extracted from the thermal image based on the neural network.
  • the mouth and nose area can be the mouth area and/or the nose area.
  • The mouth area and the nose area can each be extracted as a separate target area, or the mouth area and the nose area can be merged and extracted as one target area.
  • Figure 3 shows a schematic diagram of extracting target areas based on neural networks.
  • The left image in Figure 3 shows the effect of extracting the mouth area and the nose area separately, the middle image shows the mouth area and the nose area extracted together as one target area, and the target area extracted in the right image is the mask area.
  • During training, a sample thermal image set and the labels corresponding to the sample thermal images in the set can be obtained. Each sample thermal image includes the outline of a sample target object rendered based on the temperature information of the sample target object, and the label represents the target area of the sample target object, which is the mouth and nose area or the mask area of the sample target object. Feature extraction is performed on the sample thermal images in the set to obtain sample feature information; the target area is predicted according to the sample feature information to obtain a target area prediction result; and the neural network is trained according to the target area prediction result and the labels.
  • the neural network can be equipped with the ability to directly determine the target area, and by quickly and accurately determining the position for detecting the respiration rate, the accuracy and speed of respiration rate detection can be improved.
  • the embodiment of the present disclosure does not describe the above training process in detail.
  • the above-mentioned neural network can perform feature extraction layer by layer based on the feature pyramid, predict the target area according to the extracted sample feature information, and adjust the parameters of the neural network according to the difference between the predicted target area and the above-mentioned label. Since the thermal image is rendered based on temperature information, the clarity of the thermal image may be lower than that of the visible light image. In order to obtain sufficient discriminative feature information, the embodiments of the present disclosure optimize the feature extraction process.
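  • As an illustration, the following is a minimal sketch of such a training loop, assuming a PyTorch network that predicts a per-pixel target-area mask from a sample thermal image; the architecture, data loader, and loss function are illustrative assumptions, not the implementation specified by the disclosure:

```python
# Minimal sketch: train a target-area predictor on labeled sample thermal
# images. The toy network and BCE loss are assumptions for illustration.
import torch
import torch.nn as nn

class TargetAreaNet(nn.Module):
    """Toy stand-in for the target-area prediction network."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel target-area logit
        )

    def forward(self, x):
        return self.backbone(x)

def train(model, loader, epochs: int = 10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()  # compares prediction with the labeled mask
    for _ in range(epochs):
        for thermal, label_mask in loader:  # label: annotated target area
            opt.zero_grad()
            pred = model(thermal)
            loss = loss_fn(pred, label_mask)  # difference drives parameter update
            loss.backward()
            opt.step()
```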
  • FIG. 4 shows a schematic flowchart of a feature extraction method according to an embodiment of the present disclosure.
  • the above feature extraction includes:
  • the embodiment of the present disclosure does not limit the specific method of initial feature extraction.
  • at least one stage of convolution processing may be performed on the above image to obtain the above first feature map.
  • a plurality of image feature extraction results of different scales may be obtained, and at least two image feature extraction results of different scales may be fused to obtain the first feature map.
  • the above-mentioned performing composite feature extraction on the above-mentioned first feature map to obtain the first feature information may include: performing image feature extraction on the above-mentioned first feature map to obtain a first extraction result.
  • Channel information is extracted from the first feature map to obtain a second extraction result.
  • the above-mentioned first extraction result and the above-mentioned second extraction result are fused to obtain the above-mentioned first feature information.
  • the embodiment of the present disclosure does not limit the method for extracting image features from the above-mentioned first feature map. Exemplarily, it may perform at least one level of convolution processing on the above-mentioned first feature map to obtain the above-mentioned first extraction result.
  • the channel information extraction in the embodiments of the present disclosure may focus on mining the relationship between channels in the first feature map. Exemplarily, it can be realized based on fusion of multi-channel features.
  • The composite feature extraction in the embodiments of the present disclosure can not only retain the low-level information of the first feature map itself, but also fully extract high-level inter-channel information by fusing the first extraction result and the second extraction result, improving the information richness and expressive power of the first feature information obtained.
  • At least one fusion method may be used; the embodiments of the present disclosure do not limit the fusion method, and at least one of dimensionality reduction, addition, multiplication, inner product, convolution, and averaging, or a combination thereof, can be used for fusion.
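  • As an illustration, the following is a minimal sketch of the composite feature extraction described above: a spatial convolution branch producing the first extraction result, a channel-information branch producing the second extraction result, and fusion by addition. The squeeze-and-excitation-style channel branch and all layer sizes are assumptions made for illustration:

```python
# Minimal sketch: composite feature extraction = spatial features +
# inter-channel information, fused by addition (one of the listed options).
import torch
import torch.nn as nn

class CompositeFeatureExtraction(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # First extraction result: ordinary convolutional image features.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Second extraction result: inter-channel information (SE-style).
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),               # squeeze spatial dimensions
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, first_feature_map):
        spatial = self.spatial(first_feature_map)    # first extraction result
        weights = self.channel(first_feature_map)    # second extraction result
        return spatial + first_feature_map * weights  # fuse by addition

x = torch.randn(1, 64, 56, 56)  # stand-in first feature map
first_feature_info = CompositeFeatureExtraction()(x)
```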
  • In the embodiments of the present disclosure, the more salient regions and the less salient regions in the first feature map can be judged according to the first feature information, and the information in the more salient regions can be filtered out to obtain a filtering result. That is to say, the first feature information includes more salient regions and less salient regions, and after the information in the more salient regions is filtered out, only the less salient regions are included in the filtering result.
  • The salient feature may refer to signal information in the first feature information that is highly consistent with the heartbeat frequency of a living body (for example, a person). Since the salient features are distributed relatively scattered in the first feature information, about 70% of the information in the more salient regions may be basically consistent with the heartbeat frequency, while the less salient regions may still actually include salient features.
  • the embodiment of the present disclosure does not limit the salient feature judgment method, which may be based on a neural network or based on expert experience.
  • Suppressing the salient features in the filtering result to obtain the second feature map includes: performing feature extraction on the filtering result to obtain a target feature; performing composite feature extraction on the target feature to obtain target feature information; and filtering the target feature based on the salient features in the target feature information to obtain the second feature map.
  • Exemplarily, the stop condition may be that the proportion of salient features in the second feature map is less than 5%, or that the number of updates of the second feature map reaches a preset number of times.
  • Based on the above configuration, the salient features can be filtered layer by layer based on the hierarchical structure, and composite feature extraction including channel information extraction can be performed on the filtering results to obtain second feature information comprising multiple pieces of target feature information. Discriminative information is mined layer by layer, improving the validity and discriminative power of the second feature information and, in turn, the information richness of the final sample feature information.
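  • As an illustration, the following is a minimal sketch of this layer-by-layer salient-feature filtering loop. The helper `composite_extract` stands in for the composite feature extraction module above, and the fixed 70th-percentile salience threshold is an illustrative choice; the 5% and preset-count stop conditions follow the example given above:

```python
# Minimal sketch: iterative salient-feature filtering to build the second
# feature information from multiple target feature infos.
import numpy as np

def filter_salient(feature_map, feature_info, threshold):
    """Zero out regions whose feature response exceeds the salience threshold."""
    return np.where(feature_info < threshold, feature_map, 0.0)

def extract_second_feature_info(first_feature_map, first_feature_info,
                                composite_extract, max_updates=10):
    # Fix the salience threshold from the first feature information
    # (70th percentile here is an illustrative assumption).
    threshold = np.quantile(first_feature_info, 0.70)
    collected = []
    feature_map = filter_salient(first_feature_map, first_feature_info, threshold)
    for _ in range(max_updates):          # stop condition: preset update count
        target_info = composite_extract(feature_map)
        collected.append(target_info)
        feature_map = filter_salient(feature_map, target_info, threshold)
        if (target_info >= threshold).mean() < 0.05:
            break                         # stop condition: salient share < 5%
    return collected  # second feature info: multiple target feature infos
```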
  • the feature extraction method in the embodiments of the present disclosure can be used to perform feature extraction on the sample thermal image, and can be used in each of the embodiments of the present disclosure when it is necessary to train a neural network based on the sample thermal image.
  • In some embodiments, the target area is a mask area, and the neural network includes a first neural network and a second neural network. Extracting the target area in each thermal image based on the neural network includes: extracting a face target in the thermal image based on the first neural network, and extracting the target area in the face target based on the second neural network, the target area being the mask area in the face target.
  • the mask area can be determined on the basis of determining the face, avoiding the analysis of the breathing rate of the mask not worn on the face, and improving the accuracy and speed of the breathing rate detection.
  • the mask area or the mouth and nose area above can be determined as the key area, and the target area can be determined based on the key area.
  • FIG. 5 shows a schematic flowchart of a method for determining a target area according to an embodiment of the present disclosure, including:
  • the target mapping relationship between the key area and the target area may be different. For example, if the target subject sleeps on the left side, the mouth and nose will inhale the airflow from the lower left when inhaling, and exhale the airflow to the lower left when exhaling, then the target area is located at the lower left of the key area. If the target subject is sleeping on the right side, when inhaling, the mouth and nose will inhale the airflow from the lower right, and when exhaling, the airflow will be exhaled to the lower right, then the target area is located at the lower right of the key area.
  • the embodiment of the present disclosure does not limit the manner of determining the respiration rate detection scene.
  • Exemplarily, various typical respiration rate detection scenes can be classified hierarchically. For example, the major categories are sleeping scenes, active scenes, sitting-still scenes, etc., and the subcategories represent the specific posture of the target object in each major category; for example, in the sleeping scene, the user sleeps on the left side, on the right side, or on the back.
  • Each level category corresponds to a unique scene identifier.
  • the category identifier of sleep scenes is 10
  • the identifiers of left sleep, right sleep and supine sleep are specifically 10-1, 10-2 and 10-3.
  • the respiration rate detection scene can be determined by obtaining the specific scene identifier input by the user.
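  • As an illustration, the following is a minimal sketch of resolving a user-supplied scene identifier under the hierarchical scheme above; the identifiers "10", "10-1", "10-2", and "10-3" are taken from the example in the text, while the lookup-table layout is an assumption:

```python
# Minimal sketch: map user-supplied scene identifiers to detection scenes.
SCENE_CATEGORIES = {
    "10": "sleeping scene",
    "10-1": "sleeping on the left side",
    "10-2": "sleeping on the right side",
    "10-3": "sleeping on the back (supine)",
}

def detect_scene_from_user_input(scene_id: str) -> str:
    """Resolve a scene identifier to a respiration rate detection scene."""
    return SCENE_CATEGORIES[scene_id]

print(detect_scene_from_user_input("10-1"))
```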
  • the respiration rate detection scene can also be automatically determined.
  • FIG. 6 shows a schematic flowchart of a method for determining a breathing rate detection scene according to an embodiment of the present disclosure, including:
  • scene clustering can be performed based on massive thermal images.
  • the embodiment of the present disclosure does not limit the scene clustering method.
  • For each scene category, feature information is extracted from the corresponding thermal images, and the scene feature information of the scene category is determined according to the feature information extraction results.
  • the embodiment of the present disclosure does not limit the specific method of determining the scene feature information of the scene category according to the feature information extraction result.
  • Exemplarily, clustering can be further performed on the feature information extraction results, and the feature information corresponding to the cluster center can be determined as the scene feature information of the scene category. It is also possible to randomly select multiple feature extraction results and determine their average value as the scene feature information of the scene category.
  • The thermal images in step S101 are captured in the same scene, and one or more of them can be selected for scene feature extraction to obtain the target scene feature information.
  • the embodiment of the present disclosure does not limit the specific method of scene feature extraction.
  • FIG. 7 shows a schematic flow chart of a scene feature extraction method according to an embodiment of the present disclosure, including:
  • Embodiments of the present disclosure may perform the above scene feature extraction based on a feature extraction network.
  • FIG. 8 shows a schematic diagram of a feature extraction network (for extracting scene features) according to an embodiment of the present disclosure.
  • The feature extraction network extends a standard convolutional network with top-down pathways and lateral connections, so that rich, multi-scale feature extraction results can be effectively extracted from single-resolution thermal images.
  • the feature extraction network only briefly shows 3 layers, but in practical applications, the feature extraction network may include 4 layers or even more.
  • the downsampling network layer in the feature extraction network can output feature extraction results at various scales.
  • The downsampling network layer is in fact a general term for the related network layers that realize the feature aggregation function. Specifically, the downsampling network layer can be a max pooling layer, an average pooling layer, etc.; the embodiments of the present disclosure do not limit the specific structure of the downsampling network layer.
  • the feature extraction results extracted by different layers of the feature extraction network have different scales, and the above-mentioned feature extraction results can be fused according to the order of increasing levels to obtain feature fusion results of multiple levels.
  • the above-mentioned feature extraction network may include three feature extraction layers, which sequentially output feature extraction results A1, B1, and C1 in order of increasing levels.
  • the embodiments of the present disclosure do not limit the expression manner of the feature extraction results, and the above feature extraction results A1, B1, and C1 may be represented by feature maps, feature matrices, or feature vectors.
  • the feature extraction results A1, B1 and C1 can be sequentially fused to obtain multiple levels of feature fusion results.
  • the feature extraction result A1 can be used to perform its own inter-channel information fusion to obtain the feature fusion result A2.
  • the feature extraction result A1 and the feature extraction result B1 can be fused to obtain a feature fusion result B2.
  • the feature extraction result A1, the feature extraction result B1 and the feature extraction result C1 can be fused to obtain a feature fusion result C2.
  • the embodiment of the present disclosure does not limit a specific fusion method, and at least one of dimension reduction, addition, multiplication, inner product, convolution and a combination thereof may be used for the above fusion.
  • the feature fusion results C2, B2 and A2 obtained above can be sequentially fused to obtain feature information of the target scene.
  • the fusion method used in the fusion process may be the same as or different from the previous step, which is not limited in this embodiment of the present disclosure.
  • the feature information of the target scene can not only contain rich feature information, but also contain sufficient context information through two-way fusion.
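  • As an illustration, the following is a minimal sketch of this bidirectional fusion over the three feature extraction results A1, B1, C1 described above, assuming decreasing resolutions and using resize-then-add as the (unspecified) fusion operation:

```python
# Minimal sketch: increasing-level fusion (A1 -> A2, B2, C2) followed by
# decreasing-level fusion (C2, B2, A2 -> target scene feature information).
import torch
import torch.nn.functional as F

def fuse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Fuse two feature maps by resizing `a` to `b`'s size and adding."""
    a = F.interpolate(a, size=b.shape[-2:], mode="nearest")
    return a + b

def bidirectional_fusion(a1, b1, c1):
    # Increasing-level pass: A1 -> A2; (A1, B1) -> B2; (A1, B1, C1) -> C2.
    a2 = a1
    b2 = fuse(a1, b1)
    c2 = fuse(b2, c1)
    # Decreasing-level pass: sequentially fuse C2, B2, A2.
    x = fuse(c2, b2)
    return fuse(x, a2)  # target scene feature information

a1 = torch.randn(1, 64, 56, 56)  # low level, high resolution
b1 = torch.randn(1, 64, 28, 28)
c1 = torch.randn(1, 64, 14, 14)  # high level, low resolution
target_scene_feature_info = bidirectional_fusion(a1, b1, c1)
```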
  • the scene category corresponding to the scene feature information closest to the target scene feature information may be determined as the target scene category. Based on this configuration, the category of the target scene can be automatically determined, so that the accuracy of the automatically determined category of the target scene is higher under the premise of optimizing the method for extracting feature information of the target scene.
  • mapping relationship management information may be acquired, and the above mapping relationship management information represents a correspondence relationship between a mapping relationship and a scene category. According to the target scene category and the mapping relationship management information, the target mapping relationship is obtained.
  • the above mapping relationship represents the corresponding relationship between the key area and the target area
  • the mapping relationship management information represents the corresponding relationship between the mapping relationship and the scene category.
  • The mapping relationships and the mapping relationship management information can be set according to the actual situation and can also be modified according to the actual situation, so that the solutions in the embodiments of the present disclosure can be adaptively updated as application scenarios expand, fully meeting the requirements of providing non-contact respiration rate detection services in various scenarios.
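  • As an illustration, the following is a minimal sketch of mapping-relationship management: looking up the target mapping relationship for a scene category and applying it to a key area to obtain the target area. The offsets and category keys are illustrative assumptions based on the left-side and right-side sleeping examples above:

```python
# Minimal sketch: scene category -> mapping relationship -> target area.
from typing import Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

# Mapping relationship management info: scene category -> mapping relationship.
MAPPING_MANAGEMENT = {
    "10-1": lambda b: (b[0] - b[2] // 2, b[1] + b[3] // 2, b[2], b[3]),  # lower left
    "10-2": lambda b: (b[0] + b[2] // 2, b[1] + b[3] // 2, b[2], b[3]),  # lower right
    "10-3": lambda b: b,  # supine: target area coincides with the key area
}

def target_area_from_key_area(scene_category: str, key_area: Box) -> Box:
    mapping = MAPPING_MANAGEMENT[scene_category]  # target mapping relationship
    return mapping(key_area)

# Example: mouth/nose key area while sleeping on the left side.
print(target_area_from_key_area("10-1", (100, 200, 40, 30)))
```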
  • the target area can be determined according to the key area directly identified from the neural network, thereby indirectly and automatically determining the target area, which further expands the application scenarios of the embodiments of the present disclosure.
  • temperature information corresponding to relevant pixel points in the above target area may be determined.
  • the temperature information corresponding to the above target area is calculated according to the temperature information corresponding to each of the relevant pixel points.
  • the breathing rate of the target object can be further determined.
  • each pixel in the target area may be the relevant pixel.
  • Pixel filtering can also be performed based on the temperature information of each pixel in the target area: pixels whose temperature information does not meet the preset temperature requirements are filtered out, and the remaining pixels are determined as the relevant pixels.
  • Embodiments of the present disclosure do not limit the preset temperature requirement, for example, an upper temperature limit, a lower temperature limit or a temperature range may be defined.
  • the embodiment of the present disclosure does not limit the specific method for calculating the temperature information corresponding to the target area.
  • the mean value or weighted mean value of the temperature information corresponding to each relevant pixel point can be determined as the temperature information corresponding to the target area.
  • the embodiment of the present disclosure does not limit the weight value, which can be set by the user according to actual needs.
  • The weight value may be inversely correlated with the distance between the corresponding relevant pixel and the center position of the target area. Exemplarily, the closer a relevant pixel is to the center of the target area, the higher its weight; the farther it is from the center, the lower its weight.
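  • As an illustration, the following is a minimal sketch of computing the target area's temperature as a weighted mean of per-pixel temperatures, with weights that decrease with distance from the area center; the specific 1/(1+d) weighting is an illustrative choice, not specified by the disclosure:

```python
# Minimal sketch: weighted-mean temperature of a target area, weighting
# pixels near the center more heavily than pixels near the edge.
import numpy as np

def target_area_temperature(temps: np.ndarray) -> float:
    """temps: 2-D array of per-pixel temperature readings for the target area."""
    h, w = temps.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.hypot(ys - cy, xs - cx)
    weights = 1.0 / (1.0 + dist)  # closer to center => higher weight
    return float(np.average(temps, weights=weights))

temps = 30.0 + np.random.randn(30, 40) * 0.2  # stand-in temperature map (deg C)
print(target_area_temperature(temps))
```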
  • the temperature information changes periodically following the respiration of the target object, and the respiration rate of the target object is determined according to the extracted temperature information.
  • the embodiment of the present disclosure considers that the breathing of the target object will cause the temperature of the target area to show periodic changes.
  • When the target object inhales, the temperature of the target area decreases accordingly; when the target object exhales, the temperature of the target area increases.
  • the respiration rate of the target object can be determined by analyzing the periodic change rule of the extracted temperature information.
  • FIG. 9 shows a schematic flowchart of a method for determining the respiration rate of a target object according to the extracted temperature information according to an embodiment of the present disclosure, including:
  • a temperature sequence can be obtained.
  • Exemplarily, if 200 thermal images are obtained by shooting target object A, the target area of target object A can be extracted from each thermal image and its temperature information obtained, so that a temperature sequence containing 200 temperature values is obtained. If each thermal image includes N target objects, N temperature sequences each containing 200 temperature values can be obtained.
  • a noise reduction processing strategy and a noise reduction processing method may be determined; according to the above noise reduction processing strategy and based on the above noise reduction method, the above temperature sequence is processed to obtain the above target temperature sequence.
  • noise reduction processing strategies include at least one of the following: noise reduction based on high-frequency threshold, noise reduction based on low-frequency threshold, random noise filtering, and posterior noise reduction.
  • the above noise reduction processing is implemented based on at least one of the following manners: independent component analysis, Laplacian pyramid, bandpass filtering, wavelet, and Hamming window.
  • For posterior noise reduction, respiration rate verification conditions and empirical noise reduction parameters corresponding to the posterior noise reduction can be set, and the temperature sequence can be denoised according to the empirical noise reduction parameters to obtain the target temperature sequence.
  • The embodiments of the present disclosure do not limit the method for determining the empirical noise reduction parameters, which may be obtained from expert experience.
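  • As an illustration, the following is a minimal sketch of one of the noise reduction options listed above, a Butterworth bandpass filter that keeps frequencies plausible for human respiration; the passband (roughly 0.1 to 0.7 Hz) and the 10 Hz sampling rate are assumptions for illustration:

```python
# Minimal sketch: bandpass-based noise reduction of a temperature sequence.
import numpy as np
from scipy.signal import butter, filtfilt

def denoise_temperature_sequence(temps: np.ndarray, fs: float = 10.0,
                                 low: float = 0.1, high: float = 0.7) -> np.ndarray:
    """Bandpass-filter a temperature sequence sampled at `fs` Hz."""
    b, a = butter(N=3, Wn=[low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, temps)  # zero-phase filtering: peaks are not shifted

t = np.arange(0, 20, 0.1)  # 20 s at 10 Hz
raw = 33 + 0.3 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
target_sequence = denoise_temperature_sequence(raw)
```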
  • FIG. 10 shows a schematic flow chart of a method for determining the respiration rate of a target subject according to the extracted target temperature information sequence according to an embodiment of the present disclosure, including:
  • If N key points are determined, the corresponding time interval can be calculated for every two adjacent key points, yielding N-1 time intervals.
  • Embodiments of the present disclosure do not limit the specific method for determining the above-mentioned respiration rate according to the time interval.
  • Exemplarily, the reciprocal of one of the time intervals can be determined as the respiration rate; the respiration rate can also be determined based on some or all of the time intervals, for example, by determining the reciprocal of the mean value of several or all of the time intervals as the respiration rate.
  • the embodiments of the present disclosure can accurately determine the respiration rate by calculating the time interval between adjacent key points.
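  • As an illustration, the following is a minimal sketch of deriving the respiration rate from peak-to-peak intervals in the denoised target temperature sequence; the 10 Hz sampling rate is the same assumption carried over from the previous sketch:

```python
# Minimal sketch: respiration rate from intervals between adjacent peaks.
import numpy as np
from scipy.signal import find_peaks

def respiration_rate_bpm(target_sequence: np.ndarray, fs: float = 10.0) -> float:
    peaks, _ = find_peaks(target_sequence)  # key points: all peak points
    if len(peaks) < 2:
        raise ValueError("need at least two peaks to estimate a breath period")
    intervals = np.diff(peaks) / fs         # seconds between adjacent key points
    mean_period = intervals.mean()          # mean breath period (s)
    return 60.0 / mean_period               # breaths per minute

# e.g. respiration_rate_bpm(target_sequence) on the sequence denoised above
```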
  • the respiration rate detection method provided by the embodiments of the present disclosure can detect one or more target objects, as long as the target objects are located in the field of view of the thermal imaging camera.
  • The respiration rate can be determined by taking thermal images of the target object without any contact with it, and the method can be widely used in various scenarios. For example, in hospital ward monitoring, a patient's respiration rate can be monitored without the patient wearing any equipment, reducing the patient's discomfort and improving the quality, effectiveness, and efficiency of patient monitoring. In an enclosed scene, such as an office or the lobby of an office building, the respiration rates of the people present can be detected to determine whether there is any abnormality.
  • the baby's breathing can be detected to prevent the baby from suffocating due to food blocking the airway, and the baby's breathing rate can be analyzed in real time to judge the baby's health status.
  • A remotely controlled thermal imaging camera can capture a target object that may become a source of infection and monitor the target object's vital signs while avoiding infection.
  • The thermal imaging-based respiration rate detection method provided by the embodiments of the present disclosure can determine the respiration rate of the target object by analyzing thermal images captured by a thermal imaging camera, obtaining a respiration rate detection result without contacting the target object. It realizes non-contact detection, fills a gap in non-contact detection scenarios, and offers good detection speed and accuracy.
  • Fig. 11 shows a block diagram of a device for detecting respiration rate based on thermal imaging according to an embodiment of the present disclosure.
  • the above-mentioned devices include:
  • a thermal image acquisition module 10 configured to acquire at least two thermal images, the at least two thermal images including the outline of the target object rendered based on the temperature information of the target object;
  • a target area extraction module 20 configured, for each of the at least two thermal images, to extract the target area in the thermal image based on a neural network;
  • a temperature information extraction module 30 configured to extract, for each of the at least two thermal images, the temperature information corresponding to the target area; wherein the temperature information of the target object in the at least two thermal images varies periodically with the respiration of the target object;
  • the respiration rate determination module 40 is configured to determine the respiration rate of the target object according to the extracted temperature information.
  • the above-mentioned neural network is obtained based on the following method: obtaining a sample thermal image set and labels corresponding to multiple sample thermal images in the sample thermal image set; wherein, for each sample thermal image, the sample thermal image includes the outline of a sample target object rendered based on the temperature information of the sample target object, the label corresponding to the sample thermal image represents a target area of the sample target object, and the target area is the mouth-nose area or mask area of the sample target object; performing feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information; predicting the target area according to the sample feature information to obtain a target area prediction result; and training the neural network according to the target area prediction result and the labels.
  • the device further includes a sample feature information extraction module, configured to: for each sample thermal image, perform initial feature extraction on the sample thermal image to obtain a first feature map; perform composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction includes channel feature extraction; filter the first feature map based on the salient features in the first feature information to obtain a filtering result; extract second feature information from the filtering result; and fuse the first feature information and the second feature information to obtain the sample feature information of the sample thermal image.
  • the above-mentioned neural network includes a first neural network and a second neural network; the above-mentioned target area extraction module extracts the face target in the thermal image based on the first neural network, and extracts the target area in the face target based on the second neural network, the target area being the mask area in the face target.
  • the above-mentioned target area extraction module is used to extract the key area in the thermal image based on the neural network, the key area being a mask area or a mouth-nose area; determine a respiration rate detection scene; determine, according to the respiration rate detection scene, a target mapping relationship between the key area and the target area; and determine the target area according to the target mapping relationship and the key area.
  • the above-mentioned target area extraction module is also used to obtain scene mapping information, the scene mapping information representing the correspondence between scene feature information and scene categories; perform scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information; and obtain, according to the target scene feature information and the scene mapping information, the target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene.
  • the target area extraction module is further configured to obtain mapping relationship management information, the mapping relationship management information representing the correspondence between mapping relationships and scene categories, and to obtain the target mapping relationship according to the target scene category and the mapping relationship management information.
  • the above-mentioned target area extraction module is further configured to perform multi-scale feature extraction on at least one of the at least two thermal images to obtain feature extraction results of multiple levels; fuse the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels; and fuse the feature fusion results in order of decreasing level to obtain the target scene feature information.
  • the temperature information extraction module is configured to determine temperature information corresponding to pixels in the target area; and calculate temperature information corresponding to the target area according to the temperature information corresponding to pixels.
  • the respiration rate determination module is configured to sort the above temperature information in time order to obtain a temperature sequence; perform noise reduction processing on the temperature sequence to obtain a target temperature sequence; and determine the respiration rate of the target object based on the target temperature sequence.
  • the respiration rate determination module is also used to determine a noise reduction processing strategy and a noise reduction processing manner, and to process the temperature sequence based on the noise reduction manner according to the noise reduction processing strategy to obtain the target temperature sequence.
  • the above-mentioned noise reduction processing strategy includes at least one of the following: noise reduction based on high-frequency threshold, noise reduction based on low-frequency threshold, random noise filtering, and posterior noise reduction; the above-mentioned noise reduction processing is implemented based on at least one of the following methods: independent component analysis , Laplacian pyramid, bandpass filter, wavelet, Hamming window.
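As a hedged illustration of the bandpass-filtering and Hamming-window options listed above (the 0.1 to 0.7 Hz passband, the 10 Hz sampling rate, and the function names are assumptions chosen to bracket typical human breathing frequencies), a linear-phase FIR filter can be designed with a Hamming window and applied to the temperature sequence with zero-phase filtering:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def bandpass_denoise(temp_seq: np.ndarray, fs: float = 10.0,
                     low_hz: float = 0.1, high_hz: float = 0.7) -> np.ndarray:
    """Hamming-window FIR bandpass over the plausible breathing band."""
    taps = firwin(numtaps=101, cutoff=[low_hz, high_hz],
                  pass_zero=False, window="hamming", fs=fs)
    return filtfilt(taps, [1.0], temp_seq)  # zero-phase filtering avoids lag

# Example: recover a 0.25 Hz breathing oscillation from noisy temperatures.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 10.0)
noisy = np.sin(2 * np.pi * 0.25 * t) + 0.4 * rng.normal(size=t.size)
target_temp_seq = bandpass_denoise(noisy)
```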
  • the above-mentioned respiration rate determination module is also used to determine multiple key points in the target temperature sequence, the key points all being peak points or all being valley points; for any two adjacent key points, determine the time interval between the two adjacent key points; and determine the respiration rate according to the time intervals.
  • the functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for specific implementation, refer to the description of the method embodiments above. For brevity, details are not repeated here.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, wherein at least one instruction or at least one program is stored in the computer-readable storage medium, and the above-mentioned method is implemented when the at least one instruction or at least one program is loaded and executed by a processor.
  • the computer readable storage medium may be a non-transitory computer readable storage medium.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the above method.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • Fig. 12 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the above-mentioned electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel.
  • the above-mentioned touch sensor may not only sense a boundary of a touch or a sliding action, but also detect a duration and pressure related to the above-mentioned touching or sliding operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data.
  • Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • the sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components (for example, the display and keypad of the electronic device 800); the sensor component 814 can also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G, 5G or combinations thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the aforementioned communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 800 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the methods described above.
  • a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 13 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Examples of computer-readable storage media include: portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present disclosure. These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions includes an article of manufacture comprising instructions that implement various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device, causing a series of operational steps to be performed thereon to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, program segment, or portion of instructions that contains one or more executable instructions for implementing the specified logical functions.
  • In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.


Abstract

A thermal-imaging-based respiration rate detection method and apparatus, and an electronic device. The method includes: acquiring at least two thermal images, the at least two thermal images including a contour of a target object rendered on the basis of temperature information of the target object (S101); for each of the at least two thermal images, extracting a target area in the thermal image on the basis of a neural network (S102), and extracting temperature information corresponding to the target area (S103), wherein the temperature information of the target object in the at least two thermal images varies periodically with the respiration of the target object; and determining the respiration rate of the target object according to the extracted temperature information (S104). By analyzing thermal images captured by a thermal imaging camera, the method and apparatus can determine the respiration rate of a target object, thereby obtaining a respiration rate detection result without contacting the target object, achieving non-contact detection with good detection speed and accuracy.

Description

Thermal-imaging-based respiration rate detection method and apparatus, and electronic device
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 202110877891.9, filed on July 30, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of computer vision, and in particular to a thermal-imaging-based respiration rate detection method and apparatus, and an electronic device.
BACKGROUND
Respiration rate is an important piece of physiological data for analyzing a person's health, emotional state, and other information. Respiration rate measurement in the related art is usually contact-based; for example, the commonly used respiration belt must be attached to the subject. Such contact-based measurement is applicable only in limited scenarios: it cannot be used in outdoor scenarios, scenarios with isolation requirements, or other scenarios requiring contactless measurement. The related art therefore struggles to meet the demand for contactless respiration rate measurement.
SUMMARY
The present disclosure provides a thermal-imaging-based respiration rate detection method and apparatus, and an electronic device.
According to one aspect of the present disclosure, a thermal-imaging-based respiration rate detection method is provided, including: acquiring at least two thermal images, the at least two thermal images including a contour of a target object rendered based on temperature information of the target object; for each of the at least two thermal images, extracting a target area in the thermal image based on a neural network, and extracting temperature information corresponding to the target area, wherein the temperature information of the target object in the at least two thermal images varies periodically with the respiration of the target object; and determining the respiration rate of the target object according to the extracted temperature information. With this configuration, the respiration rate of a target object can be determined by analyzing thermal images, so that a respiration rate detection result is obtained without contacting the target object, achieving non-contact detection, filling a gap in non-contact detection scenarios, and providing good detection speed and accuracy.
In some possible implementations, the neural network is obtained as follows: acquiring a sample thermal image set and labels corresponding to multiple sample thermal images in the sample thermal image set, wherein, for each sample thermal image, the sample thermal image includes a contour of a sample target object rendered based on temperature information of the sample target object, the label corresponding to the sample thermal image represents a target area of the sample target object, and the target area is the mouth-nose area or mask area of the sample target object; performing feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information; predicting target areas according to the sample feature information to obtain target area prediction results; and training the neural network according to the target area prediction results and the labels. With this configuration, the neural network can directly determine the target area, and by quickly and accurately locating the position used for respiration rate detection, the accuracy and speed of respiration rate detection are improved.
In some possible implementations, performing feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information includes: for each sample thermal image, performing initial feature extraction on the sample thermal image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, the composite feature extraction including channel feature extraction; filtering the first feature map based on salient features in the first feature information to obtain a filtering result; extracting second feature information from the filtering result; and fusing the first feature information and the second feature information to obtain the sample feature information of the sample thermal image. With this configuration, the validity and discriminative power of the second feature information are improved, which in turn enriches the final sample feature information.
In some possible implementations, the neural network includes a first neural network and a second neural network, and extracting, for each of the at least two thermal images, the target area in the thermal image based on the neural network includes: extracting a face target in the thermal image based on the first neural network; and extracting the target area in the face target based on the second neural network, the target area being the mask area in the face target. With this configuration, the mask area is determined only after a face has been determined, avoiding respiration rate analysis of a mask not worn on a face and improving respiration rate detection accuracy and speed.
In some possible implementations, extracting, for each of the at least two thermal images, the target area in the thermal image based on the neural network includes: extracting a key area in the thermal image based on the neural network, the key area being a mask area or a mouth-nose area; determining a respiration rate detection scene; determining a target mapping relationship between the key area and the target area according to the respiration rate detection scene; and determining the target area according to the target mapping relationship and the key area. With this configuration, for different respiration rate detection scenes the target area can be determined from the key area directly recognized by the neural network, so that the target area is determined indirectly and automatically, further extending the application scenarios of the embodiments of the present disclosure.
In some possible implementations, determining the respiration rate detection scene includes: acquiring scene mapping information, the scene mapping information representing the correspondence between scene feature information and scene categories; performing scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information; and obtaining, according to the target scene feature information and the scene mapping information, a target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene. With this configuration, the respiration rate detection scene can be determined automatically, efficiently, and accurately from preset scene mapping information and the extracted target scene feature information, without manual intervention.
In some possible implementations, determining the target mapping relationship between the key area and the target area according to the respiration rate detection scene includes: acquiring mapping relationship management information, the mapping relationship management information representing the correspondence between mapping relationships and scene categories; and obtaining the target mapping relationship according to the target scene category and the mapping relationship management information. With this configuration, the target mapping relationship can be determined automatically, efficiently, and accurately from preset mapping relationship management information and the respiration rate detection scene, without manual intervention.
In some possible implementations, performing scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information includes: performing multi-scale feature extraction on at least one of the at least two thermal images to obtain feature extraction results of multiple levels; fusing the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels; and fusing the feature fusion results in order of decreasing level to obtain the target scene feature information. With this configuration, the target scene feature information contains not only rich feature information but also sufficient context information.
In some possible implementations, extracting the temperature information corresponding to the target area includes: determining temperature information corresponding to pixels in the target area; and calculating the temperature information corresponding to the target area according to the temperature information corresponding to the pixels. With this configuration, by calculating temperature information for each target area, the respiration rate of the target object can then be determined.
In some possible implementations, determining the respiration rate of the target object according to the extracted temperature information includes: sorting the temperature information in time order to obtain a temperature sequence; performing noise reduction processing on the temperature sequence to obtain a target temperature sequence; and determining the respiration rate of the target object based on the target temperature sequence. With this configuration, by building a temperature sequence and denoising it, noise that affects respiration rate calculation can be filtered out, making the obtained respiration rate more accurate.
In some possible implementations, performing noise reduction processing on the temperature sequence to obtain the target temperature sequence includes: determining a noise reduction processing strategy and a noise reduction processing manner; and processing the temperature sequence based on the noise reduction manner according to the noise reduction processing strategy to obtain the target temperature sequence, wherein the noise reduction processing strategy includes at least one of: noise reduction based on a high-frequency threshold, noise reduction based on a low-frequency threshold, filtering out random noise, and posterior noise reduction; and the noise reduction processing is implemented in at least one of the following manners: independent component analysis, Laplacian pyramid, bandpass filtering, wavelet, and Hamming window. With this configuration, the obtained target temperature sequence is smooth, with little noise and clearly defined peaks and valleys, so that the respiration rate determined from it can be more accurate.
In some possible implementations, determining the respiration rate of the target object based on the target temperature sequence includes: determining multiple key points in the target temperature sequence, the key points all being peak points or all being valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; and determining the respiration rate according to the time intervals. With this configuration, the respiration rate of the target object can be accurately determined from the obtained time intervals.
According to a second aspect of the present disclosure, a thermal-imaging-based respiration rate detection apparatus is provided, including: a thermal image acquisition module configured to acquire at least two thermal images, the at least two thermal images including a contour of a target object rendered based on temperature information of the target object; a target area extraction module configured, for each of the at least two thermal images, to extract a target area in the thermal image based on a neural network; a temperature information extraction module configured to extract temperature information corresponding to the target area, wherein the temperature information of the target object in the at least two thermal images varies periodically with the respiration of the target object; and a respiration rate determination module configured to determine the respiration rate of the target object according to the extracted temperature information.
In some possible implementations, the neural network is obtained as follows: acquiring a sample thermal image set and labels corresponding to multiple sample thermal images in the set, wherein, for each sample thermal image, the sample thermal image includes a contour of a sample target object rendered based on the sample target object's temperature information, the label represents the sample target object's target area, and the target area is the sample target object's mouth-nose area or mask area; performing feature extraction on the multiple sample thermal images to obtain sample feature information; predicting target areas from the sample feature information to obtain target area prediction results; and training the neural network according to the prediction results and the labels.
In some possible implementations, the apparatus further includes a sample feature information extraction module configured to: for each sample thermal image, perform initial feature extraction on the sample thermal image to obtain a first feature map; perform composite feature extraction on the first feature map to obtain first feature information, the composite feature extraction including channel feature extraction; filter the first feature map based on salient features in the first feature information to obtain a filtering result; extract second feature information from the filtering result; and fuse the first feature information and the second feature information to obtain the sample feature information of the sample thermal image.
In some possible implementations, the neural network includes a first neural network and a second neural network, and the target area extraction module extracts the face target in the thermal image based on the first neural network and extracts the target area in the face target based on the second neural network, the target area being the mask area in the face target.
In some possible implementations, the target area extraction module is configured to extract a key area in the thermal image based on the neural network, the key area being a mask area or a mouth-nose area; determine a respiration rate detection scene; determine a target mapping relationship between the key area and the target area according to the respiration rate detection scene; and determine the target area according to the target mapping relationship and the key area.
In some possible implementations, the target area extraction module is further configured to acquire scene mapping information representing the correspondence between scene feature information and scene categories; perform scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information; and obtain, from the target scene feature information and the scene mapping information, the target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene.
In some possible implementations, the target area extraction module is further configured to acquire mapping relationship management information representing the correspondence between mapping relationships and scene categories, and obtain the target mapping relationship from the target scene category and the mapping relationship management information.
In some possible implementations, the target area extraction module is further configured to perform multi-scale feature extraction on at least one of the at least two thermal images to obtain feature extraction results of multiple levels; fuse the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels; and fuse the feature fusion results in order of decreasing level to obtain the target scene feature information.
In some possible implementations, the temperature information extraction module is configured to determine temperature information corresponding to pixels in the target area, and calculate the temperature information corresponding to the target area from the temperature information of those pixels.
In some possible implementations, the respiration rate determination module is configured to sort the temperature information in time order to obtain a temperature sequence; perform noise reduction on the temperature sequence to obtain a target temperature sequence; and determine the respiration rate of the target object based on the target temperature sequence.
In some possible implementations, the respiration rate determination module is further configured to determine a noise reduction processing strategy and a noise reduction processing manner, and process the temperature sequence based on the noise reduction manner according to the strategy to obtain the target temperature sequence; the noise reduction processing strategy includes at least one of: noise reduction based on a high-frequency threshold, noise reduction based on a low-frequency threshold, filtering out random noise, and posterior noise reduction; and the noise reduction processing is implemented in at least one of the following manners: independent component analysis, Laplacian pyramid, bandpass filtering, wavelet, and Hamming window.
In some possible implementations, the respiration rate determination module is further configured to determine multiple key points in the target temperature sequence, the key points all being peak points or all being valley points; for any two adjacent key points, determine the time interval between them; and determine the respiration rate from the time intervals.
According to a third aspect of the present disclosure, an electronic device is provided, including at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the thermal-imaging-based respiration rate detection method of any one of the implementations of the first aspect by executing the instructions stored in the memory.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, in which at least one instruction or at least one program is stored, the at least one instruction or at least one program being loaded and executed by a processor to implement the thermal-imaging-based respiration rate detection method of any one of the implementations of the first aspect.
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
To more clearly illustrate the technical solutions and advantages of the embodiments of this specification or of the related art, the following briefly introduces the drawings required for describing the embodiments or the related art. Clearly, the drawings described below show only some embodiments of this specification, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 shows a schematic flowchart of a thermal-imaging-based respiration rate detection method according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a thermal image according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of extracting a target area based on a neural network according to an embodiment of the present disclosure;
FIG. 4 shows a schematic flowchart of a feature extraction method according to an embodiment of the present disclosure;
FIG. 5 shows a schematic flowchart of a target area determination method according to an embodiment of the present disclosure;
FIG. 6 shows a schematic flowchart of a method for determining a respiration rate detection scene according to an embodiment of the present disclosure;
FIG. 7 shows a schematic flowchart of a scene feature extraction method according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a feature extraction network according to an embodiment of the present disclosure;
FIG. 9 shows a schematic flowchart of a method for determining the respiration rate of a target object from extracted temperature information according to an embodiment of the present disclosure;
FIG. 10 shows a schematic flowchart of a method for determining the respiration rate of a target object from an extracted target temperature sequence according to an embodiment of the present disclosure;
FIG. 11 shows a block diagram of a thermal-imaging-based respiration rate detection apparatus according to an embodiment of the present disclosure;
FIG. 12 shows a block diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 13 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings in the embodiments of this specification. Clearly, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this specification without creative effort fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or server that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of multiple items; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail in order to highlight the subject matter of the present disclosure.
An embodiment of the present disclosure provides a thermal-imaging-based respiration rate detection method. The method can analyze thermal images captured by a thermal imaging camera to determine the respiration rate of the photographed subject, obtaining the respiration rate without direct contact with the subject and meeting the objective need for contactless respiration rate measurement. The embodiments of the present disclosure can be used in various specific scenarios requiring contactless respiration rate measurement, and the embodiments do not specifically limit such scenarios. For example, the method provided by the embodiments of the present disclosure can be used for contactless respiration rate detection in scenarios requiring isolation, in crowded scenarios, and in certain public places with special requirements.
The thermal-imaging-based respiration rate detection method provided by the embodiments of the present disclosure may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the method may be implemented by a processor invoking computer-readable instructions stored in a memory. The method of the embodiments of the present disclosure is described below using an electronic device as the executing subject.
FIG. 1 shows a schematic flowchart of a thermal-imaging-based respiration rate detection method according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes:
S101: Acquire at least two thermal images, the thermal images including a contour of a target object rendered based on temperature information of the target object.
In the embodiments of the present disclosure, a thermal imaging camera may photograph the target object to obtain its temperature information. The contour of the target object is rendered based on the temperature information at each position of the target object, yielding a thermal image that includes the contour. The embodiments of the present disclosure detect the respiration rate from the periodic variation of temperature in the thermal images, which requires at least two thermal images. FIG. 2 shows a schematic diagram of a thermal image according to an embodiment of the present disclosure. Aiming the thermal imaging camera at a human face yields the face's temperature information, from which the face contour in FIG. 2 can be rendered.
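As a concrete illustration of this rendering step (a minimal sketch; the temperature range, function name, and grayscale mapping are assumptions, not the patent's rendering pipeline), a raw temperature matrix can be normalized into an 8-bit grayscale image in which warmer areas appear brighter:

```python
import numpy as np

def render_thermal_image(temps: np.ndarray, t_min: float = 20.0, t_max: float = 40.0) -> np.ndarray:
    """Map a matrix of temperatures (degrees C) to an 8-bit grayscale image.

    Temperatures outside [t_min, t_max] are clipped; warmer areas render
    brighter, so a face stands out against a cooler background as in FIG. 2.
    """
    clipped = np.clip(temps, t_min, t_max)
    scaled = (clipped - t_min) / (t_max - t_min)  # normalize to [0, 1]
    return (scaled * 255).astype(np.uint8)

# Example: a synthetic 4x4 temperature matrix with a warm "face" patch.
frame = np.full((4, 4), 22.0)
frame[1:3, 1:3] = 36.5
image = render_thermal_image(frame)
```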
The embodiments of the present disclosure do not limit the thermal imaging camera: it may be a fixed thermal imaging camera or a rotatable one. Nor do they limit how the camera is controlled: it may be triggered in response to a preset instruction, for example, when an operator or a related sensor triggers a related control, the camera starts shooting. In one embodiment, the camera may also be triggered in response to sensed information, for example, automatically starting to shoot when the ambient temperature rises to a preset threshold. In another embodiment, the camera may be triggered at scheduled times.
The embodiments of the present disclosure do not limit the shooting mode of the thermal imaging camera; its frame rate, resolution mode, output mode, and the like may be set according to the actual situation. When triggered to shoot, the camera may output the captured thermal images as a video stream or as multiple pictures. When a video stream is output, the embodiments of the present disclosure may parse the stream to obtain the multiple thermal images.
The embodiments of the present disclosure aim to measure respiration rate, a physiological parameter, so the target object is correspondingly a living body, such as a person. Human respiration rate detection is used as an example in the following detailed description.
S102: Extract the target area in each thermal image based on a neural network.
The present disclosure extracts the target area in each thermal image based on a neural network. Because the embodiments of the present disclosure use thermal images in which the contour of the target object is rendered from temperature information, and such images have emerged with improvements in thermal imaging and rendering technology, image processing methods for them are still relatively scarce; it is difficult to automatically extract sufficient information from such images, so analysis has largely relied on manual work. The embodiments of the present disclosure can process such thermal images and automatically determine the target areas in them.
Specifically, automatic analysis may be based on a neural network (NN), a deep learning model in the field of machine learning that imitates the structure and function of biological neural networks. Machine learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications span all fields of artificial intelligence. Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration. Deep learning (DL) is a branch of machine learning: an algorithm that attempts to perform high-level abstraction of data using multiple processing layers containing complex structures or composed of multiple nonlinear transformations.
The embodiments of the present disclosure do not limit the target area; it may point to the breathing position of the target object. For example, the target area may be the mouth-nose area of a face or the area where a mask is worn on the face. Alternatively, the target area may be an area that changes correspondingly as the target object breathes. It may be determined from a key area, which can be recognized by the neural network. The target mapping relationship between the key area and the target area may be determined according to the actual situation. For example, in a scene where a person lies on their side, the key area may be the mouth-nose position or mask position determined by the neural network; when the person lies on their left side, exhaled air moves downward and to the left, so the target area may be located at the lower left of the key area.
The embodiments of the present disclosure also do not limit the number of target areas; each thermal image may have one or more target areas. A single target area is used as an example below.
In one embodiment, the target area is the mouth-nose area or the mask area, and it can be extracted directly from the thermal image by the neural network. In the embodiments of the present disclosure, the mouth-nose area may be the mouth area and/or the nose area; in practice, the mouth area and the nose area may be extracted separately as target areas, or merged and extracted as one target area.
Referring to FIG. 3, which shows a schematic diagram of extracting target areas based on a neural network: the left image shows the mouth area and nose area extracted separately, the middle image extracts the mouth and nose areas as one target area, and the right image extracts a mask area as the target area.
Specifically, a sample thermal image set and labels corresponding to the sample thermal images in the set may be acquired. For each sample thermal image, the sample image includes a contour of a sample target object rendered based on the sample target object's temperature information, and the label represents the target area of the sample target object, which is the sample target object's mouth-nose area or mask area. Feature extraction is performed on the sample thermal images in the set to obtain sample feature information; target areas are predicted from the sample feature information to obtain target area prediction results; and the neural network is trained according to the prediction results and the labels. With this configuration, the neural network can directly determine the target area, and by quickly and accurately locating the position used for respiration rate detection, detection accuracy and speed are improved.
The embodiments of the present disclosure do not describe this training process in detail. For example, the neural network may extract features layer by layer based on a feature pyramid, predict target areas from the extracted sample feature information, and adjust its parameters through feedback based on the difference between the predicted target areas and the labels. Because thermal images are rendered from temperature information, their clarity may fall short of visible-light images; to obtain sufficiently discriminative feature information, the embodiments of the present disclosure optimize the feature extraction process.
In one embodiment, referring to FIG. 4, which shows a schematic flowchart of a feature extraction method according to an embodiment of the present disclosure, for each sample thermal image in the sample thermal image set, the feature extraction includes:
S1. Perform initial feature extraction on the sample thermal image to obtain a first feature map.
The embodiments of the present disclosure do not limit the specific method of initial feature extraction. For example, at least one level of convolution may be applied to the image to obtain the first feature map. During convolution, image feature extraction results at multiple scales may be obtained, and at least two of them may be fused to obtain the first feature map.
S2. Perform composite feature extraction on the first feature map to obtain first feature information, the composite feature extraction including channel feature extraction.
In one embodiment, performing composite feature extraction on the first feature map to obtain first feature information may include: performing image feature extraction on the first feature map to obtain a first extraction result; performing channel information extraction on the first feature map to obtain a second extraction result; and fusing the first and second extraction results to obtain the first feature information. The embodiments of the present disclosure do not limit the method of image feature extraction; for example, at least one level of convolution may be applied to the first feature map to obtain the first extraction result. Channel information extraction in the embodiments of the present disclosure may focus on mining the relationships among the channels of the first feature map; for example, it may be implemented by fusing multi-channel features. By fusing the two extraction results, composite feature extraction retains the low-order information of the first feature map itself while fully extracting high-order inter-channel information, improving the richness and expressiveness of the mined first feature information. At least one fusion method may be used in composite feature extraction; the embodiments do not limit it, and any one or combination of dimensionality reduction, addition, multiplication, inner product, convolution, and averaging may be used.
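The following PyTorch sketch shows one plausible reading of this composite feature extraction, for illustration only: a convolutional branch provides the first extraction result, a pooled channel-mixing branch provides the second, and the two are fused by multiplication and addition. The module name, branch structure, and fusion choice are assumptions, not the patent's implementation:

```python
import torch
import torch.nn as nn

class CompositeFeatureExtraction(nn.Module):
    """Fuses an image-feature branch with a channel-feature branch."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Image feature extraction: one level of convolution.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Channel information extraction: pool spatially, then mix channels.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        first_result = self.conv(first_feature_map)      # low-order image features
        second_result = self.channel(first_feature_map)  # inter-channel weights
        # Fusion by multiplication plus addition keeps both branches' information.
        return first_result * second_result + first_feature_map

x = torch.randn(1, 32, 56, 56)
first_feature_info = CompositeFeatureExtraction(32)(x)
```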
S3. Filter the first feature map based on the salient features in the first feature information.
In the embodiments of the present disclosure, the more salient and less salient regions of the first feature map may be determined from the first feature information, and the information in the more salient regions filtered out to obtain a filtering result. That is, the first feature information covers more salient regions and less salient regions; after the information in the more salient regions is filtered out, the filtering result contains only the less salient regions. In some embodiments, a salient feature may refer to signal information in the first feature information that closely matches the heartbeat frequency of a living body (for example, a person). Because salient features are scattered across the first feature information, roughly 70% of the information in the more salient regions may essentially match the heartbeat frequency, and the less salient regions actually contain salient features as well. The embodiments of the present disclosure do not limit the method of judging salient features; it may be based on a neural network or on expert experience.
S4. Extract second feature information from the filtering result.
Specifically, the salient features in the filtering result may be suppressed to obtain a second feature map. Suppressing the salient features in the filtering result to obtain the second feature map includes: performing feature extraction on the filtering result to obtain a target feature, performing composite feature extraction on the target feature to obtain target feature information, and filtering the target feature based on the salient features in the target feature information to obtain the second feature map. If a preset stop condition is not met (for example, the proportion of salient features in the second feature map falls below 5%, or the number of updates of the second feature map reaches a preset count), the filtering result is updated with the second feature map and the suppression step is repeated. When the stop condition is met, every piece of target feature information obtained is taken as the second feature information.
S5. Fuse the first feature information and the second feature information to obtain the sample feature information of the image.
With this configuration, salient features can be filtered out layer by layer based on a hierarchical structure, and composite feature extraction including channel information extraction can be performed on the filtering results, yielding second feature information that includes multiple pieces of target feature information. By mining discriminative information layer by layer, the validity and discriminative power of the second feature information are improved, which in turn enriches the final sample feature information. This feature extraction method can be used to extract features from sample thermal images and may be used wherever the embodiments of the present disclosure require training a neural network on sample thermal images.
In another embodiment, the target area is a mask area, and the neural network includes a first neural network and a second neural network. Extracting the target area in each thermal image based on the neural network includes: extracting the face target in each thermal image based on the first neural network; and extracting the target area in the face target based on the second neural network, the target area being the mask area in the face target. The inventive concept of training the first and second neural networks follows the description above and is not repeated here. With this configuration, the mask area is determined only after the face has been determined, avoiding respiration rate analysis of a mask not worn on a face and improving detection accuracy and speed.
In another embodiment, the mask area or mouth-nose area described above may be determined as a key area, and the target area then determined from the key area. Referring to FIG. 5, which shows a schematic flowchart of a target area determination method according to an embodiment of the present disclosure, the method includes:
S10. Extract the key area in each thermal image based on the neural network, the key area being a mask area or a mouth-nose area.
For the method of recognizing the mask area or mouth-nose area based on the neural network, refer to the description above; details are not repeated here.
S20. Determine the respiration rate detection scene.
The target mapping relationship between the key area and the target area may differ across respiration rate detection scenes. For example, if the target object sleeps on the left side, air is inhaled from the lower left through the mouth and nose and exhaled toward the lower left, so the target area lies at the lower left of the key area. If the target object sleeps on the right side, air is inhaled from the lower right and exhaled toward the lower right, so the target area lies at the lower right of the key area.
The embodiments of the present disclosure do not limit how the respiration rate detection scene is determined. In one embodiment, typical respiration rate detection scenes may be classified hierarchically: for example, major categories such as sleeping scenes, activity scenes, and sitting scenes, with subcategories representing the specific posture of the target object in each major category, for example whether a user in a sleeping scene sleeps on the left side, on the right side, or on the back. Each hierarchical class corresponds to a unique scene identifier; for example, the major category identifier for sleeping scenes is 10, and the identifiers for left-side, right-side, and supine sleeping are 10-1, 10-2, and 10-3. Acquiring the specific scene identifier entered by the user determines the respiration rate detection scene.
In another embodiment, the respiration rate detection scene may also be determined automatically. Referring to FIG. 6, which shows a schematic flowchart of a method for determining the respiration rate detection scene according to an embodiment of the present disclosure, the method includes:
S21. Acquire scene mapping information, the scene mapping information representing the correspondence between scene feature information and scene categories.
In some implementations, scene clustering may be performed on a large number of thermal images. The embodiments of the present disclosure do not limit the clustering method; for example, the images may be clustered into scenes according to the hierarchical classification above, yielding the thermal images corresponding to each scene category. For each scene category, feature information is extracted from its thermal images, and the scene feature information of that category is determined from the extraction results. The embodiments do not limit how the scene feature information is determined from the extraction results; for example, the extraction results may be clustered further and the feature information at a cluster center taken as the category's scene feature information, or several extraction results may be selected at random and their mean taken as the category's scene feature information.
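One way to realize this clustering step, sketched below under the assumption that each thermal image has already been reduced to a fixed-length feature vector (scikit-learn's KMeans and the random features are used purely for illustration), is to take cluster centers as the per-category scene feature information:

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed input: one pre-extracted feature vector per thermal image.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))

# Cluster the corpus into scene categories (3 here, e.g. sleep/activity/sitting).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# The cluster centers serve as the scene feature information of each category,
# forming the scene mapping information: category id -> representative feature.
scene_mapping = {category: center for category, center in enumerate(kmeans.cluster_centers_)}
```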
S22. Perform scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information.
The embodiments of the present disclosure may assume the thermal images from step S101 all belong to the same scene, so one or more of them may be selected for scene feature extraction to obtain the target scene feature information. The embodiments do not limit the specific manner of scene feature extraction. In one embodiment, referring to FIG. 7, which shows a schematic flowchart of a scene feature extraction method according to an embodiment of the present disclosure, the method includes:
S221. Perform multi-scale feature extraction on the thermal image to obtain feature extraction results of multiple levels.
The embodiments of the present disclosure may perform scene feature extraction based on a feature extraction network. Referring to FIG. 8, which shows a schematic diagram of a feature extraction network (for scene feature extraction) according to an embodiment of the present disclosure: the network extends a standard convolutional network through a top-down pathway and lateral connections, so that rich, multi-scale feature extraction results can be extracted effectively from a thermal image of a single resolution. The figure sketches only three layers, but in practice the network may include four or more. The downsampling layers of the network can output feature extraction results at each scale; "downsampling layer" is in fact a general term for the network layers that perform feature aggregation, and may specifically be a max pooling layer, an average pooling layer, or the like. The embodiments of the present disclosure do not limit the specific structure of the downsampling layers.
S222. Fuse the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels.
In the embodiments of the present disclosure, the feature extraction results from different layers of the feature extraction network have different scales, and they may be fused in order of increasing level to obtain feature fusion results of multiple levels. Taking FIG. 8 as an example, the feature extraction network may include three feature extraction layers outputting, in order of increasing level, feature extraction results A1, B1, and C1. The embodiments do not limit the representation of these results; A1, B1, and C1 may be represented by feature maps, feature matrices, or feature vectors. A1, B1, and C1 may be fused in sequence to obtain feature fusion results of multiple levels. For example, A1's own inter-channel information may be fused directly to obtain fusion result A2; A1 and B1 may be fused to obtain fusion result B2; and A1, B1, and C1 may be fused to obtain fusion result C2. The embodiments do not limit the specific fusion method; any one or combination of dimensionality reduction, addition, multiplication, inner product, and convolution may be used.
S223. Fuse the feature fusion results in order of decreasing level to obtain the target scene feature information.
For example, the fusion results C2, B2, and A2 obtained above may be fused in sequence to obtain the target scene feature information. The fusion method used here may be the same as or different from that of the previous step; the embodiments of the present disclosure do not limit it. With this configuration, bidirectional fusion makes the target scene feature information contain not only rich feature information but also sufficient context information.
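A minimal PyTorch sketch of this bidirectional fusion, assuming addition as the fusion operator and bilinear resizing to reconcile scales (both assumptions; the patent leaves the fusion method open), might look as follows:

```python
import torch
import torch.nn.functional as F

def bidirectional_fuse(a1, b1, c1):
    """Fuse three feature maps (increasing level: a1, b1, c1) in both directions."""
    def resize_to(x, ref):
        return F.interpolate(x, size=ref.shape[-2:], mode="bilinear", align_corners=False)

    # Pass in order of increasing level: A2, B2, C2.
    a2 = a1
    b2 = b1 + resize_to(a1, b1)
    c2 = c1 + resize_to(b1, c1) + resize_to(a1, c1)

    # Pass in order of decreasing level: fuse C2, B2, A2 into one map.
    fused = a2 + resize_to(b2 + resize_to(c2, b2), a2)
    # Global average pooling yields a fixed-length scene feature vector.
    return fused.mean(dim=(-2, -1))

a1 = torch.randn(1, 64, 56, 56)  # lowest level, highest resolution
b1 = torch.randn(1, 64, 28, 28)
c1 = torch.randn(1, 64, 14, 14)  # highest level, lowest resolution
target_scene_feature = bidirectional_fuse(a1, b1, c1)  # shape: (1, 64)
```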
S23. Obtain, according to the target scene feature information and the scene mapping information, the target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene.
In some embodiments, the scene category whose scene feature information is closest to the target scene feature information may be determined as the target scene category. With this configuration, the target scene category can be determined automatically, and given the optimized extraction of target scene feature information, the automatically determined target scene category is more accurate.
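Continuing the earlier clustering sketch (same assumed variable names), the nearest-center lookup could then be as simple as:

```python
import numpy as np

def lookup_scene_category(target_feature: np.ndarray, scene_mapping: dict) -> int:
    """Return the scene category whose feature vector is closest (Euclidean)."""
    categories = list(scene_mapping)
    centers = np.stack([scene_mapping[c] for c in categories])
    distances = np.linalg.norm(centers - target_feature, axis=1)
    return categories[int(np.argmin(distances))]
```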
S30. Determine the target mapping relationship between the key area and the target area according to the respiration rate detection scene.
In one embodiment, mapping relationship management information may be acquired, the management information representing the correspondence between mapping relationships and scene categories, and the target mapping relationship obtained from the target scene category and the mapping relationship management information. A mapping relationship represents the correspondence between a key area and a target area; the mapping relationship management information represents the correspondence between mapping relationships and scene categories. Both may be set according to the actual situation and modified as the situation changes, so that the solution in the embodiments of the present disclosure can be updated adaptively as application scenarios expand, fully meeting the need to provide contactless respiration rate detection services in a variety of scenarios.
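The mapping relationship management information could be as lightweight as a table of box offsets keyed by scene identifier. The sketch below reuses the hypothetical 10-1/10-2/10-3 sleep-posture identifiers mentioned earlier; the fractional offsets are illustrative assumptions:

```python
# Each mapping shifts the key-area box (x, y, w, h) by a fraction of its size.
MAPPING_MANAGEMENT = {
    "10-1": (-0.5, 0.5),  # left-side sleeping: target area at lower left
    "10-2": (0.5, 0.5),   # right-side sleeping: target area at lower right
    "10-3": (0.0, 0.0),   # supine: target area coincides with the key area
}

def apply_target_mapping(key_area, scene_id):
    """Derive the target area from the key area using the scene's mapping."""
    x, y, w, h = key_area
    dx, dy = MAPPING_MANAGEMENT[scene_id]
    return (x + dx * w, y + dy * h, w, h)

target_area = apply_target_mapping((120, 80, 40, 30), "10-1")
```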
S40. Determine the target area according to the target mapping relationship and the key area.
In this embodiment, for different respiration rate detection scenes, the target area can be determined from the key area directly recognized by the neural network, so the target area is determined indirectly and automatically, further extending the application scenarios of the embodiments of the present disclosure.
S103. Extract the temperature information corresponding to each target area.
In one embodiment, for each target area, the temperature information corresponding to the relevant pixels in the target area may be determined, and the temperature information corresponding to the target area calculated from the temperature information of those relevant pixels. By calculating the temperature information of each target area, the respiration rate of the target object can then be determined.
The embodiments of the present disclosure do not limit the relevant pixels. For example, every pixel in the target area may be a relevant pixel. In one embodiment, pixels may also be filtered by the temperature information of each pixel in the target area: pixels whose temperature information does not meet a preset temperature requirement are filtered out, and the unfiltered pixels are taken as the relevant pixels. The embodiments do not limit the preset temperature requirement; for example, an upper temperature limit, a lower temperature limit, or a temperature range may be specified.
The embodiments of the present disclosure do not limit the specific method of calculating the temperature information corresponding to a target area. For example, the mean or weighted mean of the temperature information of the relevant pixels may be determined as the target area's temperature information. The embodiments do not limit the weights, which may be set by users according to actual needs. In one embodiment, a weight may be inversely related to the distance between the corresponding relevant pixel and the center of the target area: the closer a relevant pixel is to the center, the higher its weight; the farther away, the lower its weight.
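The following numpy sketch shows one such distance-inverse weighting combined with the pixel-filtering step above (the exact weighting function and temperature bounds are assumptions; the description only requires weights that decrease with distance from the center):

```python
import numpy as np

def roi_temperature(temps: np.ndarray, t_low: float = 28.0, t_high: float = 42.0) -> float:
    """Distance-weighted mean temperature of a rectangular target area.

    Pixels outside [t_low, t_high] are excluded; the remaining pixels are
    weighted by 1 / (1 + distance to the area's center).
    """
    h, w = temps.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    weights = 1.0 / (1.0 + np.hypot(ys - cy, xs - cx))

    valid = (temps >= t_low) & (temps <= t_high)  # preset temperature requirement
    return float(np.average(temps[valid], weights=weights[valid]))

roi = np.random.default_rng(0).normal(34.0, 0.5, size=(30, 40))
print(roi_temperature(roi))
```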
S104. The temperature information varies periodically with the respiration of the target object; determine the respiration rate of the target object according to the extracted temperature information.
The embodiments of the present disclosure hold that the respiration of the target object causes the temperature of the target area to vary periodically: when the target object inhales, the temperature of the target area falls, and when the target object exhales, it rises. By analyzing the periodic variation of the extracted temperature information, the respiration rate of the target object can be determined.
Referring to FIG. 9, which shows a schematic flowchart of a method for determining the respiration rate of the target object from the extracted temperature information according to an embodiment of the present disclosure, the method includes:
S1041. Sort the temperature information in time order to obtain a temperature sequence.
One temperature sequence can be obtained for each target object. Taking the 200 thermal images obtained in step S101 as an example, if each thermal image was captured of target object A, then a target area for A can be extracted from each image and its temperature information obtained, yielding a temperature sequence of 200 entries. If each thermal image includes N target objects, N temperature sequences of 200 entries each can be obtained.
S1042. Perform noise reduction processing on the temperature sequence to obtain a target temperature sequence.
In the embodiments of the present disclosure, a noise reduction processing strategy and a noise reduction processing manner may be determined, and the temperature sequence processed based on the noise reduction manner according to the strategy to obtain the target temperature sequence.
The embodiments of the present disclosure do not limit the specific contents of the strategy and manner. For example, the noise reduction processing strategy includes at least one of: noise reduction based on a high-frequency threshold, noise reduction based on a low-frequency threshold, filtering out random noise, and posterior noise reduction. For example, the noise reduction processing is implemented in at least one of the following manners: independent component analysis, Laplacian pyramid, bandpass filtering, wavelet, and Hamming window.
Taking posterior noise reduction as an example, respiration rate verification conditions and empirical noise-reduction parameters corresponding to posterior noise reduction may be set, and the temperature sequence denoised according to the empirical parameters to obtain the target temperature sequence. The respiration rate of the target object is determined from the target temperature sequence and verified against the verification conditions; if the verification passes, the denoising effect of the empirical parameters is deemed acceptable, and when step S1042 is subsequently executed again, denoising can be performed directly with those parameters. The embodiments of the present disclosure do not limit how the empirical parameters are determined; they may be obtained from expert experience.
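A minimal sketch of this posterior scheme, assuming a plausible-breathing-band verification condition (0.1 to 0.7 Hz, i.e. 6 to 42 breaths per minute), a Butterworth bandpass as the denoising manner, and a candidate list of empirical cutoff parameters (all assumptions for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # assumed sampling rate of the temperature sequence, in Hz

def denoise(seq, low_hz, high_hz):
    b, a = butter(2, [low_hz, high_hz], btype="bandpass", fs=FS)
    return filtfilt(b, a, seq)

def estimate_rate_hz(seq):
    spectrum = np.abs(np.fft.rfft(seq - seq.mean()))
    freqs = np.fft.rfftfreq(len(seq), d=1.0 / FS)
    return freqs[np.argmax(spectrum)]

def posterior_denoise(seq, candidates=((0.1, 0.7), (0.05, 1.0))):
    """Try empirical parameters until the resulting rate passes verification."""
    for low_hz, high_hz in candidates:
        target_seq = denoise(seq, low_hz, high_hz)
        rate = estimate_rate_hz(target_seq)
        if 0.1 <= rate <= 0.7:  # verification condition: plausible breathing band
            return target_seq, (low_hz, high_hz)  # accept these parameters
    return seq, None

t = np.arange(0, 60, 1.0 / FS)
noisy = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
clean, accepted_params = posterior_denoise(noisy)
```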
S1043. Determine the respiration rate of the target object based on the target temperature sequence.
By building the temperature sequence and denoising it, noise that affects respiration rate calculation can be filtered out, making the obtained respiration rate more accurate. Referring to FIG. 10, which shows a schematic flowchart of a method for determining the respiration rate of the target object from the extracted target temperature sequence according to an embodiment of the present disclosure, the method includes:
S10431. Determine multiple key points in the target temperature sequence, the key points all being peak points or all being valley points.
S10432. For any two adjacent key points, determine the time interval between the two adjacent key points.
For the N extracted key points, a time interval can be calculated for every two adjacent key points, yielding N-1 time intervals.
S10433. Determine the respiration rate according to the time intervals.
The embodiments of the present disclosure do not limit the specific method of determining the respiration rate from the time intervals. For example, for the N-1 time intervals, the reciprocal of one of them may be determined as the respiration rate, or the respiration rate may be determined from some or all of the intervals, for example by determining the reciprocal of the mean of those time intervals as the respiration rate. By calculating the time intervals between adjacent key points, the embodiments of the present disclosure can determine the respiration rate accurately.
The respiration rate detection method provided by the embodiments of the present disclosure can detect one or more target objects, as long as the target objects are within the field of view of the thermal imaging camera. The respiration rate is determined simply by taking thermal images of the target object, without contact, so the method can be used widely in various scenarios. For example, in hospital ward monitoring, a patient's respiration rate can be monitored without the patient wearing any equipment, reducing discomfort and improving the quality, effect, and efficiency of patient monitoring. In enclosed scenes such as offices or office-building lobbies, the respiration rates of those present can be detected to check for abnormality. In infant care, an infant's breathing can be detected to prevent suffocation from food blocking the airway, and the infant's respiration rate can be analyzed in real time to assess its health. In scenes with high infection risk, a remotely controlled thermal imaging camera can photograph a target object that may become a source of infection, monitoring the target object's vital signs while avoiding infection.
With the thermal-imaging-based respiration rate detection method provided by the embodiments of the present disclosure, the respiration rate of a target object can be determined by analyzing thermal images captured by a thermal imaging camera, so that a respiration rate detection result is obtained without contacting the target object, achieving non-contact detection, filling a gap in non-contact detection scenarios, and providing good detection speed and accuracy.
Those skilled in the art will understand that, in the above methods of the detailed description, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
It will be appreciated that the method embodiments mentioned above may be combined with one another to form combined embodiments without departing from the principles and logic; details are omitted here for brevity.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (15)

  1. A thermal-imaging-based respiration rate detection method, comprising:
    acquiring at least two thermal images, the at least two thermal images comprising a contour of a target object rendered based on temperature information of the target object;
    for each of the at least two thermal images,
    extracting a target area in the thermal image based on a neural network; and
    extracting temperature information corresponding to the target area, wherein the temperature information of the target object in the at least two thermal images varies periodically with the respiration of the target object; and
    determining the respiration rate of the target object according to the extracted temperature information.
  2. The method according to claim 1, wherein the neural network is obtained by:
    acquiring a sample thermal image set and labels corresponding to multiple sample thermal images in the sample thermal image set, wherein, for each sample thermal image, the sample thermal image comprises a contour of a sample target object rendered based on temperature information of the sample target object, the label corresponding to the sample thermal image represents a target area of the sample target object, and the target area is a mouth-nose area or mask area of the sample target object;
    performing feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information;
    predicting target areas according to the sample feature information to obtain target area prediction results; and
    training the neural network according to the target area prediction results and the labels.
  3. The method according to claim 2, wherein performing feature extraction on the multiple sample thermal images in the sample thermal image set to obtain sample feature information comprises:
    for each sample thermal image,
    performing initial feature extraction on the sample thermal image to obtain a first feature map;
    performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction;
    filtering the first feature map based on salient features in the first feature information to obtain a filtering result;
    extracting second feature information from the filtering result; and
    fusing the first feature information and the second feature information to obtain sample feature information of the sample thermal image.
  4. The method according to claim 1, wherein the neural network comprises a first neural network and a second neural network, and extracting, for each of the at least two thermal images, the target area in the thermal image based on the neural network comprises:
    extracting a face target in the thermal image based on the first neural network; and
    extracting the target area in the face target based on the second neural network, the target area being a mask area in the face target.
  5. The method according to claim 1, wherein extracting, for each of the at least two thermal images, the target area in the thermal image based on the neural network comprises:
    extracting a key area in the thermal image based on the neural network, the key area being a mask area or a mouth-nose area;
    determining a respiration rate detection scene;
    determining a target mapping relationship between the key area and the target area according to the respiration rate detection scene; and
    determining the target area according to the target mapping relationship and the key area.
  6. The method according to claim 5, wherein determining the respiration rate detection scene comprises:
    acquiring scene mapping information, the scene mapping information representing a correspondence between scene feature information and scene categories;
    performing scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information; and
    obtaining, according to the target scene feature information and the scene mapping information, a target scene category corresponding to the target scene feature information, the target scene category pointing to the respiration rate detection scene.
  7. The method according to claim 6, wherein determining the target mapping relationship between the key area and the target area according to the respiration rate detection scene comprises:
    acquiring mapping relationship management information, the mapping relationship management information representing a correspondence between mapping relationships and scene categories; and
    obtaining the target mapping relationship according to the target scene category and the mapping relationship management information.
  8. The method according to claim 6 or 7, wherein performing scene feature extraction on at least one of the at least two thermal images to obtain target scene feature information comprises:
    performing multi-scale feature extraction on at least one of the at least two thermal images to obtain feature extraction results of multiple levels;
    fusing the feature extraction results in order of increasing level to obtain feature fusion results of multiple levels; and
    fusing the feature fusion results in order of decreasing level to obtain the target scene feature information.
  9. The method according to any one of claims 1 to 8, wherein extracting the temperature information corresponding to the target area comprises:
    determining temperature information corresponding to pixels in the target area; and
    calculating the temperature information corresponding to the target area according to the temperature information corresponding to the pixels.
  10. The method according to any one of claims 1 to 8, wherein determining the respiration rate of the target object according to the extracted temperature information comprises:
    sorting the temperature information in time order to obtain a temperature sequence;
    performing noise reduction processing on the temperature sequence to obtain a target temperature sequence; and
    determining the respiration rate of the target object based on the target temperature sequence.
  11. The method according to claim 10, wherein performing noise reduction processing on the temperature sequence to obtain the target temperature sequence comprises:
    determining a noise reduction processing strategy and a noise reduction processing manner; and
    processing the temperature sequence based on the noise reduction manner according to the noise reduction processing strategy to obtain the target temperature sequence,
    wherein the noise reduction processing strategy comprises at least one of: noise reduction based on a high-frequency threshold, noise reduction based on a low-frequency threshold, filtering out random noise, and posterior noise reduction; and the noise reduction processing is implemented in at least one of the following manners: independent component analysis, Laplacian pyramid, bandpass filtering, wavelet, and Hamming window.
  12. The method according to claim 10 or 11, wherein determining the respiration rate of the target object based on the target temperature sequence comprises:
    determining multiple key points in the target temperature sequence, the key points all being peak points or all being valley points;
    for any two adjacent key points, determining a time interval between the two adjacent key points; and
    determining the respiration rate according to the time intervals.
  13. A thermal-imaging-based respiration rate detection apparatus, comprising:
    a thermal image acquisition module configured to acquire at least two thermal images, the at least two thermal images comprising a contour of a target object rendered based on temperature information of the target object;
    for each of the at least two thermal images,
    a target area extraction module configured to extract a target area in the thermal image based on a neural network; and
    a temperature information extraction module configured to extract temperature information corresponding to the target area, wherein the temperature information of the target object in the at least two thermal images varies periodically with the respiration of the target object; and
    a respiration rate determination module configured to determine the respiration rate of the target object according to the extracted temperature information.
  14. A computer-readable storage medium storing at least one instruction or at least one program, the at least one instruction or at least one program being loaded and executed by a processor to implement the thermal-imaging-based respiration rate detection method according to any one of claims 1 to 12.
  15. An electronic device, comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the thermal-imaging-based respiration rate detection method according to any one of claims 1 to 12 by executing the instructions stored in the memory.
PCT/CN2022/096186 2021-07-30 2022-05-31 Thermal-imaging-based respiration rate detection method and apparatus, and electronic device WO2023005402A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110877891.9A CN113576452A (zh) 2021-07-30 2021-07-30 Thermal-imaging-based respiration rate detection method and apparatus, and electronic device
CN202110877891.9 2021-07-30

Publications (1)

Publication Number Publication Date
WO2023005402A1 true WO2023005402A1 (zh) 2023-02-02

Family

ID=78253358

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/096186 WO2023005402A1 (zh) 2021-07-30 2022-05-31 Thermal-imaging-based respiration rate detection method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN113576452A (zh)
WO (1) WO2023005402A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113576451A (zh) * 2021-07-30 2021-11-02 Shenzhen SenseTime Technology Co., Ltd. Respiration rate detection method and apparatus, storage medium, and electronic device
CN113576452A (zh) * 2021-07-30 2021-11-02 Shenzhen SenseTime Technology Co., Ltd. Thermal-imaging-based respiration rate detection method and apparatus, and electronic device
TWI816602B (zh) * 2022-11-17 2023-09-21 National Yunlin University of Science and Technology Physiological signal measurement method and system


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8790269B2 (en) * 2011-05-09 2014-07-29 Xerox Corporation Monitoring respiration with a thermal imaging system
WO2015030611A1 (en) * 2013-09-02 2015-03-05 Interag Method and apparatus for determining respiratory characteristics of an animal
US10064559B2 (en) * 2015-06-14 2018-09-04 Facense Ltd. Identification of the dominant nostril using thermal measurements
WO2018069790A1 (en) * 2016-10-14 2018-04-19 Facense Ltd. Systems and methods to detect breathing parameters and provide biofeedback
KR102054213B1 (ko) * 2017-11-24 2019-12-10 Yonsei University Industry-Academic Cooperation Foundation Respiration measurement system using a thermal imaging camera
CN108335305B (zh) * 2018-02-09 2020-10-30 Beijing SenseTime Technology Development Co., Ltd. Image segmentation method and apparatus, electronic device, program, and medium
US11589776B2 (en) * 2018-11-06 2023-02-28 The Regents Of The University Of Colorado Non-contact breathing activity monitoring and analyzing through thermal and CO2 imaging
CN111667476B (zh) * 2020-06-09 2022-12-06 AInnovation (Guangzhou) Technology Co., Ltd. Cloth defect detection method and apparatus, electronic device, and readable storage medium
CN112924035B (zh) * 2021-01-27 2022-06-21 Zhongshan Hospital, Fudan University Body temperature and respiration rate extraction method based on a thermal imaging sensor, and application thereof
CN113128520B (zh) * 2021-04-28 2022-11-11 Beijing SenseTime Technology Development Co., Ltd. Image feature extraction method, target re-identification method, apparatus, and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103006187A (zh) * 2013-01-10 2013-04-03 Zhejiang University Non-contact vital sign data monitoring system and monitoring method
US20170055878A1 (en) * 2015-06-10 2017-03-02 University Of Connecticut Method and system for respiratory monitoring
CN111568388A (zh) * 2020-04-30 2020-08-25 Tsinghua University Non-contact mouth-breathing detection device and method, and storage medium
CN111839519A (zh) * 2020-05-26 2020-10-30 Hefei University of Technology Non-contact respiratory frequency monitoring method and system
CN111898580A (zh) * 2020-08-13 2020-11-06 Shanghai Jiao Tong University System, method, and device for collecting body temperature and respiration data of mask-wearing people
CN112057074A (zh) * 2020-07-21 2020-12-11 Beijing Megvii Technology Co., Ltd. Respiration rate measurement method and apparatus, electronic device, and computer storage medium
CN113592817A (zh) * 2021-07-30 2021-11-02 Shenzhen SenseTime Technology Co., Ltd. Method and apparatus for detecting respiration rate, storage medium, and electronic device
CN113576451A (zh) * 2021-07-30 2021-11-02 Shenzhen SenseTime Technology Co., Ltd. Respiration rate detection method and apparatus, storage medium, and electronic device
CN113576452A (zh) * 2021-07-30 2021-11-02 Shenzhen SenseTime Technology Co., Ltd. Thermal-imaging-based respiration rate detection method and apparatus, and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392527A (zh) * 2023-12-11 2024-01-12 Ocean University of China High-precision underwater target classification and detection method and model construction method therefor
CN117392527B (zh) * 2023-12-11 2024-02-06 Ocean University of China High-precision underwater target classification and detection method and model construction method therefor

Also Published As

Publication number Publication date
CN113576452A (zh) 2021-11-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22848012

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE