CN113576451A - Respiration rate detection method and device, storage medium and electronic equipment - Google Patents

Respiration rate detection method and device, storage medium and electronic equipment

Info

Publication number
CN113576451A
Authority
CN
China
Prior art keywords
target
area
target object
temperature information
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110871818.0A
Other languages
Chinese (zh)
Inventor
覃德智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202110871818.0A priority Critical patent/CN113576451A/en
Publication of CN113576451A publication Critical patent/CN113576451A/en
Priority to PCT/CN2022/096219 priority patent/WO2023005403A1/en
Withdrawn legal-status Critical Current


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/01 Measuring temperature of body parts; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Veterinary Medicine (AREA)
  • Physiology (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Fuzzy Systems (AREA)
  • Pulmonology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The disclosure relates to a respiration rate detection method and device, a storage medium, and an electronic apparatus. The method includes: acquiring at least two thermal images, where the thermal images are rendered based on temperature information of a preset area, the preset area contains a target object, and the breathing area of the target object falls within a target area determined based on the preset area; for each thermal image, extracting temperature information of the target area in the thermal image, the temperature information varying periodically with the respiration of the target object; and determining the respiration rate of the target object according to the extracted temperature information. By analyzing the temperature information of the target area of the thermal images, the respiration rate of the target object can be determined, realizing non-contact detection with good detection speed and detection accuracy.

Description

Respiration rate detection method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a respiration rate detection method and apparatus, a storage medium, and an electronic device.
Background
The respiration rate is an important physiological parameter for analyzing the state of the human body. The related art generally detects the respiration rate through contact-type devices, such as a respiration belt or an electronic respiration rate measuring instrument. Because such devices must touch the human body, the related art has difficulty meeting the requirement for non-contact respiration rate detection.
Disclosure of Invention
In order to solve at least one technical problem mentioned above, the present disclosure proposes a respiration rate detection method, apparatus, storage medium, and electronic device.
According to an aspect of the present disclosure, there is provided a respiration rate detection method including: acquiring at least two thermal images, wherein the thermal images are rendered based on temperature information of a preset area, the preset area includes a target object, and a breathing area of the target object falls within a target area determined based on the preset area; for each of the thermal images, extracting temperature information of the target area in the thermal image, the temperature information varying periodically with the respiration of the target object; and determining the respiration rate of the target object according to the extracted temperature information. Based on the above configuration, the respiration rate of the target object can be determined by analyzing the temperature information of the target area of the thermal images, so that a respiration rate detection result is obtained without contacting the target object. Non-contact detection is thus realized, filling the gap in non-contact detection scenarios with good detection speed and detection accuracy.
In some possible embodiments, the respiration rate detection method is applied to a respiration rate detection apparatus that includes a thermal imaging device, and the shooting area of the thermal imaging device is the preset area. The method further includes: acquiring a video stream, wherein the frame images in the video stream are rendered based on temperature information captured by the thermal imaging device; and displaying the video stream with the target area marked in the picture corresponding to the video stream. The acquiring at least two thermal images includes: extracting at least two target frame images from the video stream as the at least two thermal images, wherein a target frame image is a frame image in the video stream captured when a preset requirement is met, the preset requirement being that the target object has entered the preset area and the breathing area of the target object falls within the target area of the target frame image. Based on this configuration, the frame images captured by the thermal imaging device can be displayed with the target area marked in the display result, and the target object can adjust its posture and position according to the displayed picture, ensuring that its breathing area falls within the target area in the acquired thermal images and that the respiration rate obtained by analyzing the temperature information of the target area is accurate.
In some possible embodiments, before the acquiring the video stream, the method further includes: monitoring the temperature state of the preset area; in a case that the temperature state of the preset area changes, determining whether a target object has entered the preset area; and if so, generating a video stream acquisition instruction, wherein the video stream acquisition instruction is used for triggering and executing the operation of acquiring the video stream. Based on this configuration, by multiplexing the thermal imaging device, the resource consumption of the thermal imaging device can be greatly reduced without additional hardware cost.
In some possible embodiments, before the acquiring the video stream, the method further includes: monitoring the preset area based on a preset sensor, and generating a video stream acquisition instruction under the condition that a monitoring result indicates that a target object enters the preset area, wherein the video stream acquisition instruction is used for triggering and executing the operation of acquiring the video stream, and the preset sensor comprises a visual sensor or an induction sensor. Based on the configuration, the preset area can be monitored at low cost, and only when the target object enters the preset area, the thermal imaging device is triggered to start shooting and output a video stream, so that the resource consumption of the thermal imaging device is reduced to the maximum extent.
In some possible embodiments, the method further comprises: triggering the operation of acquiring the at least two thermal images to be executed in response to a respiration rate detection trigger instruction, wherein the respiration rate detection trigger instruction is used for indicating that the preset requirement is met, and the preset requirement is that the target object enters the preset area and the respiration area of the target object falls into the target area. Based on the above configuration, the start frequency of the respiration rate detection can be reduced, thereby reducing the resource consumption of the respiration rate detection.
In some possible embodiments, the method further comprises: predicting a breathing region of a frame image in the video stream based on a neural network to obtain a prediction result of the breathing region; and under the condition that the coincidence degree of the respiratory region prediction result and the target region is higher than a preset threshold value, triggering to execute the operation of acquiring at least two thermal images. Based on the above configuration, the start frequency of the respiration rate detection can be reduced, thereby reducing the resource consumption of the respiration rate detection.
In some possible embodiments, the neural network is obtained based on the following method: acquiring a sample thermal image and a label corresponding to the sample thermal image; the sample thermal image is obtained by rendering based on temperature information of a sample target object, and the label points to a breathing area of the sample target object; the breathing area is an oral-nasal area or a mask area; carrying out feature extraction on the sample thermal image to obtain a feature extraction result; predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result; and training the neural network according to the respiratory region prediction result and the label. Based on the configuration, the neural network can be enabled to have the capability of predicting the breathing zone.
In some possible embodiments, the performing feature extraction on the sample thermal image to obtain a feature extraction result includes: performing initial feature extraction on the sample thermal image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction; filtering the first feature map based on salient features in the first feature information; extracting second characteristic information in the filtering result; and fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result. Based on the configuration, the information with sufficient discrimination can be mined, the effectiveness and the discrimination of the second feature information can be improved, and the richness of the information in the final feature extraction result can be further improved.
In some possible embodiments, the extracting temperature information corresponding to the target region in each of the target frame images includes: for a target area in each target frame image, determining temperature information corresponding to a relevant pixel point in the target area; and calculating the temperature information corresponding to the target area according to the temperature information corresponding to each related pixel point. Based on the above configuration, by extracting the temperature information of the target region, the breathing rate of the target subject can be further determined.
In some possible embodiments, the determining the breathing rate of the target subject according to the extracted temperature information includes: sequencing the temperature information according to a time sequence to obtain a temperature sequence; denoising the temperature sequence to obtain a target temperature sequence; determining a respiration rate of the target subject based on the target temperature sequence. Based on the configuration, the temperature sequence is determined, and the noise reduction processing is carried out on the temperature sequence, so that the noise influencing the calculation of the respiratory rate can be filtered, and the obtained respiratory rate is more accurate.
In some possible embodiments, the determining the respiration rate of the target subject based on the target temperature sequence includes: determining each key point in the target temperature sequence, wherein the key points are all peak points or all valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; determining the respiration rate according to the time interval. Based on the above configuration, by calculating the time interval between adjacent key points, the breathing rate can be accurately determined.
According to a second aspect of the present disclosure, there is provided a respiration rate detection apparatus, the apparatus comprising: a thermal image acquisition module for acquiring at least two thermal images, wherein the thermal images are rendered based on temperature information of a preset area, the preset area includes a target object, the breathing area of the target object falls within the target area determined based on the preset area, and the temperature information varies periodically with the breathing of the target object; a temperature information extraction module for extracting, for each of the thermal images, temperature information of the target area in the thermal image; and a breathing rate determining module for determining the breathing rate of the target object according to the extracted temperature information.
In some possible embodiments, the respiration rate detection apparatus includes a thermal imaging device, a shooting area of the thermal imaging device is the preset area, the apparatus further includes a video stream processing module, configured to acquire a video stream, and a frame image in the video stream is rendered based on temperature information shot by the thermal imaging device; displaying the video stream, and marking the target area in a picture corresponding to the video stream; the thermal image acquisition module is configured to extract at least two target frame images from the video stream as the at least two thermal images, where the target frame images are frame images in the video stream when a preset requirement is met, and the preset requirement is that the target object enters the preset area and a breathing area of the target object falls into a target area of the target frame images.
In some possible embodiments, the apparatus further includes a first video stream processing triggering module, configured to monitor the temperature state of the preset area; determine, in a case that the temperature state of the preset area changes, whether a target object has entered the preset area; and if so, generate a video stream acquisition instruction, wherein the video stream acquisition instruction is used for triggering and executing the operation of acquiring the video stream.
In some possible embodiments, the apparatus further includes a second video stream processing triggering module, configured to monitor the preset area based on a preset sensor, and generate a video stream acquiring instruction in a case that a monitoring result indicates that a target object enters the preset area, where the video stream acquiring instruction is used to trigger execution of the operation of acquiring the video stream, and the preset sensor includes a visual sensor or an inductive sensor.
In some possible embodiments, the apparatus further includes a detection triggering module configured to trigger the operation of acquiring the at least two thermal images in response to a respiration rate detection triggering instruction, where the respiration rate detection triggering instruction is used to indicate that the preset requirement is met, and the preset requirement is that the target object enters the preset area and the respiration area of the target object falls into the target area.
In some possible embodiments, the apparatus further includes a detection triggering module, configured to perform a respiratory region prediction on a frame image in the video stream based on a neural network, so as to obtain a respiratory region prediction result; and under the condition that the coincidence degree of the respiratory region prediction result and the target region is higher than a preset threshold value, triggering to execute the operation of acquiring at least two thermal images.
In some possible embodiments, the neural network is obtained based on the following method: acquiring a sample thermal image and a label corresponding to the sample thermal image; the sample thermal image is obtained by rendering based on temperature information of a sample target object, and the label points to a breathing area of the sample target object; the breathing area is an oral-nasal area or a mask area; carrying out feature extraction on the sample thermal image to obtain a feature extraction result; predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result; and training the neural network according to the respiratory region prediction result and the label.
In some possible embodiments, the apparatus includes a feature extraction module configured to perform an initial feature extraction on the sample thermal image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction; filtering the first feature map based on salient features in the first feature information; extracting second characteristic information in the filtering result; and fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result.
In some possible embodiments, the temperature information extraction module is configured to, for a target region in each of the target frame images, determine temperature information corresponding to a relevant pixel point in the target region; and calculating the temperature information corresponding to the target area according to the temperature information corresponding to each related pixel point.
In some possible embodiments, the respiration rate determining module is configured to sort the temperature information according to a time sequence to obtain a temperature sequence; denoising the temperature sequence to obtain a target temperature sequence; determining a respiration rate of the target subject based on the target temperature sequence.
In some possible embodiments, the respiration rate determining module is configured to determine each key point in the target temperature sequence, where the key points are both peak points or both valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; determining the respiration rate according to the time interval.
According to a third aspect of the present disclosure, there is provided an electronic device comprising at least one processor, and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the respiration rate detection method of any one of the first aspects by executing the instructions stored by the memory.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having at least one instruction or at least one program stored therein, the at least one instruction or at least one program being loaded and executed by a processor to implement the respiration rate detection method according to any one of the first aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present specification; other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 shows a flow diagram of a method of breath rate detection according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a respiration rate detection scenario in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a target object pose position adjustment scenario in accordance with an embodiment of the present disclosure;
FIG. 4 shows a schematic flow diagram of a feature extraction method according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic flow chart for determining a target subject breathing rate from extracted temperature information according to an embodiment of the present disclosure;
FIG. 6 shows a schematic flow diagram for determining a target subject breathing rate based on extracted target temperature information, according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of a respiration rate detection device according to an embodiment of the disclosure;
FIG. 8 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 9 shows a block diagram of another electronic device in accordance with an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. The described embodiments are only a part of the embodiments of the present disclosure, not all of them. All other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments in the present disclosure fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of the present disclosure are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The embodiment of the disclosure provides a respiration rate detection method that can analyze the respiration rate of a photographed subject based on the temperature changes within a target area of thermal images captured by a thermal imaging device. The respiration rate is thus obtained without directly contacting the subject, meeting the practical need for contactless respiration rate measurement. The disclosed embodiments may be used in any specific scenario requiring contactless measurement of the respiration rate and are not limited to a particular scenario. For example, the method provided by the embodiments of the disclosure can be used for contactless respiration rate detection in scenarios requiring isolation, in scenarios with dense crowds, and in public places with special requirements.
The respiration rate detection method provided by the embodiments of the present disclosure may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the respiration rate detection method may be implemented by a processor invoking computer readable instructions stored in a memory. The following describes the respiration rate detection method according to an embodiment of the present disclosure, taking an electronic device as the execution subject.
Fig. 1 shows a schematic flow diagram of a respiration rate detection method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes:
s101: the method comprises the steps of obtaining at least two thermal images, wherein the thermal images are obtained through rendering based on temperature information of a preset area, a target object is included in the preset area, and a breathing area of the target object falls into the target area determined based on the preset area.
The thermal image in the embodiment of the present disclosure may be obtained by imaging with a thermal imaging device, where the shooting area of the thermal imaging device is the preset area. In one implementation scenario, a region may be designated as the preset area, and the thermal imaging device may be adjusted until the preset area is included in its field of view, for example until the field of view coincides with the preset area. The temperature information of each position point in the shooting area can then be acquired, and the thermal image is rendered based on that temperature information.
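As a rough illustration of this rendering step, the sketch below maps a per-pixel temperature matrix to an 8-bit image; the temperature bounds and the grayscale output are assumptions, since the disclosure does not fix a palette or value range.

```python
import numpy as np

def render_thermal_image(temp_map: np.ndarray, t_min: float = 20.0,
                         t_max: float = 40.0) -> np.ndarray:
    """Render a matrix of per-point temperatures (deg C) into an 8-bit image.
    t_min/t_max are assumed display bounds; real devices often apply a
    color palette rather than grayscale."""
    norm = np.clip((temp_map - t_min) / (t_max - t_min), 0.0, 1.0)
    return (norm * 255).astype(np.uint8)
```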
The disclosed embodiments detect the respiration rate based on the periodic variation of temperature in the thermal images, so at least two thermal images are required. Since the respiration rate is a physiological parameter, the target object is a living organism, such as a human; the following description takes human respiration rate detection as an example.
The embodiment of the present disclosure does not limit the control manner of the thermal imaging device. The thermal imaging device may be triggered in response to a preset instruction; for example, when a controller or an associated sensor activates the relevant control, the thermal imaging device starts shooting. In one embodiment, the thermal imaging device may also be triggered in response to sensed information; for example, when the ambient temperature rises to a preset threshold, the thermal imaging device may automatically start shooting. In another embodiment, the thermal imaging device may also be triggered on a timed basis.
Further, the embodiment of the disclosure also does not limit the photographing mode of the thermal imaging device, for example, the frame rate and the photographing definition mode of the thermal imaging device can be set according to the actual situation. In the case of being triggered to take a photograph, the thermal imaging apparatus may output the taken thermal image in the form of a video stream. In one embodiment, a video stream may be obtained, and a frame image in the video stream is rendered based on temperature information captured by the thermal imaging device. And displaying the video stream, and marking the target area in a picture corresponding to the video stream.
Specifically, referring to fig. 2, fig. 2 shows a schematic diagram of a respiration rate detection scenario according to an embodiment of the present disclosure. With its field of view coinciding with the preset area 1, the thermal imaging device can perform thermal imaging of the preset area 1 and output a corresponding video stream, where each frame image in the video stream is rendered based on the temperature information of the preset area 1. Based on the preset area 1, a key area 2 can be determined; when the target object enters the preset area 1 and its breathing area falls within the key area 2, the temperature change rule of the key area 2 reflects the breathing rate of the target object. According to the correspondence between the field of view of the thermal imaging device and the preset area 1, the region of the thermal image corresponding to the key area 2 can be uniquely determined; this region is the target area 3. The target area 3 designates the area of the frame image from which temperature change information is extracted; that is, the breathing rate of the target object can be determined by extracting the temperature information of the target area 3 and analyzing its change rule. In the embodiment of the present disclosure, there may be more than one target area; a single target area is taken as an example below, and the case of multiple target areas is based on the same inventive concept as the case of a single target area.
To help the target object adjust its position in time, the video stream may be displayed with the target area 3 marked in the display screen, so that the target object adjusts its posture position according to the display screen to ensure that its breathing area in the display screen falls within the target area 3; the posture position in this disclosure means a posture and/or a position. In the scene of fig. 2, the result of the thermal imaging device shooting the target object can be displayed, with the corresponding position of the target area 3 marked in the display screen, so that the target object can observe itself. Referring to fig. 3, which is a schematic diagram of a target object posture position adjustment scenario according to an embodiment of the disclosure: if the breathing area of the target object does not fall within the target area 3 (left side of fig. 3), the target object can adjust its posture position by itself until its breathing area falls within the target area 3 (right side of fig. 3). In this case, the embodiment of the present disclosure considers that the temperature change rule of the target area 3 accurately reflects the respiration rate of the target object.
That is to say, the embodiment of the present disclosure may extract at least two target frame images from the video stream as the at least two thermal images in step S101, where the target frame images are frame images in the video stream when a preset requirement is met, and the preset requirement is that the target object enters the preset area and a breathing area of the target object falls into a target area of the target frame images.
The breathing area of the target object may be an oral-nasal area or a mask area, and the oral-nasal area may be understood as an oral area and/or a nasal area, and in practical applications, the oral area and the nasal area may be respectively used as the breathing area, or the oral area and the nasal area may be combined to be used as one breathing area.
Based on the configuration, the frame image shot by the thermal imaging equipment can be displayed, the target area is marked in the display result, the target object can adjust the posture position of the target object according to the displayed picture, so that the breathing area of the target object can fall into the target area in the acquired thermal image, and the breathing rate obtained by analyzing the temperature information based on the target area is accurate.
In some possible embodiments, the temperature state of the preset area may also be monitored. When the temperature state of the preset area changes, it is determined whether a target object has entered the preset area. If so, a video stream acquisition instruction is generated, where the video stream acquisition instruction is used to trigger execution of the operation of acquiring the video stream. Specifically, the thermal imaging device may be started intermittently to monitor the temperature state of the preset area, so that it outputs the video stream only when a target object has entered the preset area and outputs no video stream otherwise. Compared with continuously outputting a video stream, intermittent shooting by the thermal imaging device significantly reduces resource consumption. That is, by multiplexing the thermal imaging device, it can be started intermittently while no target object has entered the preset area and output a video stream once a target object is determined to have entered, thereby greatly reducing the resource consumption of the thermal imaging device without additional hardware cost.
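A minimal sketch of this intermittent monitoring logic is given below. The callbacks read_mean_temperature, contains_person, and start_video_stream are hypothetical stand-ins for the device SDK, and the drift threshold is an assumed value.

```python
import time

TEMP_DRIFT_THRESHOLD = 1.5  # assumed change (deg C) counted as a "temperature state change"

def monitor_preset_area(read_mean_temperature, contains_person, start_video_stream,
                        poll_interval_s=2.0):
    """Intermittently poll the preset area; emit the video stream acquisition
    instruction only once a target object is judged to have entered."""
    baseline = read_mean_temperature()
    while True:
        time.sleep(poll_interval_s)  # intermittent start-up, not continuous shooting
        current = read_mean_temperature()
        if abs(current - baseline) > TEMP_DRIFT_THRESHOLD and contains_person():
            start_video_stream()     # the video stream acquisition instruction
            return
        baseline = current
```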
In some possible embodiments, the preset area may also be monitored based on a preset sensor, and a video stream acquisition instruction is generated when the monitoring result indicates that a target object has entered the preset area, where the video stream acquisition instruction is used to trigger execution of the operation of acquiring the video stream, and the preset sensor includes a visual sensor or an inductive sensor. The embodiments of the present disclosure do not limit the visual sensor, which may be, for example, a color sensor or a grayscale sensor; nor do they limit the inductive sensor, which may be, for example, an infrared sensor or a microwave sensor. By additionally introducing a visual sensor or an inductive sensor, the preset area can be monitored at low cost, and the thermal imaging device is triggered to start shooting and output a video stream only when a target object enters the preset area, thereby reducing the resource consumption of the thermal imaging device to the maximum extent.
In the embodiment of the present disclosure, the temperature analysis of the target area can accurately reflect the respiration rate only when a preset requirement is met, where the preset requirement may be that the target object has entered the preset area and the breathing area of the target object falls within the target area. In some embodiments of the present disclosure, the execution of step S101 may be triggered when the preset requirement is met. Based on the above configuration, the start frequency of the respiration rate detection can be reduced, thereby reducing the resource consumption of the respiration rate detection.
In one embodiment, the operation of acquiring at least two thermal images may be triggered in response to a respiration rate detection trigger instruction, which is used to indicate that the preset requirement is met. The embodiment of the present disclosure does not limit how the respiration rate detection trigger instruction is triggered; it may be triggered manually or by a machine. Taking the scene shown in fig. 3 as an example, after entering the preset region, the target object may adjust its position and posture according to the display screen, and once its breathing region falls within the target region of the screen, the target object may trigger the respiration rate detection instruction itself or ask another person to do so. In another embodiment, the screen in fig. 3 may also be displayed in the display interface of another user (a respiration rate detection operator), and the respiration rate detection trigger instruction is issued by that operator.
In another embodiment, breathing region prediction may be performed on the frame images in the video stream based on a neural network to obtain a breathing region prediction result. When the coincidence degree of the breathing region prediction result and the target region is higher than a preset threshold, the operation of acquiring at least two thermal images is triggered. The embodiment of the present disclosure does not limit the preset threshold, which may be set according to the actual situation. Based on this configuration, fully automatic respiration rate detection can be realized without manually controlling when the respiration rate detection method starts.
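One concrete reading of the "coincidence degree" is intersection-over-union between the predicted breathing region and the target region; that interpretation, the (x1, y1, x2, y2) box format, and the 0.7 threshold are assumptions made for this sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

PRESET_THRESHOLD = 0.7  # assumed value; the disclosure leaves it configurable

def should_acquire_thermal_images(predicted_breathing_box, target_region_box):
    """Trigger thermal image acquisition when the predicted breathing region
    sufficiently coincides with the target region."""
    return iou(predicted_breathing_box, target_region_box) > PRESET_THRESHOLD
```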
Because the frame images in the embodiment of the present disclosure are rendered based on temperature information, and such images have only emerged with the progress of thermal imaging and rendering technology, there are currently relatively few processing methods for them; it is difficult to automatically extract sufficient information from such images, so their analysis still relies heavily on manual work. Embodiments of the present disclosure may therefore make breathing region predictions for such images based on neural networks. A neural network is a deep learning model in the field of machine learning that simulates the structure and function of a biological neural network. Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching-based learning. Deep Learning (DL) is a branch of machine learning: an algorithm that attempts to perform high-level abstraction of data using multiple processing layers that contain complex structures or consist of multiple nonlinear transformations.
Specifically, a sample thermal image and a label corresponding to the sample thermal image can be acquired; the sample thermal image is rendered based on temperature information of a sample target object, the tag points to a breathing zone of the sample target object; the breathing area is an oral-nasal area or a mask area; carrying out feature extraction on the sample thermal image to obtain a feature extraction result; predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result; and training the neural network according to the respiratory region prediction result and the label. Based on the configuration, the trained neural network can be provided with the capability of predicting the breathing area.
The embodiments of the present disclosure do not limit the details of the training process described above. For example, the neural network may extract features layer by layer based on a feature pyramid, predict a breathing region according to the extracted feature information, and feedback-adjust the parameters of the neural network according to the difference between the predicted breathing region and the label. Since the sample thermal image is rendered according to temperature information, its definition may be inferior to that of a visible light image; in order to obtain sufficiently discriminative feature information, the embodiment of the present disclosure optimizes the feature extraction process.
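A minimal PyTorch sketch of one such training step is shown below, assuming the predictor regresses a bounding box toward the labeled oral-nasal or mask region; the smooth-L1 loss is an assumed choice, as the disclosure does not fix a loss function.

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               sample_thermal_images: torch.Tensor,
               label_boxes: torch.Tensor) -> float:
    """One step of training the breathing-region predictor: predict a region
    from the sample thermal images and feedback-adjust the parameters
    according to the difference between the prediction and the label."""
    optimizer.zero_grad()
    predicted_boxes = model(sample_thermal_images)  # respiratory region prediction
    loss = nn.functional.smooth_l1_loss(predicted_boxes, label_boxes)
    loss.backward()                                 # back-propagate the difference
    optimizer.step()
    return loss.item()
```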
In one embodiment, please refer to fig. 4, which illustrates a flowchart of a feature extraction method according to an embodiment of the present disclosure. The feature extraction includes:
S1, performing initial feature extraction on the sample thermal image to obtain a first feature map.
The embodiment of the present disclosure does not limit a specific method for extracting the initial feature, and for example, at least one stage of convolution processing may be performed on the image to obtain the first feature map. In the process of performing convolution processing, a plurality of image feature extraction results of different scales can be obtained, and the first feature map can be obtained by fusing the image feature extraction results of at least two different scales.
S2, performing composite feature extraction on the first feature map to obtain first feature information, where the composite feature extraction includes channel feature extraction.
In an embodiment, the performing the composite feature extraction on the first feature map to obtain the first feature information may include: performing image feature extraction on the first feature map to obtain a first extraction result; performing channel information extraction on the first feature map to obtain a second extraction result; and fusing the first extraction result and the second extraction result to obtain the first feature information. The embodiment of the present disclosure does not limit the method for extracting the image features of the first feature map; for example, at least one stage of convolution processing may be performed on the first feature map to obtain the first extraction result. The channel information extraction in the embodiment of the present disclosure may focus on mining the relationships between the channels of the first feature map; illustratively, it may be implemented by fusing the features of multiple channels. By fusing the first extraction result and the second extraction result, the composite feature extraction not only retains the low-order information of the first feature map itself but also sufficiently extracts the high-order inter-channel information, improving the information richness and expressiveness of the mined first feature information. At least one fusion method may be used in the composite feature extraction; the embodiment of the disclosure does not limit the fusion method, and at least one of dimensionality reduction, addition, multiplication, inner product, convolution, and averaging, or a combination thereof, may be used.
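The sketch below shows one way step S2's composite feature extraction could look in PyTorch, with a global-pooling branch standing in for channel information extraction and fusion by multiplication; the exact layer choices are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class CompositeFeatureExtraction(nn.Module):
    """S2: fuse image features (convolution branch, first extraction result)
    with channel information (pooled inter-channel branch, second extraction
    result) into the first feature information."""
    def __init__(self, channels: int):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU())
        self.channel_branch = nn.Sequential(   # mines relationships between channels
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        image_info = self.image_branch(first_feature_map)
        channel_info = self.channel_branch(first_feature_map)
        return image_info * channel_info       # fusion by multiplication
```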
S3, filtering the first feature map based on the salient features in the first feature information.
In the embodiment of the present disclosure, a more salient region and a less salient region in the first feature map may be determined according to the first feature information, and the information in the more salient region is filtered out to obtain the filtering result. The embodiment of the present disclosure does not limit the method for judging the salient features, which may be determined based on a neural network or on expert experience.
S4, extracting second feature information from the filtering result.
Specifically, the salient features in the filtering result can be suppressed to obtain a second feature map. Suppressing the salient features in the filtering result to obtain the second feature map includes: performing feature extraction on the filtering result to obtain target features, performing composite feature extraction on the target features to obtain target feature information, and filtering the target features based on the salient features in the target feature information to obtain the second feature map. When a preset stop condition has not been reached, the filtering result is updated according to the second feature map, and the step of suppressing the salient features in the filtering result to obtain a second feature map is repeated. When the stop condition is reached, all the acquired target feature information is taken as the second feature information.
S5, fusing the first feature information and the second feature information to obtain the feature extraction result of the image.
Based on the configuration, the significant features can be filtered layer by layer based on the hierarchical structure, composite feature extraction including channel information extraction is performed based on the filtering result, second feature information including a plurality of target feature information is obtained, information with discrimination is mined layer by layer, the effectiveness and discrimination of the second feature information are improved, and the richness of the information in the final feature extraction result is further improved. The feature extraction method in the embodiments of the present disclosure may be used to perform feature extraction on the sample thermal image, and may be used in each case where a neural network needs to be trained based on the sample thermal image in the embodiments of the present disclosure.
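Steps S3-S5 can then be sketched as a loop that repeatedly suppresses the most salient responses and re-extracts, collecting one piece of target feature information per level; the quantile-based masking rule and the concatenation fusion are assumptions chosen for illustration.

```python
import torch

def hierarchical_feature_extraction(first_feature_map: torch.Tensor,
                                    extractor: torch.nn.Module,
                                    num_levels: int = 3,
                                    quantile: float = 0.8) -> torch.Tensor:
    """S3-S5: filter salient features layer by layer, extract target feature
    information from each filtering result, and fuse everything at the end.
    `extractor` is a module like CompositeFeatureExtraction above."""
    feature_map = first_feature_map
    collected = []
    for _ in range(num_levels):                    # preset stop condition
        info = extractor(feature_map)
        collected.append(info)
        saliency = info.mean(dim=1, keepdim=True)  # per-location salience
        threshold = torch.quantile(saliency, quantile)
        feature_map = feature_map * (saliency < threshold).float()  # suppress salient parts
    return torch.cat(collected, dim=1)             # fuse first + second feature information
```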
S102, extracting, for each thermal image, the temperature information of the target area in the thermal image, wherein the temperature information varies periodically with the breathing of the target object.
As can be seen from the foregoing, the position in the thermal image corresponding to the key area of the preset area can be determined as the target area. The target area is related only to the position of the key area, not to the content of the individual pixel points in the thermal image, and it is uniquely determined according to the preset area.
In one embodiment, for the target area in each of the target frame images, the temperature information corresponding to the relevant pixel points in the target area may be determined, and the temperature information corresponding to the target area is then calculated according to the temperature information corresponding to each relevant pixel point. By extracting the temperature information of the target area, the breathing rate of the target object can be further determined.
The embodiment of the present disclosure does not limit the relevant pixel points. For example, every pixel point in the target region may serve as a relevant pixel point. In one embodiment, pixel filtering may further be performed based on the temperature information of each pixel point in the target region: pixel points whose temperature information does not meet a preset temperature requirement are filtered out, and the remaining pixel points are determined as the relevant pixel points. The embodiment of the present disclosure does not limit the preset temperature requirement; for example, an upper temperature limit, a lower temperature limit, or a temperature interval may be defined.
The embodiment of the present disclosure does not limit the specific method for calculating the temperature information corresponding to the target area. For example, the average value or a weighted average value of the temperature information corresponding to the relevant pixel points may be determined as the temperature information corresponding to the target area. In one embodiment, each weight may be inversely related to the distance of the corresponding relevant pixel point from the center of the target area: the closer a relevant pixel point is to the center of the target area, the higher its weight; the farther away, the lower its weight.
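A NumPy sketch of this extraction follows, assuming a boolean mask selecting the target area, assumed temperature bounds for the pixel filter, and one possible inverse-distance weighting:

```python
import numpy as np

def region_temperature(temp_map: np.ndarray, region_mask: np.ndarray,
                       t_min: float = 25.0, t_max: float = 42.0) -> float:
    """Temperature information of one target area in one thermal frame:
    filter pixels outside the preset temperature requirement, then average
    the rest with weights that decay with distance from the area centre."""
    ys, xs = np.nonzero(region_mask)
    temps = temp_map[ys, xs]
    keep = (temps >= t_min) & (temps <= t_max)   # drop non-relevant pixels
    ys, xs, temps = ys[keep], xs[keep], temps[keep]
    cy, cx = ys.mean(), xs.mean()                # centre of the remaining pixels
    weights = 1.0 / (1.0 + np.hypot(ys - cy, xs - cx))  # closer => higher weight
    return float(np.average(temps, weights=weights))
```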
S103, determining the breathing rate of the target object according to the extracted temperature information.
The embodiment of the disclosure considers that, when the breathing region of the target object falls within the target region, the breathing of the target object causes the temperature of the target region to vary periodically: when the target object inhales, the temperature of the target region decreases, and when the target object exhales, the temperature of the target region increases. By analyzing the periodic variation of the extracted temperature information, the breathing rate of the target object can be determined.
Referring to fig. 5, which shows a schematic flow chart of determining the breathing rate of the target object from the extracted temperature information according to an embodiment of the present disclosure, the process includes:
S1031, sorting the temperature information in time order to obtain a temperature sequence.
In one embodiment, the thermal images may be ranked according to the obtained timing sequence of the thermal images to obtain a thermal image sequence, and the temperature information of the target area in each thermal image may be extracted to obtain a temperature sequence. Taking the example of obtaining 200 thermal images in step S101, each thermal image includes the target object a, a temperature sequence including 200 pieces of temperature information can be obtained, and a temperature information change rule of the temperature sequence represents a respiration rate of the target object a. Of course, there may be a plurality of target regions in the preset region, and each target region may be used to accommodate a breathing region of a target object. That is to say, the embodiment of the present disclosure can perform the respiration rate detection on a plurality of target objects at the same time, and only needs to ensure that the respiration region of each target object falls into one unique corresponding target region. Accordingly, in the case that a plurality of target regions exist, the above operation may be performed for each target region, so as to obtain a corresponding temperature sequence, and further determine the breathing rate of the target object corresponding to the target region.
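Continuing the sketch above, the temperature sequence for one target area can be assembled by ordering frames in time; the frame objects with `timestamp` and `temp_map` attributes are hypothetical stand-ins for the decoded video stream.

```python
import numpy as np

def temperature_sequence(frames, region_mask) -> np.ndarray:
    """Sort thermal frames by capture time and extract one temperature value
    per frame for the given target area, using region_temperature() from the
    sketch above."""
    ordered = sorted(frames, key=lambda frame: frame.timestamp)
    return np.array([region_temperature(frame.temp_map, region_mask)
                     for frame in ordered])
```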
S1032, carrying out noise reduction processing on the temperature sequence to obtain a target temperature sequence.
In the embodiments of the present disclosure, a noise reduction strategy and a noise reduction mode may be determined, and the temperature sequence may then be processed in that mode according to the strategy to obtain the target temperature sequence.
The embodiments of the present disclosure do not limit the specific content of the noise reduction strategy or the noise reduction mode. Exemplary noise reduction strategies include at least one of: denoising based on a high-frequency threshold, denoising based on a low-frequency threshold, random-noise filtering, and a posteriori denoising. Exemplary noise reduction modes include at least one of: independent component analysis, Laplacian pyramids, band-pass filtering, wavelets, and Hamming windows.
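Taking the band-pass mode as one concrete possibility (the 0.1-0.85 Hz breathing band, i.e. roughly 6-51 breaths per minute, and the filter order are assumed values, not taken from the disclosure), a minimal SciPy sketch:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass_denoise(seq, fs, low_hz=0.1, high_hz=0.85, order=3):
        """Band-pass the temperature sequence to the plausible breathing
        band; seq is 1-D, sampled at fs thermal frames per second.
        Components outside the band are treated as noise."""
        nyq = fs / 2.0
        b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
        return filtfilt(b, a, np.asarray(seq, dtype=float))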
Taking a posteriori denoising as an example, a respiration rate verification condition and an empirical denoising parameter corresponding to the a posteriori denoising may be set, and the temperature sequence is denoised with the empirical parameter to obtain the target temperature sequence. The breathing rate of the target object is then determined from the target temperature sequence and checked against the verification condition. If the check passes, the noise reduction effect of the empirical parameter is deemed acceptable, and the next time step S1032 is executed, denoising may be performed directly with that parameter. The embodiments of the present disclosure do not limit how the empirical denoising parameter is determined; it may, for example, be obtained from expert experience.
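A loose sketch of such an a posteriori loop (the candidate parameters, the verification predicate, and respiration_rate_bpm, which is sketched under step S1033 below, are all assumptions):

    def a_posteriori_denoise(seq, fs, candidate_params, rate_ok):
        """Try empirical denoising parameters until the resulting rate
        passes the verification condition, then keep that parameter.
        candidate_params: iterable of (low_hz, high_hz) band guesses.
        rate_ok: predicate encoding the verification condition,
                 e.g. lambda bpm: 6.0 <= bpm <= 40.0."""
        for low_hz, high_hz in candidate_params:
            target_seq = bandpass_denoise(seq, fs, low_hz, high_hz)
            bpm = respiration_rate_bpm(target_seq, fs)
            if rate_ok(bpm):
                return target_seq, (low_hz, high_hz)  # reuse next time
        return None, None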
S1033, determining the breathing rate of the target object based on the target temperature sequence.
By determining the temperature sequence and performing noise reduction on it, noise that would interfere with determining the respiration rate can be filtered out, so that the obtained respiration rate is more accurate. Referring to fig. 6, there is shown a schematic flowchart, according to an embodiment of the present disclosure, of determining the breathing rate of the target object based on the target temperature sequence, including:
S10331, determining each key point in the target temperature sequence, wherein the key points are all peak points or all valley points.
S10332, for any two adjacent key points, determining a time interval between the two adjacent key points.
For N extracted key points, each pair of adjacent key points yields one time interval, so N-1 time intervals can be determined.
S10333, determining the respiration rate according to the time interval.
The embodiments of the present disclosure do not limit the specific method of determining the respiration rate from the time intervals. For example, the reciprocal of any one of the N-1 time intervals may be determined as the respiration rate, or the respiration rate may be determined from some or all of the N-1 time intervals; for instance, the reciprocal of the average of some or all of the N-1 time intervals may be determined as the respiration rate. By calculating the time intervals between adjacent key points, the embodiments of the present disclosure can determine the respiration rate accurately.
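A minimal sketch of steps S10331 to S10333 (SciPy's find_peaks stands in for whatever key-point detection the disclosure intends; peaks are used as the key points, and the mean-interval reciprocal is one of the options named above):

    import numpy as np
    from scipy.signal import find_peaks

    def respiration_rate_bpm(target_seq, fs):
        """Estimate the respiration rate from the denoised temperature
        sequence: detect peak key points, take the mean interval between
        adjacent peaks, and convert its reciprocal to breaths per minute."""
        peaks, _ = find_peaks(np.asarray(target_seq, dtype=float))
        if len(peaks) < 2:
            return 0.0                               # not enough key points
        intervals = np.diff(peaks) / fs              # N-1 intervals, in seconds
        return 60.0 / float(np.mean(intervals))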
According to the respiration rate detection method provided by the embodiments of the present disclosure, when the target object enters the preset area and its breathing area falls into the target area, the respiration rate of the target object can be determined by analyzing the temperature information of the target area. The whole process requires no contact with the target object and can therefore be used in a wide variety of scenarios. In hospital ward monitoring, a patient's respiration rate can be monitored without the patient wearing any equipment, which reduces the discomfort of patient monitoring and improves its quality, effect, and efficiency. In enclosed scenarios such as offices and office-building lobbies, the respiration rates of the people on site can be detected to judge whether any abnormality exists. In infant care scenarios, an infant's breathing can be monitored to guard against suffocation caused by food blocking the respiratory tract, and the infant's respiration rate can be analyzed in real time to judge its state of health. In scenarios with a high risk of infection, a remotely controlled thermal imaging device can photograph a target object that may be a source of infection, monitoring the target object's vital signs while avoiding contagion.
According to the respiration rate detection method provided by the embodiments of the present disclosure, the respiration rate of the target object can be determined by analyzing the temperature information of the target area in thermal images captured by a thermal imaging device. A respiration rate detection result is thus obtained without contacting the target object, which realizes non-contact detection, fills the blank in non-contact detection scenarios, and achieves good detection speed and accuracy.
It will be understood by those skilled in the art that, in the above methods, the order in which the steps are written does not imply a strict order of execution or impose any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
It can be understood that the above method embodiments of the present disclosure may be combined with one another to form combined embodiments without departing from the underlying principles and logic; details are omitted here for brevity.
Fig. 7 shows a block diagram of a respiration rate detection device according to an embodiment of the present disclosure. As shown in fig. 7, the above apparatus includes:
a thermal image acquisition module 10, configured to acquire at least two thermal images, where the thermal images are obtained by rendering based on temperature information of a preset area, the preset area includes a target object, and a respiratory area of the target object falls into a target area determined based on the preset area;
a temperature information extraction module 20, configured to extract, for each of the thermal images, temperature information of the target area in the thermal image, where the temperature information changes periodically along with respiration of the target object;
and a respiration rate determining module 30, configured to determine a respiration rate of the target object according to the extracted temperature information.
In some possible embodiments, the respiration rate detection apparatus includes a thermal imaging device whose shooting area is the preset area. The apparatus further includes a video stream processing module, configured to acquire a video stream, where the frame images in the video stream are rendered based on temperature information captured by the thermal imaging device, and to display the video stream while marking the target area in the picture corresponding to the video stream. The thermal image acquiring module is configured to extract at least two target frame images from the video stream as the at least two thermal images, where the target frame images are frame images captured while a preset requirement is met, namely that the target object has entered the preset area and the breathing area of the target object falls into the target area of the target frame images.
In some possible embodiments, the apparatus further includes a first video stream processing triggering module, configured to monitor the temperature state of the preset area; to determine, when the temperature state of the preset area changes, whether a target object may have entered the preset area; and, if so, to generate a video stream acquisition instruction used to trigger execution of the operation of acquiring the video stream.
In some possible embodiments, the apparatus further includes a second video stream processing triggering module, configured to monitor the preset area based on a preset sensor, and generate a video stream acquiring instruction in a case that a monitoring result indicates that a target object enters the preset area, where the video stream acquiring instruction is used to trigger execution of the operation of acquiring the video stream, and the preset sensor includes a visual sensor or an inductive sensor.
In some possible embodiments, the apparatus further includes a detection triggering module configured to trigger the operation of acquiring the at least two thermal images in response to a respiration rate detection triggering instruction, where the respiration rate detection triggering instruction is used to indicate that the preset requirement is met, and the preset requirement is that the target object enters the preset area and the respiration area of the target object falls into the target area.
In some possible embodiments, the apparatus further includes a detection triggering module, configured to perform a respiratory region prediction on a frame image in the video stream based on a neural network, so as to obtain a respiratory region prediction result; and triggering to execute the operation of acquiring at least two thermal images under the condition that the coincidence degree of the respiratory region prediction result and the target region is higher than a preset threshold value.
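As a sketch of one plausible "coincidence degree" (intersection-over-union over axis-aligned boxes is an assumption, as is the 0.7 threshold; the disclosure specifies neither):

    def coincidence_degree(pred_box, target_box):
        """IoU between the predicted breathing region and the target
        region; boxes are (x1, y1, x2, y2) tuples."""
        ix1 = max(pred_box[0], target_box[0])
        iy1 = max(pred_box[1], target_box[1])
        ix2 = min(pred_box[2], target_box[2])
        iy2 = min(pred_box[3], target_box[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

        def area(b):
            return (b[2] - b[0]) * (b[3] - b[1])

        union = area(pred_box) + area(target_box) - inter
        return inter / union if union > 0 else 0.0

    def should_trigger(pred_box, target_box, threshold=0.7):
        # Trigger thermal image acquisition once the overlap is high enough.
        return coincidence_degree(pred_box, target_box) > threshold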
In some possible embodiments, the neural network is obtained as follows: acquiring a sample thermal image and a label corresponding to the sample thermal image, where the sample thermal image is rendered based on temperature information of a sample target object, the label points to the breathing region of the sample target object, and the breathing region is an oral-nasal region or a mask region; performing feature extraction on the sample thermal image to obtain a feature extraction result; predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result; and training the neural network according to the respiratory region prediction result and the label.
In some possible embodiments, the apparatus includes a feature extraction module configured to perform initial feature extraction on the sample thermal image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction; filtering the first feature map based on salient features in the first feature information; extracting second characteristic information in the filtering result; and fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result.
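A loose PyTorch sketch of this feature-extraction flow (the layer widths, the use of channel attention as the "channel feature extraction", and the 0.5 saliency cutoff are all assumptions, not the patented architecture):

    import torch
    import torch.nn as nn

    class CompositeFeatureExtractor(nn.Module):
        def __init__(self, in_ch=1, ch=32):
            super().__init__()
            self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)   # initial extraction
            self.channel_att = nn.Sequential(                # channel feature extraction
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(ch, ch, 1),
                nn.Sigmoid(),
            )
            self.second = nn.Conv2d(ch, ch, 3, padding=1)    # second feature information
            self.fuse = nn.Conv2d(2 * ch, ch, 1)             # fusion of both branches

        def forward(self, x):
            fmap = torch.relu(self.stem(x))                  # first feature map
            att = self.channel_att(fmap)                     # per-channel saliency
            first_info = fmap * att                          # first feature information
            filtered = fmap * (att > 0.5).float()            # keep salient channels only
            second_info = torch.relu(self.second(filtered))
            return self.fuse(torch.cat([first_info, second_info], dim=1))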
In some possible embodiments, the temperature information extraction module is configured to, for a target area in each of the target frame images, determine temperature information corresponding to a relevant pixel point in the target area; and calculating the temperature information corresponding to the target area according to the temperature information corresponding to each related pixel point.
In some possible embodiments, the respiration rate determining module is configured to sort the temperature information according to a time sequence to obtain a temperature sequence; carrying out noise reduction processing on the temperature sequence to obtain a target temperature sequence; and determining the breathing rate of the target object based on the target temperature sequence.
In some possible embodiments, the respiration rate determining module is configured to determine each key point in the target temperature sequence, where the key points are peak points or valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; and determining the respiration rate according to the time interval.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The embodiment of the present disclosure also provides a computer-readable storage medium, where at least one instruction or at least one program is stored in the computer-readable storage medium, and the at least one instruction or the at least one program is loaded by a processor and executed to implement the method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the above method.
The electronic device may be provided as a terminal, server, or other form of device.
FIG. 8 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 8, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user as described above. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the above-mentioned communication component 816 further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 9 shows a block diagram of another electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 9, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A method of breath rate detection, the method comprising:
acquiring at least two thermal images, wherein the thermal images are obtained by rendering based on temperature information of a preset area, the preset area comprises a target object, and a breathing area of the target object falls into a target area determined based on the preset area;
for each of the thermal images, extracting temperature information of the target area in the thermal image, the temperature information generating periodic changes along with respiration of the target object;
and determining the breathing rate of the target object according to the extracted temperature information.
2. The method according to claim 1, wherein the respiration rate detection method is applied to a respiration rate detection apparatus, the respiration rate detection apparatus includes a thermal imaging device, a shooting area of the thermal imaging device is the preset area, and the method further includes:
acquiring a video stream, wherein a frame image in the video stream is obtained by rendering based on temperature information shot by the thermal imaging equipment;
displaying the video stream, and marking the target area in a picture corresponding to the video stream;
the acquiring at least two thermal images includes: extracting at least two target frame images from the video stream as the at least two thermal images, wherein the target frame images are frame images in the video stream when a preset requirement is met, and the preset requirement is that the target object enters the preset area and the breathing area of the target object falls into the target area of the target frame images.
3. The method of claim 2, wherein prior to said obtaining the video stream, the method further comprises:
monitoring the temperature state of the preset area;
under the condition that the temperature state of the preset area changes, judging whether a condition that a target object enters the preset area exists or not;
and if so, generating a video stream acquisition instruction, wherein the video stream acquisition instruction is used for triggering and executing the operation of acquiring the video stream.
4. The method of claim 2, wherein prior to said obtaining the video stream, the method further comprises:
monitoring the preset area based on a preset sensor, and generating a video stream acquisition instruction under the condition that a monitoring result indicates that a target object enters the preset area, wherein the video stream acquisition instruction is used for triggering and executing the operation of acquiring the video stream, and the preset sensor comprises a visual sensor or an induction sensor.
5. The method according to any one of claims 1 to 4, further comprising:
triggering the operation of acquiring the at least two thermal images to be executed in response to a respiration rate detection trigger instruction, wherein the respiration rate detection trigger instruction is used for indicating that the preset requirement is met, and the preset requirement is that the target object enters the preset area and the respiration area of the target object falls into the target area.
6. The method according to any one of claims 2 to 4, further comprising:
predicting a breathing region of a frame image in the video stream based on a neural network to obtain a prediction result of the breathing region;
and under the condition that the coincidence degree of the respiratory region prediction result and the target region is higher than a preset threshold value, triggering to execute the operation of acquiring at least two thermal images.
7. The method of claim 6, wherein the neural network is derived based on the following method:
acquiring a sample thermal image and a label corresponding to the sample thermal image; the sample thermal image is obtained by rendering based on temperature information of a sample target object, and the label points to a breathing area of the sample target object; the breathing area is an oral-nasal area or a mask area;
carrying out feature extraction on the sample thermal image to obtain a feature extraction result;
predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result;
and training the neural network according to the respiratory region prediction result and the label.
8. The method of claim 7, wherein said performing feature extraction on said sample thermal image to obtain a feature extraction result comprises:
performing initial feature extraction on the sample thermal image to obtain a first feature map;
performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction;
filtering the first feature map based on salient features in the first feature information;
extracting second characteristic information in the filtering result;
and fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result.
9. The method according to any one of claims 1 to 8, wherein the extracting temperature information corresponding to the target region in each of the target frame images includes:
for a target area in each target frame image, determining temperature information corresponding to a relevant pixel point in the target area;
and calculating the temperature information corresponding to the target area according to the temperature information corresponding to each related pixel point.
10. The method according to any one of claims 1 to 9, wherein determining the respiration rate of the target subject from the extracted temperature information comprises:
sequencing the temperature information according to a time sequence to obtain a temperature sequence;
denoising the temperature sequence to obtain a target temperature sequence;
determining a respiration rate of the target subject based on the target temperature sequence.
11. The method of claim 10, wherein determining the target subject's breathing rate based on the target temperature sequence comprises:
determining each key point in the target temperature sequence, wherein the key points are all peak points or all valley points;
for any two adjacent key points, determining the time interval between the two adjacent key points;
determining the respiration rate according to the time interval.
12. A respiration rate detection apparatus, the apparatus comprising:
the thermal image acquisition module is used for acquiring at least two thermal images, wherein the thermal images are obtained by rendering based on temperature information of a preset area, the preset area comprises a target object, and the breathing area of the target object falls into a target area determined based on the preset area;
the temperature information extraction module is used for extracting temperature information of the target area in each thermal image, and the temperature information generates periodic change along with the breathing of the target object;
and the breathing rate determining module is used for determining the breathing rate of the target object according to the extracted temperature information.
13. A computer-readable storage medium, having at least one instruction or at least one program stored thereon, which is loaded and executed by a processor to implement the respiration rate detection method according to any one of claims 1 to 11.
14. An electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the respiration rate detection method of any one of claims 1 to 11 by executing the instructions stored by the memory.
CN202110871818.0A 2021-07-30 2021-07-30 Respiration rate detection method and device, storage medium and electronic equipment Withdrawn CN113576451A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110871818.0A CN113576451A (en) 2021-07-30 2021-07-30 Respiration rate detection method and device, storage medium and electronic equipment
PCT/CN2022/096219 WO2023005403A1 (en) 2021-07-30 2022-05-31 Respiratory rate detection method and apparatus, and storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110871818.0A CN113576451A (en) 2021-07-30 2021-07-30 Respiration rate detection method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113576451A true CN113576451A (en) 2021-11-02

Family

ID=78252655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110871818.0A Withdrawn CN113576451A (en) 2021-07-30 2021-07-30 Respiration rate detection method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN113576451A (en)
WO (1) WO2023005403A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592817A (en) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Method and device for detecting respiration rate, storage medium and electronic equipment
CN113576452A (en) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Respiration rate detection method and device based on thermal imaging and electronic equipment
CN113591701A (en) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Respiration detection area determination method and device, storage medium and electronic equipment
CN113576451A (en) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Respiration rate detection method and device, storage medium and electronic equipment
CN113793300A (en) * 2021-08-19 2021-12-14 合肥工业大学 Non-contact type respiration rate detection method based on thermal infrared imager
CN113887474B (en) * 2021-10-15 2022-09-23 深圳市商汤科技有限公司 Respiration rate detection method and device, electronic device and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7027621B1 (en) * 2001-03-15 2006-04-11 Mikos, Ltd. Method and apparatus for operator condition monitoring and assessment
US20120289850A1 (en) * 2011-05-09 2012-11-15 Xerox Corporation Monitoring respiration with a thermal imaging system
WO2015030611A1 (en) * 2013-09-02 2015-03-05 Interag Method and apparatus for determining respiratory characteristics of an animal
KR20190060243A (en) * 2017-11-24 2019-06-03 연세대학교 산학협력단 Respiratory measurement system using thermovision camera
CN111568388A (en) * 2020-04-30 2020-08-25 清华大学 Non-contact mouth respiration detection device and method and storage medium
CN111839519A (en) * 2020-05-26 2020-10-30 合肥工业大学 Non-contact respiratory frequency monitoring method and system
CN112057074A (en) * 2020-07-21 2020-12-11 北京迈格威科技有限公司 Respiration rate measuring method, respiration rate measuring device, electronic equipment and computer storage medium
CN111898580A (en) * 2020-08-13 2020-11-06 上海交通大学 System, method and equipment for acquiring body temperature and respiration data of people wearing masks
CN112924035A (en) * 2021-01-27 2021-06-08 复旦大学附属中山医院 Body temperature and respiration rate extraction method based on thermal imaging sensor and application thereof
CN113128520A (en) * 2021-04-28 2021-07-16 北京市商汤科技开发有限公司 Image feature extraction method, target re-identification method, device and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023005403A1 (en) * 2021-07-30 2023-02-02 上海商汤智能科技有限公司 Respiratory rate detection method and apparatus, and storage medium and electronic device
WO2023005402A1 (en) * 2021-07-30 2023-02-02 上海商汤智能科技有限公司 Respiratory rate detection method and apparatus based on thermal imaging, and electronic device
CN114157807A (en) * 2021-11-29 2022-03-08 江苏宏智医疗科技有限公司 Image acquisition method and device and readable storage medium

Also Published As

Publication number Publication date
WO2023005403A1 (en) 2023-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056533

Country of ref document: HK

WW01 Invention patent application withdrawn after publication

Application publication date: 20211102