WO2021203644A1 - Temperature correction method, device and system - Google Patents

Temperature correction method, device and system

Info

Publication number
WO2021203644A1
WO2021203644A1 (PCT/CN2020/119468, CN2020119468W)
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
target
temperature
thermal imaging
person
Prior art date
Application number
PCT/CN2020/119468
Other languages
English (en)
French (fr)
Inventor
张耀威
周舒畅
胡晨
Original Assignee
北京迈格威科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京迈格威科技有限公司
Priority to US17/795,176 (published as US20230162397A1)
Publication of WO2021203644A1

Classifications

    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G01J 5/0025 Radiation pyrometry for sensing the radiation of moving living bodies
    • G01J 5/80 Calibration
    • G01K 13/223 Infrared clinical thermometers, e.g. tympanic
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/248 Analysis of motion using feature-based methods involving reference images or patches
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V 10/143 Sensing or illuminating at different wavelengths
    • G06V 40/161 Human faces: Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/178 Estimating age from face image; using age information for improving recognition
    • G01J 2005/0077 Imaging
    • G06T 2207/10024 Color image
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10048 Infrared image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30088 Skin; Dermal
    • G06T 2207/30201 Face

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a temperature correction method, device and system.
  • the commonly used body temperature detection technology is mainly infrared imaging, which captures a thermal imaging image of the target object.
  • the thermal imaging image is an image that records the temperature of objects such as persons.
  • the purpose of the present disclosure includes, for example, providing a temperature correction method, device, and system, which can effectively improve the accuracy of personnel temperature measurement.
  • the embodiment of the present disclosure provides a temperature correction method, including: obtaining a target reference image frame pair including a person, wherein the target reference image frame pair includes a target visible light image frame and a target thermal imaging image frame; converting the target visible light image frame into a depth image frame; inputting the depth image frame and the target thermal imaging image frame into an atmospheric inverse scattering model, wherein the atmospheric inverse scattering model is a neural network model obtained by pre-fitting the relationship between depth and the temperature correction value; and performing temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature.
  • the method further includes: determining the tracking information of the person in the target visible light image frame according to a tracking algorithm; wherein the tracking information includes positioning information and a tracking ID; the positioning information includes position information of the human face and/or human body in the target visible light image frame; the tracking ID is used to identify different persons in the target visible light image frame, and the tracking ID of the same person in different visible light image frames is the same.
  • the step of performing temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature includes: extracting, through the atmospheric inverse scattering model, the pixel depth value of the depth image frame and the pixel temperature value of the target thermal imaging image frame according to the positioning information of the target person, wherein the target person is determined based on the tracking ID; determining the temperature correction value corresponding to the pixel depth value according to the relationship between depth and the temperature correction value; and correcting the pixel temperature value according to the temperature correction value to obtain the target temperature of the target person.
  • the temperature correction value corresponding to the pixel depth value determined according to the relationship between the depth and the temperature correction value is the first temperature correction value.
  • the step of performing temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature includes: determining a first temperature correction value based on the depth image frame according to the relationship between depth and the temperature correction value in the atmospheric inverse scattering model; and performing temperature correction on the target thermal imaging image frame according to the first temperature correction value and a preset temperature correction factor to obtain the target temperature of the target person; wherein the preset temperature correction factors include: gender and age correction factors and/or time correction factors, and the target person is determined based on the tracking ID.
  • the step of performing temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature includes: extracting, through the atmospheric inverse scattering model, the pixel depth value of the depth image frame and the pixel temperature value of the target thermal imaging image frame according to the positioning information of the target person; determining the first temperature correction value corresponding to the pixel depth value according to the relationship between depth and the temperature correction value; and performing temperature correction on the target thermal imaging image frame according to the first temperature correction value and a preset temperature correction factor to obtain the target temperature of the target person; wherein the preset temperature correction factor includes: a gender and age correction factor and/or a time correction factor, and the target person is determined based on the tracking ID.
  • the step of performing temperature correction on the target thermal imaging image frame according to the first temperature correction value and a preset temperature correction factor to obtain the target temperature of the target person includes: determining the second temperature correction value of the target person according to the gender and age correction factor and a preset gender and age mapping table, wherein the gender and age mapping table records the temperature correction values corresponding to different genders and different age ranges; and/or determining the third temperature correction value of the target person according to the time correction factor and a preset time mapping table, wherein the time mapping table records the temperature correction values corresponding to different time intervals; weighting the first temperature correction value and the second temperature correction value and/or the third temperature correction value according to preset weights to obtain a target temperature correction value; and correcting the pixel temperature value in the positioning frame of the target thermal imaging image frame according to the target temperature correction value to obtain the target temperature of the target person, wherein the positioning frame is determined based on the positioning information of the target person.
  • the step of performing temperature correction on the target thermal imaging image frame according to the first temperature correction value and a preset temperature correction factor to obtain the target temperature of the target person includes: correcting the pixel temperature value in the positioning frame of the target thermal imaging image frame according to the first temperature correction value to obtain a first corrected thermal imaging image frame of the target thermal imaging image frame, wherein the positioning frame is determined based on the positioning information of the target person; and performing secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame according to the gender and age correction factor and/or the time correction factor to obtain the target temperature of the target person.
  • the step of performing secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame to obtain the target temperature of the target person includes: correcting the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the gender and age correction factor to obtain a second corrected thermal imaging image frame; correcting the pixel temperature values distributed in the positioning frame of the second corrected thermal imaging image frame according to the time correction factor to obtain a third corrected thermal imaging image frame; and determining the target temperature of the target person according to the corrected pixel temperature values distributed in the positioning frame of the third corrected thermal imaging image frame.
  • the step of performing secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame according to the gender and age correction factor includes: extracting the face feature of the target person in the target visible light image frame; identifying the gender and age information of the target person according to the face feature; and performing secondary correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the gender and age information and a preset gender and age mapping table, wherein the gender and age mapping table records the temperature correction values corresponding to different genders and different age ranges.
  • the step of performing secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame according to the time correction factor includes: acquiring the shooting time of the target reference image frame pair; and performing secondary correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the shooting time and a preset time mapping table, wherein the time mapping table records the temperature correction values corresponding to different time intervals.
  • the method further includes: determining a shooting distance between the person and the shooting device of the target reference image frame pair according to the target visible light image frame; and performing temperature correction on the target thermal imaging image frame based on the shooting distance and a preset temperature correction factor, wherein the preset temperature correction factor includes: a gender and age correction factor and/or a time correction factor.
  • the step of determining the shooting distance between the person and the shooting device of the target reference image frame pair according to the target visible light image frame includes: determining the pixel ratio of the face area of the target person in the target visible light image frame according to the position information corresponding to the face of the target person in the target visible light image frame, wherein the target person is determined based on the tracking ID; and determining the shooting distance between the target person and the shooting device of the target reference image frame pair according to the pixel ratio.
  • the step of obtaining a target reference image frame pair containing a person includes: performing image capture on a designated area by a dual-lens camera to obtain multiple pairs of original reference image frames, wherein the dual-lens camera includes a visible light camera and an infrared camera, and the original visible light image frame collected by the visible light camera and the original thermal imaging image frame collected by the infrared camera correspond to each other to form an original reference image frame pair; performing face detection on the original visible light image frames and determining, among the original visible light image frames, the target visible light image frame containing the person; and determining the original reference image frame pair corresponding to the target visible light image frame as the target reference image frame pair containing the person.
  • the step of converting the target visible light image frame into a depth image frame includes: inputting the target visible light image frame into a preset depth map conversion model, and converting the target visible light image frame into a depth image frame through the depth map conversion model.
  • the method further includes: determining a feverish person in the target visible light image frame according to the target temperature of each person and a preset temperature threshold, and obtaining the target tracking ID of the feverish person; obtaining, from other reference image frame pairs within a preset time from the target reference image frame pair, multiple visible light image frames to be tracked that carry the target tracking ID; and tracking the feverish person according to the shooting time and shooting location of the visible light image frames to be tracked.
  • the embodiment of the present disclosure also provides a temperature correction device, including: an image acquisition module configured to acquire a target reference image frame pair containing a person, wherein the target reference image frame pair includes a target visible light image frame and a target thermal imaging image frame; an image conversion module configured to convert the target visible light image frame into a depth image frame; an image input module configured to input the depth image frame and the target thermal imaging image frame into an atmospheric inverse scattering model, wherein the atmospheric inverse scattering model is a neural network model obtained by pre-fitting the relationship between depth and the temperature correction value; and a temperature correction module configured to perform temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature.
  • the embodiment of the present disclosure provides a temperature correction system, the system including: an image acquisition device, a processor, and a storage device; the image acquisition device is configured to collect a target reference image frame pair; the storage device stores a computer program which, when executed by the processor, performs the method as described herein.
  • the embodiment of the present disclosure provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and the computer program executes the steps of the method described herein when the computer program is run by a processor.
  • if the measured temperature is inaccurate, a healthy subject with a normal body temperature may be judged as a sick patient, or a sick patient whose actual body temperature exceeds the threshold may be judged as a healthy person.
  • the embodiments of the present disclosure provide a temperature correction method, device, and system. After a target visible light image frame and a target thermal imaging image frame containing a person are obtained, the target visible light image frame is first converted into a depth image frame, and then the depth image frame and the target thermal imaging image frame are input into the atmospheric inverse scattering model; the atmospheric inverse scattering model is a neural network model obtained by pre-fitting the relationship between depth and the temperature correction value; finally, the temperature of the target thermal imaging image frame is corrected based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature.
  • the above-mentioned method of temperature correction through the atmospheric inverse scattering model can ensure higher temperature detection efficiency while effectively using the relationship between depth and the temperature correction value to correct the temperature of persons, so as to obtain a more accurate result. Since the depth reflects the distance between the camera and the person, this method takes into account and corrects the influence of the distance on the temperature, which can effectively improve the accuracy of a person's temperature detection when the temperature is measured in a non-contact manner.
  • FIG. 1 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure
  • Figure 2 shows a flow chart of a temperature correction method provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a temperature correction method provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of another temperature correction method provided by an embodiment of the present disclosure
  • Fig. 5 shows a structural block diagram of a temperature correction device provided by an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a temperature correction method, device, and system.
  • the temperature correction method, device and system can be applied to human body temperature monitoring, detection and measurement in communities, stations, hospitals and other body temperature quarantine sites, and can also be applied to temperature monitoring, detection and measurement scenarios of objects such as water cups and mobile phones.
  • Referring to Fig. 1, an example electronic device 100 for implementing the temperature correction method, device, and system of the embodiments of the present disclosure will be described.
  • FIG. 1 shows a schematic structural diagram of an electronic device.
  • the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image acquisition device 110.
  • these components are interconnected through a bus system 112 and/or other forms of connection mechanisms (not shown).
  • the components and structure of the electronic device 100 shown in FIG. 1 are only exemplary and not restrictive. According to requirements, the electronic device may have only some of the components shown in FIG. 1, and may also have other components and structures. For example, there may be a wireless connection between two or more components in the electronic device 100, such as Bluetooth, Wi-Fi, or the Internet of Things.
  • the processor 102 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
  • the storage device 104 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory includes, for example, random access memory (RAM) and/or cache memory (cache).
  • the non-volatile memory includes, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions are stored on the computer-readable storage medium, and the processor 102 can run the program instructions to implement the client functions in the embodiments of the present disclosure described below (implemented by the processor) and/or other desired functions.
  • Various application programs and various data, such as data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
  • the input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, and a touch screen.
  • the output device 108 may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a speaker, and the like.
  • the image capture device 110 may capture images (for example, photos, videos, etc.) desired by the user, and store the captured images in the storage device 104 for use by other components.
  • the exemplary electronic equipment used to implement the temperature correction method, device, and system according to the embodiments of the present disclosure may be implemented as smart terminals such as smart phones, cameras, temperature measurement equipment, and tablet computers.
  • the method mainly includes the following steps S202 to S208:
  • Step S202 Obtain a target reference image frame pair containing a person; wherein the target reference image frame pair can be obtained by image collection of a person in the monitoring area by a dual-lens camera.
  • the dual-lens camera is a combined camera with a dual-camera structure.
  • the dual-lens camera in this embodiment is a combination of a visible light camera and a thermal imaging camera, where the visible light camera and the thermal imaging camera are combined into a whole by arranging up and down or side-to-side.
  • the target visible light image frame containing the person collected by the visible light camera and the target thermal imaging image frame containing the person collected by the thermal imaging camera correspond to each other, forming a target reference image frame pair.
  • Step S204 Convert the target visible light image frame into a depth image frame.
  • the target visible light image frame may be input to a preset depth map conversion model, and the target visible light image frame is converted into a depth image frame through the depth map conversion model.
  • the depth map conversion model outputs the depth image frame corresponding to the target visible light image frame according to the internal parameter matrix of the visible light camera and the pixel coordinates of the target visible light image frame.
  • the depth image obtained by the conversion uses the distance (depth) from the dual-lens camera to each point of the person as the pixel depth value, and can therefore reflect the shooting distance between the dual-lens camera and the person.
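  • As a rough illustration of step S204, the minimal sketch below wraps the conversion in code. The `depth_model` callable and the metre-valued output are assumptions for illustration; the disclosure only states that a preset depth map conversion model performs the conversion.

```python
import numpy as np

def visible_to_depth(visible_frame: np.ndarray, depth_model) -> np.ndarray:
    """Convert a target visible light image frame (H, W, 3) into a depth image frame (H, W).

    `depth_model` is a placeholder for the preset depth map conversion model;
    its exact form (network architecture, use of the camera intrinsic matrix)
    is an assumption here.
    """
    depth_frame = np.asarray(depth_model(visible_frame), dtype=np.float32)
    assert depth_frame.shape == visible_frame.shape[:2], "one depth value per pixel"
    return depth_frame  # pixel values are shooting distances (depths) to each point
```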
  • Step S206 Input the depth image frame and the target thermal imaging image frame into the atmospheric inverse scattering model; wherein the atmospheric inverse scattering model is a neural network model obtained by pre-fitting the relationship between the depth and the temperature correction value.
  • Step S208 Perform temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature.
  • the atmospheric inverse scattering model can extract the pixel depth value of the depth image frame and the pixel temperature value of the person in the target thermal imaging image frame, correct the pixel temperature value according to the pixel depth value and the relationship between depth and the temperature correction value, and output the corrected target temperature of the person.
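  • A minimal sketch of how step S208 could be applied once the depth-to-correction relationship has been fitted. The `fitted_correction(depth)` function stands in for the relationship learned by the atmospheric inverse scattering model; its analytic form is not given in the disclosure and is an assumption here.

```python
import numpy as np

def correct_thermal_frame(depth_frame: np.ndarray,
                          thermal_frame: np.ndarray,
                          fitted_correction) -> np.ndarray:
    """Apply a per-pixel, depth-dependent temperature correction.

    `fitted_correction` maps a pixel depth value to a temperature correction
    value (positive or negative); in the disclosure this relationship is
    pre-fitted by the atmospheric inverse scattering (neural network) model.
    """
    correction = fitted_correction(depth_frame)  # same shape as depth_frame
    return thermal_frame + correction            # corrected pixel temperature values

# Hypothetical usage: a linear depth-to-correction curve, purely for illustration.
example_fit = lambda depth_m: 0.1 * depth_m      # +0.1 degC per metre (assumed)
```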
  • In other words, after the target visible light image frame and the target thermal imaging image frame containing a person are obtained, the target visible light image frame is first converted into a depth image frame, and then the depth image frame and the target thermal imaging image frame are input into the atmospheric inverse scattering model; the atmospheric inverse scattering model is a neural network model obtained by pre-fitting the relationship between depth and the temperature correction value; finally, the target thermal imaging image frame is corrected for temperature based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature.
  • the above-mentioned method of temperature correction through the atmospheric inverse scattering model can ensure higher temperature detection efficiency while effectively using the relationship between depth and the temperature correction value to correct the temperature of persons, so as to obtain a more accurate result. Since the depth reflects the distance between the camera and the person, this method takes into account and corrects the influence of the distance on the temperature, so that the accuracy of a person's temperature detection can be effectively improved when the temperature is measured in a non-contact manner.
  • the above-mentioned target reference image frame pair containing persons may be obtained by the following method:
  • the dual-lens camera is used to collect images in the designated area to obtain multiple pairs of original reference image frames; among them, the original visible light image frame collected by the visible light camera in the dual-lens camera and the original thermal imaging image frame collected by the infrared camera correspond to each other, forming an original reference image frame pair.
  • Face detection on the original visible light image frames can use existing face detection methods, such as face detection methods based on convolutional neural networks (R-CNN, Fast R-CNN, Faster R-CNN, etc.). These methods can determine whether a person is included in the original visible light image frame, and predict the location information of the person in the target visible light image frame that contains the person.
  • the location information usually includes the position information of the human face and/or human body in the target visible light image frame.
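  • As a hedged sketch of how this face detection step could populate the positioning information, the snippet below assumes a generic `face_detector` callable that returns bounding boxes; the disclosure only requires that an existing detector (e.g. of the R-CNN family) is used.

```python
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixel coordinates

def select_target_frames(original_visible_frames: List[np.ndarray],
                         face_detector) -> List[Tuple[np.ndarray, List[Box]]]:
    """Keep only visible light frames that contain at least one detected face.

    `face_detector` is a placeholder for an existing face detection method
    (R-CNN, Fast R-CNN, Faster R-CNN, ...); it is assumed to return a list of
    face bounding boxes for a frame.
    """
    target_frames = []
    for frame in original_visible_frames:
        boxes = face_detector(frame)
        if boxes:                                 # frame contains a person
            target_frames.append((frame, boxes))  # frame plus its positioning info
    return target_frames
```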
  • Objects other than persons in the target thermal imaging image frame, such as plants and vehicles, all have temperature values. These objects will not only interfere with the correction of the persons' temperature, but also increase the amount of calculation. Therefore, optionally, when performing the above step S208, temperature correction is only performed on the area where the person in the target thermal imaging image frame is located, so as to reduce the amount of temperature correction data and improve the temperature correction efficiency.
  • a person tracking method is provided.
  • the positioning information includes the position information of the human face and/or the human body in the target visible light image frame; the tracking ID is used to identify different persons in the target visible light image frame, and the tracking ID of the same person in different visible light image frames is the same.
  • the target visible light image frame is usually multiple frames in a continuous image frame sequence, and the same person is included in the multiple target visible light image frames.
  • In general, the positioning information includes the position information of the person in the target visible light image frame; when the face, body, or other parts of the person in the target visible light image frame are occluded, the positioning information includes the position information of the unobstructed part of the human face or human body in the target visible light image frame.
  • Since the depth image frame is obtained by converting the target visible light image frame, after determining the tracking information of the person in the target visible light image frame, the tracking information of the person in the depth image frame can also be determined.
  • this embodiment can predetermine the positional correspondence between the visible light image frame and the thermal imaging image frame according to the spatial arrangement of the visible light camera and the thermal imaging camera and their camera parameters, and then, according to the position correspondence and the tracking information of the person in the target visible light image frame, accurately determine the tracking information of the person in the target thermal imaging image frame.
  • Based on the tracking information of the person determined by the above method, the step of performing temperature correction on the target thermal imaging image frame through the atmospheric inverse scattering model in step S208 can be implemented with reference to the following steps (1) to (3):
  • the target person is determined based on the tracking ID; a single person can be individually designated as the target person according to the tracking ID, or each person can be treated as a target person, with different target persons distinguished by their tracking IDs.
  • a positioning frame of the person in the target visible light image frame can be determined, and the positioning frame includes one or more position frames corresponding to the human face and/or the human body.
  • the atmospheric inverse scattering model is used to extract the pixel depth value in the location frame where the target person is located in the depth image frame and the pixel temperature value in the location frame where the target person is located in the target thermal imaging image frame.
  • the number of extraction of pixel depth values and pixel temperature values can be reduced, the extraction efficiency of the atmospheric inverse scattering model can be improved, and the temperature interference of other objects can be reduced, so as to improve the efficiency and accuracy of subsequent temperature correction.
  • the relationship between the depth and the temperature correction value can be expressed by a curve or a function.
  • Each depth corresponds to a matching temperature correction value, which may be positive or negative, and based on this, the temperature correction value corresponding to the pixel depth value is determined.
  • the temperature correction value corresponding to the depth value of each pixel can be determined according to the relationship between depth and the temperature correction value; alternatively, a representative pixel depth value can be determined based on the pixel depth values in the positioning frame where the target person is located, where the representative value is the average, mode, or median of the multiple pixel depth values, and the temperature correction value corresponding to the representative pixel depth value is then determined according to the relationship between depth and the temperature correction value.
  • When the temperature correction value is a temperature correction value corresponding to each pixel depth value, the pixel temperature value at the corresponding pixel position in the target thermal imaging image frame is corrected according to each temperature correction value, wherein the corresponding pixel position in the target thermal imaging image frame is obtained based on the position correspondence between the target visible light image frame and the target thermal imaging image frame.
  • When the temperature correction value is the one corresponding to the representative pixel depth value, a representative pixel temperature value is determined according to the pixel temperature values in the positioning frame where the target person is located, and the representative pixel temperature value is then corrected according to the temperature correction value.
  • During correction, the temperature correction value can be added to the pixel temperature value. When the temperature correction value is negative, the corrected target temperature is decreased relative to the uncorrected pixel temperature value; when the temperature correction value is positive, the corrected target temperature is increased relative to the uncorrected pixel temperature value.
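  • The representative-value variant described above can be sketched as follows. The choice of the median as the representative depth statistic, the mean as the representative temperature statistic, and the `fitted_correction` callable are assumptions for illustration; the disclosure allows the average, mode, or median.

```python
import numpy as np

def correct_person_temperature(depth_frame: np.ndarray,
                               thermal_frame: np.ndarray,
                               box,                       # (x, y, w, h) positioning frame
                               fitted_correction) -> float:
    """Correct a target person's temperature using a representative pixel depth value."""
    x, y, w, h = box
    depth_roi = depth_frame[y:y + h, x:x + w]
    thermal_roi = thermal_frame[y:y + h, x:x + w]

    depth_repr = float(np.median(depth_roi))   # average, mode or median per the text
    temp_repr = float(np.mean(thermal_roi))    # representative pixel temperature value
    correction = fitted_correction(depth_repr) # may be positive or negative
    return temp_repr + correction              # corrected target temperature
```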
  • the temperature correction method provided by this embodiment can perform temperature correction only on the image area where the person is located, based on the positioning information. This reduces the interference of plants, vehicles and other objects on the temperature correction of the person, so as to improve the accuracy of the temperature, reduce the amount of calculation, and improve the efficiency of temperature correction.
  • the temperature value of the person in the target thermal imaging image frame may also be corrected by combining other temperature correction factors on the basis of the depth/distance correction factor, as shown in the following steps (1) and (2):
  • (1) Determine the first temperature correction value based on the depth image frame based on the relationship between the depth and the temperature correction value in the atmospheric inverse scattering model.
  • the pixel depth value and pixel temperature value can be extracted through the atmospheric inverse scattering model, and then the first temperature correction value corresponding to the pixel depth value can be determined according to the relationship between depth and the temperature correction value; for the specific implementation, refer to the above steps (1) and (2), which are not repeated here.
  • (2) Perform temperature correction on the target thermal imaging image frame according to the first temperature correction value and a preset temperature correction factor to obtain the target temperature of the target person. The preset temperature correction factors may include, but are not limited to, gender and age correction factors and/or time correction factors; gender and age correction factors are temperature correction factors set for people of different age groups and genders, and time correction factors are temperature correction factors set for the differences in human body temperature at different times of the day.
  • Step 1 Correct the pixel temperature value in the positioning frame of the target thermal imaging image frame according to the first temperature correction value to obtain the first corrected thermal imaging image frame of the target thermal imaging image frame; the positioning frame is determined based on the positioning information of the target person.
  • Step 2 Perform a secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame according to the gender and age correction factor and/or the time correction factor to obtain the target temperature of the target person.
  • the above gender and age correction factors and time correction factors can be used alternatively or at the same time.
  • this embodiment does not limit the correction order of the gender and age correction factor and the time correction factor. For example, referring to Fig. 3, the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame can first be corrected according to the gender and age correction factor to obtain the second corrected thermal imaging image frame; then the pixel temperature values distributed in the positioning frame of the second corrected thermal imaging image frame are corrected according to the time correction factor to obtain the third corrected thermal imaging image frame.
  • the target temperature of the target person may be determined according to the corrected pixel temperature values distributed in the positioning frame of the third corrected thermal imaging image frame; for example, the average of the corrected pixel temperature values distributed in the positioning frame of the third corrected thermal imaging image frame is used as the target temperature of the target person.
  • the step of performing secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame according to the gender and age correction factor includes: (i) extracting the facial features of the target person in the target visible light image frame and recognizing the gender and age information of the target person according to the facial features; for example, a deep learning network model is used to extract features from the position box corresponding to the person's face in the target visible light image frame to obtain the facial features, and the gender and age information of the target person is then recognized according to the facial features.
  • the deep learning network model used to identify the gender and age information of a person is itself an existing method, and the principle will not be elaborated here.
  • (ii) performing secondary correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the gender and age information and a preset gender and age mapping table. The gender and age mapping table can be as shown in Table 1 below, which records the temperature correction values corresponding to different genders and different age ranges. For example, there are two temperature correction values corresponding to the age range of 3-10 years old: the mapping value for boys is +0.5 and the mapping value for girls is +0.8. Taking the boys' mapping value (+0.5) as an example, it means that the temperature of a boy in the first corrected thermal imaging image frame is increased by 0.5 degrees Celsius (°C).
  • Table 1 Gender and age mapping table
  • the unit of the mapping value is Celsius (°C).
  • the step of performing secondary correction on the pixel temperature value in the positioning frame of the first corrected thermal imaging image frame according to the time correction factor includes: obtaining the shooting time of the target reference image frame pair; and performing secondary correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the shooting time and a preset time mapping table. The time mapping table can be as shown in Table 2 below, which records the temperature correction values corresponding to different time intervals.
  • For example, if the shooting time of the target reference image frame pair is 8 o'clock, it falls in the 6-10 o'clock time interval and the corresponding temperature correction value is -0.5, which means that the pixel temperature values distributed in the positioning frame of the person in the first corrected thermal imaging image frame are reduced by 0.5 degrees Celsius.
  • the unit of the temperature correction value is degrees Celsius (°C).
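  • A small sketch of the two lookup-table corrections, using only the example values quoted in the text (boys 3-10 years: +0.5 °C, girls 3-10 years: +0.8 °C, 6-10 o'clock: -0.5 °C); all other table entries are omitted rather than invented.

```python
# Partial gender/age mapping table (Table 1) and time mapping table (Table 2);
# only the entries quoted in the text are filled in, the rest are left out.
GENDER_AGE_TABLE = {
    ("male", (3, 10)): +0.5,    # degrees Celsius
    ("female", (3, 10)): +0.8,  # degrees Celsius
}
TIME_TABLE = {
    (6, 10): -0.5,              # degrees Celsius, 6-10 o'clock interval
}

def lookup_gender_age(gender: str, age: int) -> float:
    for (g, (lo, hi)), value in GENDER_AGE_TABLE.items():
        if g == gender and lo <= age <= hi:
            return value
    return 0.0  # no correction when the partial table has no matching entry

def lookup_time(hour: int) -> float:
    for (lo, hi), value in TIME_TABLE.items():
        if lo <= hour < hi:
            return value
    return 0.0

# Example: an 8-year-old boy imaged at 8 o'clock gets +0.5 then -0.5 applied
# to the pixel temperature values in his positioning frame.
```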
  • the above-mentioned temperature correction method provided by this embodiment corrects the temperature of the person in the target thermal imaging image frame through an initial correction based on distance followed by a secondary correction based on the preset temperature correction factor, which can effectively improve the accuracy of the person's temperature detection.
  • step (2) can also be implemented with reference to the temperature correction method shown in FIG. 4, and this method may include the following steps 1) to 3):
  • Step 1) Determine the second temperature correction value of the target person according to the gender and age correction factor and the preset gender and age mapping table; and/or determine the third temperature correction value of the target person according to the time correction factor and the preset time mapping table;
  • the gender and age mapping table can refer to Table 1 above
  • the time mapping table can refer to Table 2 above.
  • Step 2) weighting the first temperature correction value, the second temperature correction value and/or the third temperature correction value according to the preset weight to obtain the target temperature correction value.
  • the target temperature correction value can be obtained by referring to the following formula:
  • ΔP = ω1 × ΔP1 + ω2 × ΔP2 + ω3 × ΔP3
  • where ΔP is the target temperature correction value, ΔP1 is the first temperature correction value and ω1 its weight, ΔP2 is the second temperature correction value and ω2 its weight, and ΔP3 is the third temperature correction value and ω3 its weight. When ω2 and ω3 are 0 and ω1 is 1, it means that in the current body temperature detection scene the temperature is corrected only based on the relationship between depth and the temperature correction value.
  • the above method of obtaining the target temperature correction value based on the weight can not only improve the accuracy of body temperature detection, but also better adapt to the actual body temperature detection scene, so as to appropriately reduce the amount of body temperature calculation.
  • Step 3) Correct the pixel temperature value in the positioning frame of the target thermal imaging image frame according to the target temperature correction value. For example, referring to FIG. 4, the pixel temperature value P0 in the positioning frame is added to the target temperature correction value ΔP to obtain the corrected target temperature P of the target person.
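  • The weighted combination of step 2) and the final addition of step 3) can be written compactly as below; the particular weights and default values are placeholders, not values given by the disclosure.

```python
def target_temperature(p0: float,
                       dp1: float, dp2: float = 0.0, dp3: float = 0.0,
                       w1: float = 1.0, w2: float = 0.0, w3: float = 0.0) -> float:
    """P = P0 + dP, where dP = w1*dP1 + w2*dP2 + w3*dP3.

    With w2 = w3 = 0 and w1 = 1 the temperature is corrected only from the
    depth/temperature-correction relationship, as noted in the text.
    """
    dp = w1 * dp1 + w2 * dp2 + w3 * dp3  # target temperature correction value
    return p0 + dp                        # corrected target temperature P
```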
  • By flexibly selecting the depth/distance correction factor, the gender and age correction factor, and the time correction factor, the temperature correction method can better adapt to the current body temperature detection scene, which helps improve the accuracy of body temperature detection and also controls, to a certain extent, the amount of calculation during body temperature correction.
  • the embodiment of the present disclosure may also provide another temperature correction method, as shown in the following steps A to C:
  • Step A Obtain a target reference image frame pair including a person; wherein the target reference image frame pair includes a target visible light image frame and a target thermal imaging image frame.
  • Step B Determine the shooting distance between the person and the shooting device of the target reference image frame pair according to the target visible light image frame.
  • the shooting distance will affect the accuracy of the temperature of the person reflected in the target thermal imaging image frame.
  • the temperature of the person in the target thermal imaging image frame can be corrected based on the shooting distance to obtain a more accurate temperature of the person, thereby well solving the above-mentioned problem.
  • the shooting distance of the target visible light image frame is generally the same as the shooting distance of the target thermal imaging image frame (for example, the shooting distances are basically the same when a dual-lens camera is used), or the shooting distance of the target thermal imaging image frame can be determined based on the correspondence between the visible light image frame and the thermal imaging image frame. Since the quality of the target visible light image frame is generally good and can accurately describe the target's facial/human body characteristics, age, gender and other related information, the shooting distance can be determined more accurately based on the target visible light image frame.
  • Step C Perform temperature correction on the target thermal imaging image frame based on the shooting distance; alternatively, perform temperature correction on the target thermal imaging image frame based on the shooting distance and one or more preset temperature correction factors.
  • the preset temperature correction factors include: gender and age correction factors and/or time correction factors.
  • the specific implementation of the temperature correction can refer to the above-mentioned temperature correction method based on the depth image frame, which will not be further described here.
  • the temperature correction method in this embodiment also takes into account and corrects the influence of distance on temperature; at the same time, since the image quality of the visible light image frame is usually higher, the shooting distance determined in this way can have higher accuracy, and the temperature of the person can be corrected by effectively using this more accurate shooting distance. Therefore, the present disclosure can effectively improve the accuracy of personnel temperature detection when temperature measurement is performed in a non-contact manner.
  • Taking into account the visual law that nearby objects appear large and distant objects appear small, and that at distances on the order of meters the sizes of different persons' faces and bodies are approximately equal, an implementation of determining the shooting distance is provided with reference to the following steps 1) to 2):
  • Step 1) For the target person's face area, first determine the positioning frame of the face area according to the position information corresponding to the face, and count the first number of pixels in the positioning frame of the face area; then count the second number of pixels in the overall target visible light image frame; and finally use the ratio of the first number of pixels to the second number of pixels as the pixel ratio of the target person's face area in the target visible light image frame.
  • Step 2) The pixel ratio reflects the size of the human face in the image; combined with the visual law that nearby objects appear large and distant objects appear small, the person's shooting distance can be determined according to the pixel ratio, as sketched below.
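  • A sketch of the pixel-ratio computation in steps 1) and 2); the `ratio_to_distance` mapping is a hypothetical calibration curve, since the disclosure does not give its form.

```python
import numpy as np

def face_pixel_ratio(visible_frame: np.ndarray, face_box) -> float:
    """Ratio of face-area pixels (first count) to whole-frame pixels (second count)."""
    x, y, w, h = face_box
    first_count = w * h                                  # pixels inside the face positioning frame
    second_count = visible_frame.shape[0] * visible_frame.shape[1]
    return first_count / second_count

def shooting_distance_from_ratio(ratio: float, ratio_to_distance) -> float:
    """`ratio_to_distance` is an assumed monotonically decreasing calibration:
    the larger the face appears (near-large, far-small), the smaller the distance."""
    return ratio_to_distance(ratio)
```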
  • a second embodiment of determining the shooting distance can also be provided here, referring to the following steps 1 to 2:
  • Step 1 Determine the location information of the target person in the depth image according to the location information of the target person in the target visible light image frame.
  • the depth image frame is converted from the target visible light image frame.
  • the positioning information in the target visible light image frame can be directly determined as the positioning information in the depth image frame.
  • Step 2 Collect the pixel depth value in the positioning frame of the target person in the depth image frame, and determine the shooting distance of the target person according to the pixel depth value; wherein the positioning frame is determined based on the positioning information of the target person.
  • the depth image frame refers to an image in which the distance (depth) from the dual-lens camera to each point of the person is used as the pixel depth value, and thus the shooting distance of the person can be determined according to the pixel depth value.
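  • The second distance-estimation embodiment reduces to reading depths inside the positioning frame; the use of the median as the summary statistic is an assumption for illustration.

```python
import numpy as np

def shooting_distance_from_depth(depth_frame: np.ndarray, box) -> float:
    """Shooting distance of the target person from the pixel depth values in the positioning frame."""
    x, y, w, h = box
    depth_roi = depth_frame[y:y + h, x:x + w]
    return float(np.median(depth_roi))  # pixel depth values are camera-to-person distances
```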
  • this embodiment also provides an example of using the corrected temperature to track heat-producing persons.
  • the heat-producing persons can be tracked based on the tracking ID in the aforementioned tracking information; the tracking method for feverish persons can include the following three steps:
  • The first step is to determine the feverish persons in the target visible light image frame according to the target temperature of each person and a preset temperature threshold, and to obtain the target tracking ID of each feverish person. A target visible light image frame may contain one or more feverish persons, and each feverish person is tracked separately according to the unique target tracking ID corresponding to that person.
  • the second step is to obtain multiple visible light image frames to be tracked with target tracking IDs from other reference image frame pairs within a preset time from the target reference image frame pair.
  • the shooting time of the target reference image frame pair is used as the reference time
  • the preset time may be a period of time before the reference time, a period of time after the reference time, or a period of time including the reference time.
  • the third step is to track the heat-producing person according to the shooting time and shooting location of the visible light image frame to be tracked.
  • the corresponding visible light image frames to be tracked can be historical visible light image frames and/or the latest visible light image frame; in this case, the historical movement trajectory of the feverish person can be determined according to the shooting times and shooting locations of the historical visible light image frames, and the possible future movement trajectory of the feverish person can be predicted; alternatively, the latest location of the feverish person can be determined according to the shooting location of the latest visible light image frame.
  • at least one of the historical motion trajectory, the possible future motion trajectory, and the latest position can be used as the tracking information of the heat-generating person.
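  • A hedged sketch of the three tracking steps; the observation record layout (`tracking_id`, `shooting_time`, `shooting_location`, `target_temperature`) is an assumed data structure, not one defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class PersonObservation:
    tracking_id: int
    shooting_time: float                  # e.g. a UNIX timestamp
    shooting_location: Tuple[float, float]
    target_temperature: float             # corrected temperature, degrees Celsius

def track_feverish(observations: List[PersonObservation],
                   reference_time: float,
                   window_seconds: float,
                   temperature_threshold: float) -> Dict[int, list]:
    """Group observations of feverish persons within a preset time window
    and order them by shooting time to form a (historical) movement trajectory."""
    feverish_ids = {o.tracking_id for o in observations
                    if o.target_temperature > temperature_threshold}
    trajectories = {}
    for pid in feverish_ids:
        frames = [o for o in observations
                  if o.tracking_id == pid
                  and abs(o.shooting_time - reference_time) <= window_seconds]
        frames.sort(key=lambda o: o.shooting_time)
        trajectories[pid] = [(o.shooting_time, o.shooting_location) for o in frames]
    return trajectories
```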
  • the temperature correction method provided by the above embodiment takes into account the influence of distance on temperature measurement, and corrects the measured temperature according to the distance, so as to effectively improve the accuracy of personnel temperature detection during non-contact temperature measurement.
  • the device includes:
  • the image acquisition module 502 is configured to acquire a target reference image frame pair including a person; wherein the target reference image frame pair includes a target visible light image frame and a target thermal imaging image frame;
  • the image conversion module 504 is configured to convert the target visible light image frame into a depth image frame
  • the image input module 506 is configured to input the depth image frame and the target thermal imaging image frame into the atmospheric inverse scattering model; wherein the atmospheric inverse scattering model is a neural network model obtained by pre-fitting the relationship between the depth and the temperature correction value;
  • the first temperature correction module 508 is configured to perform temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain the target temperature.
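  • A minimal sketch of how these modules could be wired together is given below; `DepthMapConverter`-style model names and the two-argument call are placeholders standing in for the depth map conversion model and the atmospheric inverse scattering model, not the disclosure's actual implementations:

```python
import torch

class TemperatureCorrectionDevice:
    """Sketch of the module wiring: image conversion (504), image input (506)
    and first temperature correction (508) around two pre-trained networks."""

    def __init__(self, depth_converter: torch.nn.Module, backscatter_model: torch.nn.Module):
        self.depth_converter = depth_converter      # plays the role of module 504
        self.backscatter_model = backscatter_model  # atmospheric inverse scattering model

    def __call__(self, visible_frame: torch.Tensor, thermal_frame: torch.Tensor) -> torch.Tensor:
        depth_frame = self.depth_converter(visible_frame)           # visible light -> depth
        return self.backscatter_model(depth_frame, thermal_frame)   # corrected temperatures
```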
  • compared with the prior art, in which the temperature of a person is determined directly from the detected target thermal imaging image frame, the above-mentioned temperature correction device performs temperature correction through the atmospheric inverse scattering model, which ensures high temperature detection efficiency while also effectively using the relationship between depth and the temperature correction value to correct the person's temperature, so as to obtain a more accurate result. Since the depth reflects the distance between the camera and the person, this approach takes into account and corrects the influence of distance on temperature, which can effectively improve the accuracy of person temperature detection during non-contact temperature measurement.
  • the above-mentioned temperature correction device further includes a tracking module (not shown in the figure). The tracking module is configured to determine the tracking information of the persons in the target visible light image frame according to a tracking algorithm, where the tracking information includes positioning information and a tracking ID; the positioning information includes the position information of the face and/or human body in the target visible light image frame; the tracking ID is used to identify different persons in the target visible light image frame, and the same person has the same tracking ID in different visible light image frames.
  • the above-mentioned first temperature correction module 508 is further configured to extract, through the atmospheric inverse scattering model, the pixel depth values of the depth image frame and the pixel temperature values of the target thermal imaging image frame according to the positioning information of the target person, where the target person is determined based on the tracking ID; determine the temperature correction value corresponding to the pixel depth values according to the relationship between depth and the temperature correction value; and correct the pixel temperature values according to the temperature correction value to obtain the target temperature of the target person.
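  • The per-person correction just described can be pictured with the following sketch, where `correction_curve` is a stand-in for the depth / temperature-correction relationship fitted by the atmospheric inverse scattering model, and the averaging at the end is only one possible way to reduce the corrected pixels to a single target temperature:

```python
import numpy as np

def correct_person_temperature(depth_frame, thermal_frame, box, correction_curve):
    """Correct the pixel temperature values inside a target person's positioning frame.

    correction_curve: vectorised callable mapping pixel depth values to temperature
    correction values (positive or negative).
    """
    x1, y1, x2, y2 = box
    depths = depth_frame[y1:y2, x1:x2]
    temps = thermal_frame[y1:y2, x1:x2]
    corrected = temps + correction_curve(depths)   # apply the depth-dependent correction
    return float(corrected.mean())                 # e.g. mean corrected value as the target temperature
```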
  • the above-mentioned first temperature correction module 508 is further configured to determine a first temperature correction value based on the depth image frame according to the relationship between depth and the temperature correction value in the atmospheric inverse scattering model, and to perform temperature correction on the target thermal imaging image frame according to the first temperature correction value and preset temperature correction factors to obtain the target temperature of the target person; the preset temperature correction factors include a gender-and-age correction factor and/or a time correction factor, and the target person is determined based on the tracking ID.
  • the above-mentioned first temperature correction module 508 is further configured to determine a second temperature correction value of the target person according to the gender-and-age correction factor and a preset gender-and-age mapping table, where the gender-and-age mapping table records temperature correction values corresponding to different genders and different age ranges; and/or determine a third temperature correction value of the target person according to the time correction factor and a preset time mapping table, where the time mapping table records temperature correction values corresponding to different time intervals; weight the first temperature correction value with the second temperature correction value and/or the third temperature correction value according to preset weights to obtain a target temperature correction value; and correct the pixel temperature values in the positioning frame of the target thermal imaging image frame according to the target temperature correction value to obtain the target temperature of the target person, where the positioning frame is determined based on the positioning information of the target person.
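  • The weighting of the three correction values can be written compactly as ΔP = λ₁ΔP₁ + λ₂ΔP₂ + λ₃ΔP₃ with λ₁ + λ₂ + λ₃ = 1; a direct transcription of that formula (weights chosen per deployment, with unused factors weighted zero) is sketched below:

```python
def target_correction_value(dp1, dp2=0.0, dp3=0.0, weights=(1.0, 0.0, 0.0)):
    """Blend the depth-based correction (dp1) with the gender/age correction (dp2)
    and the time-of-day correction (dp3) using preset weights that sum to 1."""
    w1, w2, w3 = weights
    assert abs(w1 + w2 + w3 - 1.0) < 1e-6, "weights are expected to sum to 1"
    return w1 * dp1 + w2 * dp2 + w3 * dp3
```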
  • the above-mentioned first temperature correction module 508 is further configured to correct the pixel temperature values in the positioning frame of the target thermal imaging image frame according to the first temperature correction value to obtain a first corrected thermal imaging image frame of the target thermal imaging image frame, where the positioning frame is determined based on the positioning information of the target person; and to perform a second correction on the pixel temperature values in the positioning frame of the first corrected thermal imaging image frame according to the gender-and-age correction factor and/or the time correction factor to obtain the target temperature of the target person.
  • the above-mentioned first temperature correction module 508 is further configured to perform a second correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the gender-and-age correction factor to obtain a second corrected thermal imaging image frame; perform a third correction on the pixel temperature values distributed in the positioning frame of the second corrected thermal imaging image frame according to the time correction factor to obtain a third corrected thermal imaging image frame; and determine the target temperature of the target person according to the corrected pixel temperature values distributed in the positioning frame of the third corrected thermal imaging image frame.
  • the above-mentioned first temperature correction module 508 is further configured to extract the facial features of the target person in the target visible light image frame; identify the gender and age information of the target person according to the facial features; and perform the second correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the gender and age information and the preset gender-and-age mapping table, where the gender-and-age mapping table records temperature correction values corresponding to different genders and different age ranges.
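  • A lookup of the gender-and-age mapping table could look like the sketch below; only the 3-10 year entries follow the example in the description (boy +0.5 °C, girl +0.8 °C), and the remaining rows of the preset table are not reproduced here:

```python
# Partial gender-and-age mapping table (corrections in degrees Celsius); further rows
# would come from the preset table, which is given as an image in the description.
GENDER_AGE_TABLE = {
    ("male", (3, 10)): +0.5,
    ("female", (3, 10)): +0.8,
}

def gender_age_correction(gender, age, table=GENDER_AGE_TABLE):
    for (g, (low, high)), delta in table.items():
        if g == gender and low <= age <= high:
            return delta
    return 0.0   # no correction when the person falls outside every listed range
```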
  • the above-mentioned first temperature correction module 508 is further configured to obtain the shooting time of the target reference image frame pair, and to perform the second correction on the pixel temperature values distributed in the positioning frame of the first corrected thermal imaging image frame according to the shooting time and the preset time mapping table, where the time mapping table records temperature correction values corresponding to different time intervals.
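  • The time mapping lookup is analogous; the interval values below are taken from Table 2 of the description, while the fallback of 0 °C outside the listed intervals is an assumption:

```python
# Time mapping table from the description (hour intervals -> correction in degrees Celsius).
TIME_TABLE = [((6, 10), -0.5), ((10, 12), +0.3), ((12, 14), +0.5),
              ((14, 18), 0.0), ((18, 22), -0.4)]

def time_correction(shooting_hour, table=TIME_TABLE):
    for (start, end), delta in table:
        if start <= shooting_hour < end:
            return delta
    return 0.0
```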
  • the above-mentioned temperature correction device further includes a second temperature correction module (not shown in the figure). The second temperature correction module is configured to determine the shooting distance between a person and the shooting device of the target reference image frame pair according to the target visible light image frame, and to perform temperature correction on the target thermal imaging image frame based on the shooting distance and preset temperature correction factors, where the preset temperature correction factors include a gender-and-age correction factor and/or a time correction factor.
  • the above-mentioned second temperature correction module is further configured to determine the pixel proportion of the target person's face area in the target visible light image frame according to the position information corresponding to the face of the target person in the target visible light image frame, where the target person is determined based on the tracking ID; and to determine the shooting distance between the target person and the shooting device of the target reference image frame pair according to the pixel proportion.
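  • A sketch of this pixel-proportion approach is given below; the inverse-square-root mapping and the constant `k` are assumptions for illustration only, since the description only states that a larger face area (larger pixel proportion) corresponds to a shorter shooting distance:

```python
def face_pixel_ratio(face_box, frame_shape):
    """Share of the visible light frame's pixels covered by the face positioning frame."""
    x1, y1, x2, y2 = face_box
    height, width = frame_shape[:2]
    return ((x2 - x1) * (y2 - y1)) / float(height * width)

def distance_from_ratio(ratio, k=0.05):
    """Map the pixel proportion to a shooting distance (larger face -> shorter distance)."""
    return k / (ratio ** 0.5) if ratio > 0 else float("inf")
```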
  • the above-mentioned image acquisition module 502 is further configured to capture images of a designated area with a dual-lens camera to obtain multiple original reference image frame pairs, where the dual-lens camera includes a visible light camera and an infrared camera, and each original visible light image frame collected by the visible light camera corresponds to an original thermal imaging image frame collected by the infrared camera to form an original reference image frame pair; perform face detection on the original visible light image frames to determine, among the original visible light image frames, the target visible light image frames that contain persons; and determine the original reference image frame pairs corresponding to the target visible light image frames as the target reference image frame pairs containing persons.
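  • Selecting the target reference image frame pairs could be sketched as follows, treating the face detector (e.g. a Faster R-CNN style model, as mentioned in the description) as a black box:

```python
def select_target_pairs(raw_pairs, detect_faces):
    """Keep only the (visible, thermal) original reference image frame pairs whose
    visible light frame contains at least one person, together with the face boxes."""
    target_pairs = []
    for visible, thermal in raw_pairs:
        boxes = detect_faces(visible)   # list of face bounding boxes, possibly empty
        if boxes:
            target_pairs.append((visible, thermal, boxes))
    return target_pairs
```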
  • the image conversion module 504 is further configured to: input the target visible light image frame into a preset depth map conversion model, and convert the target visible light image frame into a depth image frame through the depth map conversion model.
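  • The conversion step itself can be summarised by the sketch below; the monocular depth network is a placeholder for the preset depth map conversion model, which according to the description also uses the visible light camera's intrinsic matrix and the frame's pixel coordinates:

```python
import torch

def to_depth_frame(visible_frame: torch.Tensor, depth_model: torch.nn.Module) -> torch.Tensor:
    """Convert a target visible light image frame into a depth image frame using a
    preset depth map conversion model (here, any monocular depth estimation network)."""
    with torch.no_grad():
        depth = depth_model(visible_frame.unsqueeze(0))   # add a batch dimension
    return depth.squeeze(0)                               # depth image frame for the single input
```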
  • the above-mentioned temperature correction device further includes a heat-generating person tracking module (not shown in the figure). The heat-generating person tracking module is configured to determine the heat-generating persons in the target visible light image frame according to the target temperature of each person and a preset temperature threshold, and obtain the target tracking IDs of the heat-generating persons; obtain, from other reference image frame pairs within a preset time of the target reference image frame pair, multiple visible light image frames to be tracked that carry a target tracking ID; and track the heat-generating persons according to the shooting times and shooting locations of the visible light image frames to be tracked.
  • this embodiment provides a temperature correction system, which includes: an image acquisition device, a processor, and a storage device; wherein the image acquisition device is configured to collect target reference image frame pairs; the storage device stores A computer program, when the computer program is run by a processor, executes the steps of the method described herein, especially any one of the temperature correction methods provided in the foregoing methods.
  • this embodiment also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processing device, the steps of the methods described herein, in particular the steps of the above temperature correction methods, are executed.
  • the computer program product of the temperature correction method, device and system provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the methods described in the foregoing method embodiments; for specific implementations, reference may be made to the method embodiments, which will not be repeated here.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
  • the present disclosure uses the atmospheric inverse scattering model to perform temperature correction, which not only ensures higher temperature detection efficiency, but also effectively uses the relationship between depth and temperature correction value to correct the temperature of the personnel, thereby obtaining more accurate results. Since the depth reflects the distance between the camera and the person, this method takes into account and corrects the influence of the distance on the temperature, which can effectively improve the accuracy of the person's temperature detection when the temperature is measured in a non-contact manner.

Abstract

A temperature correction method, device and system, relating to the field of computer technology. The method includes: acquiring a target reference image frame pair containing a person, where the target reference image frame pair includes a target visible light image frame and a target thermal imaging image frame (S202); converting the target visible light image frame into a depth image frame (S204); inputting the depth image frame and the target thermal imaging image frame into an atmospheric inverse scattering model, the atmospheric inverse scattering model being a neural network model obtained by pre-fitting the relationship between depth and the temperature correction value (S206); and performing temperature correction on the target thermal imaging image frame based on the depth image frame through the atmospheric inverse scattering model to obtain a target temperature (S208). The accuracy of person temperature detection can be effectively improved.

Description

温度修正方法、装置及系统
相关申请的交叉引用
本申请要求于2020年4月7日提交中国专利局的申请号为202010267395.7、名称为“温度修正方法、装置及系统”的中国专利申请的优先权，其全部内容通过引用结合在本申请中。
技术领域
本公开涉及计算机技术领域,尤其是涉及一种温度修正方法、装置及系统。
背景技术
针对诸如新冠肺炎、流感等大多数疫情,“发烧”、“高温”是疑似病原携带者较为常见的显著特征。在此情况下,当疫情爆发时,通常会对体温检测设备有较大的需求量。目前常用的体温检测技术主要是红外成像技术,通过红外成像技术探测目标物体的热成像图像,热成像图像是记录人员等物体的温度的图像。
发明内容
本公开的目的包括例如提供一种温度修正方法、装置及系统,能够有效提高人员温度测量的准确性。
本公开实施例提供了一种温度修正方法,包括:获取包含有人员的目标参考图像帧对;其中,所述目标参考图像帧对包括目标可见光图像帧和目标热成像图像帧;将所述目标可见光图像帧转换为深度图像帧;将所述深度图像帧和所述目标热成像图像帧输入大气逆散射模型;其中,所述大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度。
在一种或多种实施例中,所述方法还包括:根据跟踪算法确定所述目标可见光图像帧中人员的跟踪信息;其中,所述跟踪信息包括定位信息和跟踪ID;所述定位信息包括人脸和/或人体在所述目标可见光图像帧中的位置信息;所述跟踪ID用于标识所述目标可见光图像帧中不同的人员,且同一人员在不同可见光图像帧中的跟踪ID相同。
在一种或多种实施例中,通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度的步骤,包括:通过所述大气逆散射模型根据目标人员的定位信息提取所述深度图像帧的像素深度值和所述目标热成像图像帧的像素温度值;其中,所述目标人员为基于所述跟踪ID确定的;根据所述深度与温度修正值的关系确 定所述像素深度值对应的温度修正值;根据所述温度修正值对所述像素温度值进行修正,得到所述目标人员的目标温度。
在一种或多种实施例中,根据所述深度与温度修正值的关系确定的所述像素深度值对应的温度修正值为第一温度修正值。
在一种或多种实施例中,通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度的步骤,包括:通过所述大气逆散射模型中的深度与温度修正值的关系,基于所述深度图像帧确定第一温度修正值;根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度;其中,所述预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素,所述目标人员为基于所述跟踪ID确定的。
在一种或多种实施例中,通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度的步骤,包括:通过所述大气逆散射模型根据目标人员的定位信息提取所述深度图像帧的像素深度值和所述目标热成像图像帧的像素温度值;根据所述深度与温度修正值的关系确定所述像素深度值对应的第一温度修正值;根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度;其中,所述预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素,所述目标人员为基于所述跟踪ID确定的。
在一种或多种实施例中,根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度的步骤,包括:根据所述性别年龄修正因素和预设的性别年龄映射表确定目标人员的第二温度修正值;其中,所述性别年龄映射表中记录有不同性别和不同年龄区间分别对应的温度修正值;和/或,根据所述时间修正因素和预设的时间映射表确定所述目标人员的第三温度修正值;其中,所述时间映射表中记录有不同时间区间分别对应的温度修正值;根据预设的权重将所述第一温度修正值与所述第二温度修正值和/或所述第三温度修正值进行加权,得到目标温度修正值;根据所述目标温度修正值对所述目标热成像图像帧的定位框内的像素温度值进行修正,得到目标人员的目标温度;其中,所述定位框为基于所述目标人员的定位信息确定的。
在一种或多种实施例中,根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度的步骤,包括:根据所述第一温度修正值对所述目标热成像图像帧的定位框内的像素温度值进行修正,得到所述目标热成像图像帧的第一修正热成像图像帧;其中,所述定位框为基于所述目标人员的定位信息确定的;根据所述性别年龄修正因素和/或所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正,得到所述目标人员的目标温度。
在一种或多种实施例中,根据所述性别年龄修正因素和/或所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正,得到所述目标人员的目标温度的步骤,包括:根据所述性别年龄修正因素对所述第一修正热成像图像帧的所述定位框内分布的像素温度值进行修正,得到第二修正热成像图像帧;根据所述时间修正因素对所述第二修正热成像图像帧的所述定位框内分布的像素温度值进行修正,得到第三修正热成像图像帧;根据所述第三修正热成像图像帧的所述定位框内分布的修正后像素温度值确定目标人员的目标温度。
在一种或多种实施例中,根据所述性别年龄修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正的步骤,包括:提取所述目标可见光图像帧中的所述目标人员的人脸特征;根据所述人脸特征识别所述目标人员的性别年龄信息;根据所述性别年龄信息以及预设的性别年龄映射表,对所述第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,所述性别年龄映射表中记录有不同性别和不同年龄区间分别对应的温度修正值。
在一种或多种实施例中,根据所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正的步骤,包括:获取所述目标参考图像帧对的拍摄时间;根据所述拍摄时间和预设的时间映射表,对所述第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,所述时间映射表中记录有不同时间区间分别对应的温度修正值。
在一种或多种实施例中,所述方法还包括:根据所述目标可见光图像帧确定人员与所述目标参考图像帧对的拍摄设备之间的拍摄距离;基于所述拍摄距离和预设的温度修正因素对所述目标热成像图像帧进行温度修正;其中,所述预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素。
在一种或多种实施例中,根据所述目标可见光图像帧确定人员与所述目标参考图像帧对的拍摄设备之间的拍摄距离的步骤,包括:根据所述目标可见光图像帧中目标人员的人脸对应的位置信息,确定所述目标人员的人脸区域在所述目标可见光图像帧中的像素占比值;其中,所述目标人员为基于所述跟踪ID确定的;根据所述像素占比值确定所述目标人员与所述目标参考图像帧对的拍摄设备之间的拍摄距离。
在一种或多种实施例中,获取包含有人员的目标参考图像帧对的步骤,包括:通过双光相机对指定区域进行图像采集,得到多对原始参考图像帧对;其中,所述双光相机包含可见光相机和红外相机,所述可见光相机采集的原始可见光图像帧与所述红外相机采集的原始热成像图像帧互相对应,形成所述原始参考图像帧对;通过对所述原始可见光图像帧进行人脸检测,在所述原始可见光图像帧中确定包含有人员的目标可见光图像帧;将所述 目标可见光图像帧对应的原始参考图像帧对确定为包含有人员的目标参考图像帧对。
在一种或多种实施例中,将所述目标可见光图像帧转换为深度图像帧的步骤,包括:将所述目标可见光图像帧输入至预设的深度图转换模型,通过所述深度图转换模型将所述目标可见光图像帧转换为深度图像帧。
在一种或多种实施例中,所述方法还包括:根据各个人员的目标温度和预设温度阈值,在所述目标可见光图像帧中确定发热人员,并获取所述发热人员的目标跟踪ID;从距离所述目标参考图像帧对预设时间内的其他参考图像帧对中,获取具有所述目标跟踪ID的多张待跟踪可见光图像帧;根据所述待跟踪可见光图像帧的拍摄时间和拍摄地点对所述发热人员进行追踪。
本公开实施例还提供一种温度修正装置,包括:图像获取模块,配置成获取包含有人员的目标参考图像帧对;其中,所述目标参考图像帧对包括目标可见光图像帧和目标热成像图像帧;图像转换模块,配置成将所述目标可见光图像帧转换为深度图像帧;图像输入模块,配置成将所述深度图像帧和所述目标热成像图像帧输入大气逆散射模型;其中,所述大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;温度修正模块,配置成通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度。
本公开实施例提供了一种温度修正系统,所述系统包括:图像采集装置、处理器和存储装置;所述图像采集装置,配置成采集目标参考图像帧对;所述存储装置上存储有计算机程序,所述计算机程序在被所述处理器运行时执行如本文所述的方法。
本公开实施例提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器运行时执行本文所述的方法的步骤。
发明人发现,在现有技术中,在红外测温领域,拍摄热图像的红外设备与物体之间的距离对热成像图像中的温度检测结果有较大影响,使得测温结果准确性较低。特别地,在检测疾病相关对象的体温时,会导致将体温正常的健康对象判定为疾病患者,或将体温超过阈值的疾病患者判定为健康人员。
本公开实施例提供了一种温度修正方法、装置及系统,在获取到包含有人员的目标可见光图像帧和目标热成像图像帧后,先将目标可见光图像帧转换为深度图像帧,然后将深度图像帧和目标热成像图像帧输入大气逆散射模型;该大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;最后通过大气逆散射模型基于深度图像帧对目标热成像图像帧进行温度修正,得到目标温度。相比于现有技术中,直接基于探测到目标热成像图像帧确定的人员温度,上述通过大气逆散射模型进行温度修正的方式,在能够保证较高温度检测效率的同时,还有效利用了深度与温度修正值的关系来修正人员的温 度,从而得到更为准确的结果。由于深度体现的是相机与人员的距离,从而该方式考虑到并修正了距离对温度的影响,从而能够在非接触式下测温时,有效提高人员温度检测的准确性。
本公开的其他特征和优点将在随后的说明书中阐述,或者,部分特征和优点可以从说明书推知或毫无疑义地确定,或者通过实施本公开的上述技术即可得知。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本公开具体实施方式或现有技术中的技术方案,下面将对具体实施方式或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本公开的一些实施方式,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示出了本公开实施例所提供的一种电子设备的结构示意图;
图2示出了本公开实施例所提供的一种温度修正方法流程图;
图3示出了本公开实施例所提供的一种温度修正方式示意图;
图4示出了本公开实施例所提供的另一种温度修正方式示意图;
图5示出了本公开实施例所提供的一种温度修正装置结构框图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合附图对本公开的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
考虑到现有体温检测技术中,体温检测结果受测温设备(例如红外设备)与被测温物体之间的距离影响较大,使得测温结果准确性通常较低。基于此,本公开实施例提供了一种温度修正方法、装置及系统。该温度修正方法、装置及系统可以在社区、车站、医院等体温检疫场所,应用于人体温度监测、检测和测量等功能,还可以应用于如水杯、手机等物体的温度监测、检测和测量场景。为便于理解,以下对本公开实施例进行详细介绍。
实施例
首先,参照图1来描述用于实现本公开实施例的温度修正方法、装置及系统的示例电 子设备100。
图1示出了一种电子设备的结构示意图,其中电子设备100包括一个或多个处理器102、一个或多个存储装置104、输入装置106、输出装置108以及图像采集装置110,这些组件通过总线系统112和/或其它形式的连接机构(未示出)互连。应当注意,图1所示的电子设备100的组件和结构只是示例性的,而非限制性的,根据需要,所述电子设备可以具有图1示出的部分组件,也可以具有图1未示出的其他组件和结构。例如,电子设备100中两个或更多个部件之间无线连接,诸如蓝牙、无线保真(Wifi)和物联网。
所述处理器102可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其它形式的处理单元,并且可以控制所述电子设备100中的其它组件以执行期望的功能。
所述存储装置104可以包括一个或多个计算机程序产品,所述计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。所述易失性存储器例如包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上存储一个或多个计算机程序指令,处理器102可以运行所述程序指令,以实现下文所述的本公开实施例中(由处理器实现)的客户端功能以及/或者其它期望的功能。在所述计算机可读存储介质中还可以存储各种应用程序和各种数据,例如所述应用程序使用和/或产生的各种数据等。
所述输入装置106可以是用户用来输入指令的装置,并且可以包括键盘、鼠标、麦克风和触摸屏等中的一个或多个。
所述输出装置108可以向外部(例如,用户)输出各种信息(例如,图像或声音),并且可以包括显示器、扬声器等中的一个或多个。
所述图像采集装置110可以拍摄用户期望的图像(例如照片、视频等),并且将所拍摄的图像存储在所述存储装置104中以供其它组件使用。
示例性地,用于实现根据本公开实施例的一种温度修正方法、装置及系统的示例电子设备被实现为诸如智能手机、摄像机、测温设备和平板电脑等智能终端上。
参照图2所示的一种温度修正方法的流程图,该方法主要包括如下步骤S202至步骤S208:
步骤S202,获取包含有人员的目标参考图像帧对;其中,目标参考图像帧对可以通过双光相机对监测区域中的人员进行图像采集得到。双光相机为一种双相机结构的组合相机,本实施例中的双光相机为可见光相机和热成像相机的组合,其中可见光相机和热成像相机通过上下排列或左右排列等方式组合为整体。可见光相机采集的含有人员的目标可见光图像帧和热成像相机采集的含有人员的目标热成像图像帧相互对应,形成目标参考图像帧对。
步骤S204,将目标可见光图像帧转换为深度图像帧。在一种实现方式中,可以将目标可见光图像帧输入至预设的深度图转换模型,通过深度图转换模型将目标可见光图像帧转换为深度图像帧。其中,深度图转换模型根据可见光相机的内参矩阵和目标可见光图像帧的像素坐标输出目标可见光图像帧对应的深度图像帧。上述经转换得到的深度图像将从双光相机到人员的各点的距离(深度)作为像素深度值,该像素深度值能够反映双光相机与人员之间拍摄距离的远近程度。
步骤S206,将深度图像帧和目标热成像图像帧输入大气逆散射模型;其中,大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型。
步骤S208,通过大气逆散射模型基于深度图像帧对目标热成像图像帧进行温度修正,得到目标温度。
在本实施例中,大气逆散射模型可以提取出深度图像帧的像素深度值和目标热成像图像帧中人员的像素温度值,根据像素深度值以及深度与温度修正值的关系对像素温度值进行修正,并输出对人员温度修正后的目标温度。
本实施例提供的温度修正方法,在获取到包含有人员的目标可见光图像帧和目标热成像图像帧后,先将目标可见光图像帧转换为深度图像帧,然后将深度图像帧和目标热成像图像帧输入大气逆散射模型;该大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;最后通过大气逆散射模型基于深度图像帧对目标热成像图像帧进行温度修正,得到目标温度。相比于现有技术中,直接基于探测到目标热成像图像帧确定的人员温度,上述通过大气逆散射模型进行温度修正的方式,在能够保证较高温度检测效率的同时,还有效利用了深度与温度修正值的关系来修正人员的温度,从而得到更为准确的结果。由于深度体现的是相机与人员的距离,该方式考虑到并修正了距离对温度的影响,从而能够在非接触式下测温时,有效提高人员温度检测的准确性。
在本公开的一个实施例中,上述包含有人员的目标参考图像帧对具体可以采取如下的方法获取:
首先通过双光相机对指定区域进行图像采集,得到多对原始参考图像帧对;其中,双光相机中的可见光相机采集的原始可见光图像帧与红外相机采集的原始热成像图像帧互相对应,形成原始参考图像帧对。
然后通过对原始可见光图像帧进行人脸检测,在原始可见光图像帧中确定包含有人员的目标可见光图像帧。对原始可见光图像帧进行人脸检测可以采用现有的人脸检测方法,例如基于卷积神经网络的人脸检测方法(R-CNN、Fast R-CNN、Faster R-CNN等)。这些方法能够确定原始可见光图像帧中是否包含有人员,以及在包含有人员的目标可见光图像帧中预测产生人员的定位信息,该定位信息通常包括人脸和/或人体在目标可见光图像帧中的 位置信息。
目标热成像图像帧中除了人员之外的物体(如植物、车辆)均具有温度值,这些物体不但会对人员温度的修正造成干扰,还会增加计算量。因此,可选地,在执行上述步骤S208时,只对目标热成像图像帧中的人员所在的区域进行温度修正,以此减少温度修正的数据量,提高温度修正效率。为了获取目标热成像图像帧中至少一个人员的区域,提供一种人员跟踪方法,参照如下所示:根据跟踪算法确定目标可见光图像帧中人员的跟踪信息;其中,跟踪信息包括定位信息和跟踪ID;定位信息包括人脸和/或人体在目标可见光图像帧中的位置信息;跟踪ID用于标识目标可见光图像帧中不同的人员,且同一人员在不同可见光图像帧中的跟踪ID相同。
在实际应用场景中,目标可见光图像帧通常为连续的图像帧序列中的多帧,且多帧目标可见光图像帧中包含有相同的人员。目标可见光图像帧中的人员在全身不被遮挡的情况下,定位信息包括在目标可见光图像帧中的位置信息;目标可见光图像帧中的人员在面部或身体等局部被遮挡的情况下,定位信息包括人脸或人体等未被遮挡部分在目标可见光图像帧中的位置信息。
可以理解的是,由于深度图像帧是由目标可见光图像帧转换得到的,由此在确定目标可见光图像帧中人员的跟踪信息后,也就可以确定深度图像帧中人员的跟踪信息。
对于目标热成像图像帧中人员的跟踪信息的确定方式,在实际应用中,由于双光相机中的可见光相机和热成像相机的空间排列方式,使得所拍摄的可见光图像帧与红外光图像帧无法完全对齐、存在一定的空间偏差。为了改善图像帧空间偏差造成的跟踪信息偏差,本实施例可以根据可见光相机和热成像相机的空间排列方式通和相机的参数预先确定可见光图像帧与红外光图像帧的位置对应关系,然后再根据该位置对应关系和目标可见光图像帧中人员的跟踪信息,从而准确地确定目标热成像图像帧中人员的跟踪信息。
基于上述方法确定的人员的跟踪信息,步骤S208中通过大气逆散射模型对目标热成像图像帧进行温度修正的步骤可参照如下步骤(1)至(3)实现:
(1)通过大气逆散射模型根据目标人员的定位信息提取深度图像帧的像素深度值和目标热成像图像帧的像素温度值;其中,基于跟踪ID确定目标人员;可以根据跟踪ID将各个人员逐一作为目标人员,也可以将每个人员都作为目标人员,并通过跟踪ID来区分不同的目标人员。
基于定位信息可以确定人员在目标可见光图像帧中的定位框,该定位框包括人脸和/或人体对应的一个或多个位置框。通过大气逆散射模型提取深度图像帧中目标人员所在定位框内的像素深度值和目标热成像图像帧中目标人员所在定位框内的像素温度值。在此情况下,可以减少像素深度值和像素温度值的提取数量,提高大气逆散射模型的提取效率,并 有利于减少其它物体温度的干扰,以提高后续温度修正的效率和准确性。
(2)根据深度与温度修正值的关系确定像素深度值对应的温度修正值。深度与温度修正值的关系可以通过曲线或者函数表示。每一个深度都对应有匹配的温度修正值,该温度修正值可能为正可能为负,基于此确定像素深度值对应的温度修正值。目标人员所在定位框内的像素深度值一般为多个,本实施例可以根据深度与温度修正值的关系确定各个像素深度值对应的温度修正值;还可以先根据目标人员所在定位框内的像素深度值确定像素深度值代表,该像素深度值代表诸如为多个像素深度值中的平均数、众数或中位数等,然后再根据深度与温度修正值的关系确定像素深度值代表对应的温度修正值。
(3)根据温度修正值对像素温度值进行修正,得到目标人员的目标温度。
在本实施例中,可选地,温度修正值可以为各个像素深度值对应的温度修正值,在此情况下,根据各温度修正值对目标热成像图像帧中对应像素位置处的像素温度值进行修正,其中,目标热成像图像帧中对应像素位置是基于目标可见光图像帧和目标热成像图像帧的对应位置关系获取的。可选地,温度修正值可以为像素深度值代表对应的温度修正值,在此情况下,先根据目标人员所在定位框内的像素温度值确定像素温度值代表,然后根据温度修正值对像素温度值代表进行修正。
在修正时可以将温度修正值与像素温度值相加。当温度修正值为负时,相对于未修正的像素温度值,修正后的目标温度降低;当温度修正值为正时,相对于未修正的像素温度值,修正后的目标温度增加。
该实施例提供的温度修正方式,可以基于定位信息只对人员所在的图像区域进行温度修正,这样既能够减少植物、车辆等物体对人员温度的修正造成干扰,以提高温度的准确性,还能够减少计算量,提高温度修正效率。
考虑到除了深度(也即拍摄设备与人员之间的距离)因素会对热成像图像帧中的温度有较大影响之外,年龄、性别和环境温度等诸多因素也会对温度测量的准确性造成影响。基于此,可选地,在对目标热成像图像帧中的人员进行温度修正的过程中,可以在深度/距离修正因素的基础上结合其它温度修正因素对目标热成像图像帧中的人员温度值进行修正,参照如下步骤(一)和步骤(二)所示:
(一)通过大气逆散射模型中的深度与温度修正值的关系,基于深度图像帧确定第一温度修正值。可以通过大气逆散射模型先提取像素深度值和像素温度值,然后根据深度与温度修正值的关系确定像素深度值对应的第一温度修正值;具体实现方式可参照上述步骤(1)和步骤(2),在此不再展开描述。
(二)根据第一温度修正值和预设的温度修正因素对目标热成像图像帧进行温度修正,得到目标人员的目标温度;其中,预设的温度修正因素可以包括但不限于:性别年龄修正 因素和/或时间修正因素,性别年龄修正因素是针对不同年龄段、不同性别的人群存在体温差别而设定的温度修正因素,时间修正因素是针对一天不同的时间段中人员体温差别而设定的温度修正因素。
为了便于理解,参照图3,在此给出根据第一温度修正值和预设的温度修正因素对目标热成像图像帧进行温度修正的一种可选实施方式,参照如下步骤1和步骤2:
步骤1,根据第一温度修正值对目标热成像图像帧的定位框内的像素温度值进行修正,得到目标热成像图像帧的第一修正热成像图像帧;上述定位框为基于目标人员的定位信息确定的。
在具体实现时,首先根据第一温度修正值对目标热成像图像帧的定位框内的像素温度值进行修正,得到目标热成像图像帧的定位框内的修正后像素温度值;然后基于目标热成像图像帧的定位框内的修正后像素温度值和目标热成像图像帧的其它区域(除定位框之外的区域)内初始的像素温度值,得到目标热成像图像帧的第一修正热成像图像帧。
步骤2,根据性别年龄修正因素和/或时间修正因素,对第一修正热成像图像帧的定位框内的像素温度值进行二次修正,得到目标人员的目标温度。
上述性别年龄修正因素和时间修正因素可以择一采用,也可以同时采用。可选地,当同时根据性别年龄修正因素和时间修正因素对第一修正热成像图像帧中定位框内的温度作进一步修正时,本实施例并不限定性别年龄修正因素和时间修正因素的温度修正顺序。例如,参照图3所示:可以先根据性别年龄修正因素对第一修正热成像图像帧的定位框内分布的像素温度值进行修正,得到第二修正热成像图像帧;然后再根据时间修正因素对第二修正热成像图像帧的定位框内分布的像素温度值进行修正,得到第三修正热成像图像帧。在此情况下,可以根据第三修正热成像图像帧的定位框内分布的修正后像素温度值确定目标人员的目标温度,诸如将第三修正热成像图像帧的定位框内分布的修正后像素温度值的平均值作为目标人员的目标温度。
下面将分别介绍性别年龄修正因素和时间修正因素的具体温度修正方式。
根据性别年龄修正因素对第一修正热成像图像帧的定位框内的像素温度值进行二次修正的步骤,包括:(i)提取目标可见光图像帧中的目标人员的人脸特征;根据人脸特征识别目标人员的性别年龄信息;例如,通过深度学习网络模型对目标可见光图像帧中人员的人脸对应的位置框进行特征提取,得到人脸特征;根据人脸特征识别目标人员的性别年龄信息。其中,用于识别人员的性别年龄信息的深度学习网络模型,其本身是一种现有方法,这里不再详细阐述原理。(ii)根据性别年龄信息以及预设的性别年龄映射表,对第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,性别年龄映射表可参照如下表1所示,表1中记录有不同性别和不同年龄区间分别对应的温度修正值。例如3-10 岁年龄区间对应的温度修正值有两个,分别为男孩的映射值为+0.5和女孩的映射值为+0.8。以男孩的映射值(+0.5)为例,其表示将第一修正热成像图像帧中该男孩的温度增加0.5摄氏度(℃)。
表1:性别年龄映射表
（表1在原文中以图像形式给出，其中记录有不同性别和不同年龄区间分别对应的温度修正值）
表1中,映射值的单位为摄氏度(℃)。
根据时间修正因素,对第一修正热成像图像帧的定位框内的像素温度值进行二次修正的步骤,包括:获取目标参考图像帧对的拍摄时间;根据拍摄时间和预设的时间映射表,对第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,时间映射表可参照如下表2所示。表2中记录有不同时间区间分别对应的温度修正值,例如目标参考图像帧对的拍摄时间为8时,处于6-10时的时间区间,其对应的温度修正值为-0.5,表示将第一修正热成像图像帧中人员的定位框内分布的像素温度值减少0.5摄氏度。
表2:时间映射表
时间区间 6-10时 10-12时 12-14时 14-18时 18-22时
温度修正值 -0.5 +0.3 +0.5 +0 -0.4
表2中，温度修正值的单位为摄氏度（℃）。
本实施例提供的上述温度修正方式,对于目标修正热成像图像帧中人员的温度,依次经过距离的初始修正和预设的温度修正因素的再次修正,能够有效提升人员温度检测的准确性。
在本公开实施例的另一个可选的实施方式中,还可以参照图4所示的温度修正方式实现上述步骤(二),该方式可以包括如下步骤1)至步骤3):
步骤1),根据性别年龄修正因素和预设的性别年龄映射表确定目标人员的第二温度修正值;和/或,根据时间修正因素和预设的时间映射表确定目标人员的第三温度修正值;其中,性别年龄映射表可参照上述表1,时间映射表可参照上述表2。
步骤2),根据预设的权重将第一温度修正值与第二温度修正值和/或第三温度修正值进行加权计算,得到目标温度修正值。
在具体实现时,可参照如下公式得到目标温度修正值:
ΔP = λ₁ΔP₁ + λ₂ΔP₂ + λ₃ΔP₃
其中，ΔP为目标温度修正值，ΔP₁为第一温度修正值，λ₁为第一温度修正值对应的权重，ΔP₂为第二温度修正值，λ₂为第二温度修正值对应的权重，ΔP₃为第三温度修正值，λ₃为第三温度修正值对应的权重；其中，λ₁+λ₂+λ₃=1，而且，为了灵活适应实际体温检测场景，λ₁、λ₂、λ₃中的一项或两项可以为0。例如当λ₂、λ₃为0时，λ₁为1，表示在当前的体温检测场景中，仅基于深度与温度修正值的关系对温度进行修正。上述基于权重得到目标温度修正值的方式，既能够提高体温检测的准确性又可以更好地适应实际体温检测场景，以适当减小体温的计算量。
步骤3），根据目标温度修正值对目标热成像图像帧的定位框内的像素温度值进行修正，诸如参照图4，将定位框内的像素温度值P₀与目标温度修正值ΔP相加，得到目标人员修正后的目标温度P。
在本实施例提供的温度修正方式中,通过对深度/距离修正因素、性别年龄修正因素、时间修正因素进行灵活选择,使得温度修正方式能够更好地适应当前的体温检测场景,有助于提高体温检测的准确性,同时还能够在一定程度上控制体温修正过程中的计算量大小。
本公开实施例还可以提供另一种温度修正的方法,参照如下步骤A至步骤C所示:
步骤A、获取包含有人员的目标参考图像帧对;其中,目标参考图像帧对包括目标可见光图像帧和目标热成像图像帧。
步骤B、根据目标可见光图像帧确定人员与目标参考图像帧对的拍摄设备之间的拍摄距离。
容易理解的是,由于大气对进入双光相机的不同波长的光有一定的衰减作用,故拍摄距离会对目标热成像图像帧所反映的人员的温度的准确性造成影响。本实施例可以基于拍摄距离对目标热成像图像帧中人员的温度进行修正,以获得更准确的人员温度,从而很好地解决了上述问题。
由于目标可见光图像帧的拍摄距离一般与目标热成像图像帧的拍摄距离是相同的(例如使用双光相机时拍摄距离基本相同),或者是可以基于可见光图像帧与热成像图像帧之间的对应关系以及目标热成像图像帧的拍摄距离确定的,且一般目标可见光图像帧的质量较好,能够准确地描述目标的面部/人体特征、年龄、性别等相关信息,由此基于目标可见光图像帧可以较为准确地确定拍摄距离。
步骤C、基于拍摄距离对目标热成像图像帧进行温度修正;或者,基于拍摄距离和一种或多种预设的温度修正因素对目标热成像图像帧进行温度修正。其中,预设的温度修正 因素包括:性别年龄修正因素和/或时间修正因素。温度修正方式可以有多种,诸如根据拍摄距离度对目标热成像图像帧中的温度进行增加或减少。又诸如,先拟合拍摄距离与温度衰减之间的关系,然后再基于该关系修正目标热成像图像帧中人员的温度。温度修正的具体实现方式可参照上述基于深度图像帧的温度修正方式,在此不再展开描述。
同上述多个实施例中提供的温度修正方法,该实施例中的温度修正的方法,也是考虑到并修正了距离对温度的影响;同时,在该方式中,通常可见光图像帧的图像质量较高,由此确定的拍摄距离能够具有较高的准确性,进而通过有效利用准确度较高的拍摄距离来修正人员的温度。因此,本公开能够在非接触式下测温时,有效提高人员温度检测的准确性。
考虑到近大远小的视觉规律,以及在米这个距离单位的数量级下,每个人员的人脸、人体的大小可近似相等,基于此提供一种确定拍摄距离的实施方式一,参照如下步骤1)至步骤2):
1)根据目标可见光图像帧中目标人员的人脸对应的位置信息,确定目标人员的人脸区域在目标可见光图像帧中的像素占比值;其中,目标人员为基于跟踪ID确定的。
具体可以是先根据人脸对应的位置信息确定人脸区域的定位框,并统计人脸区域的位置框中的第一像素数量;然后统计目标可见光图像帧整体中的第二像素数量;最后将第一像素数量与第二像素数量的比值作为目标人员的人脸区域在目标可见光图像帧中的像素占比值。
2)根据像素占比值确定目标人员与目标参考图像帧对的拍摄设备之间的拍摄距离。像素占比值能够反映人脸在图像中的大小,结合近大远小的视觉规律,可以根据像素占比值确定人员的拍摄距离。
根据上述深度图像帧,在此还可以提供一种确定拍摄距离的实施方式二,参照如下步骤1至步骤2:
步骤1,根据目标可见光图像帧中目标人员的定位信息确定深度图像中目标人员的定位信息。深度图像帧是由目标可见光图像帧转换得到的,在此情况下,可以直接将目标可见光图像帧中的定位信息确定为深度图像帧中的定位信息。
步骤2,采集深度图像帧中目标人员的定位框内的像素深度值,根据像素深度值确定目标人员的拍摄距离;其中,定位框为基于目标人员的定位信息确定的。
可以理解的是,深度图像帧是指将从双光相机到人员的各点的距离(深度)作为像素深度值的图像,由此可以根据像素深度值确定人员的拍摄距离。
当然,以上两种确定拍摄距离的实施方式仅为示例性描述,不应理解为限制。
根据上述多个公开实施例提供的温度修正方法,本实施例还提供一种利用修正后温度 对发热人员进行追踪的示例,在该示例中,可以基于上述跟踪信息中的跟踪ID对发热人员进行追踪;热人员的追踪方法可以包括如下三步:
第一步,根据各个人员的修正后的目标温度和预设温度阈值,在目标可见光图像帧中确定发热人员,并获取发热人员的目标跟踪ID。其中,一张目标可见光图像帧中可能包含有一个或者多个发热人员,根据各个人员对应的唯一目标跟踪ID对每个发热人员分别进行追踪。
第二步,从距离目标参考图像帧对预设时间内的其他参考图像帧对中,获取具有目标跟踪ID的多张待跟踪可见光图像帧。在具体实现时,将目标参考图像帧对的拍摄时间作为基准时间,预设时间可以为基准时间之前的一段时间、基准时间之后的一段时间或者为包含基准时间的一段时间。距离目标参考图像帧对预设时间内的其他参考图像帧对有多对,将各对参考图像帧对中的可见光图像帧选取为待跟踪可见光图像帧。
第三步,根据待跟踪可见光图像帧的拍摄时间和拍摄地点对发热人员进行追踪。基于预设时间的多种可能,对应的待跟踪可见光图像帧可以为历史可见光图像帧和/或最新可见光图像帧;在此情况下,可以根据历史可见光图像帧的拍摄时间和拍摄地点确定发热人员的历史运动轨迹,并基于发热人员的历史运动轨迹预测发热人员未来可能的运动轨迹;或者,还可以根据最新可见光图像帧的拍摄地点确定发热人员的最新位置。从而,可以将历史运动轨迹、未来可能的运动轨迹和最新位置中的至少一项作为发热人员的追踪信息。
综上,上述实施例提供的温度修正方法,考虑到距离对温度测量的影响,并根据距离对测得温度进行了修正,从而能够在非接触式下测温时,有效提高人员温度检测的准确性。
参见图5所示的一种温度修正装置的结构框图,该装置包括:
图像获取模块502,配置成获取包含有人员的目标参考图像帧对;其中,目标参考图像帧对包括目标可见光图像帧和目标热成像图像帧;
图像转换模块504,配置成将目标可见光图像帧转换为深度图像帧;
图像输入模块506,配置成将深度图像帧和目标热成像图像帧输入大气逆散射模型;其中,大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;
第一温度修正模块508,配置成通过大气逆散射模型基于深度图像帧对目标热成像图像帧进行温度修正,得到目标温度。
相比于现有技术中,直接基于探测到目标热成像图像帧确定的人员温度,本公开实施例提供的上述温度修正装置,通过大气逆散射模型进行温度修正,在能够保证较高温度检测效率的同时,还有效利用了深度与温度修正值的关系来修正人员的温度,从而得到更为准确的结果。由于深度体现的是相机与人员的距离,从而该方式考虑到并修正了距离对温 度的影响,从而能够在非接触式下测温时,有效提高人员温度检测的准确性。
在一种或多种实施方式中,上述温度修正装置还包括跟踪模块(图中未示出),该跟踪模块配置成:根据跟踪算法确定目标可见光图像帧中人员的跟踪信息;其中,跟踪信息包括定位信息和跟踪ID;定位信息包括人脸和/或人体在目标可见光图像帧中的位置信息;跟踪ID用于标识目标可见光图像帧中不同的人员,且同一人员在不同可见光图像帧中的跟踪ID相同。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:通过大气逆散射模型根据目标人员的定位信息提取深度图像帧的像素深度值和目标热成像图像帧的像素温度值;其中,目标人员为基于跟踪ID确定的;根据深度与温度修正值的关系确定像素深度值对应的温度修正值;根据温度修正值对像素温度值进行修正,得到目标人员的目标温度。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:通过大气逆散射模型中的深度与温度修正值的关系,基于深度图像帧确定第一温度修正值;根据第一温度修正值和预设的温度修正因素对目标热成像图像帧进行温度修正,得到目标人员的目标温度;其中,预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素,目标人员为基于跟踪ID确定的。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:根据性别年龄修正因素和预设的性别年龄映射表确定目标人员的第二温度修正值;其中,性别年龄映射表中记录有不同性别和不同年龄区间分别对应的温度修正值;和/或,根据时间修正因素和预设的时间映射表确定目标人员的第三温度修正值;其中,时间映射表中记录有不同时间区间分别对应的温度修正值;根据预设的权重将第一温度修正值与第二温度修正值和/或第三温度修正值进行加权,得到目标温度修正值;根据目标温度修正值对目标热成像图像帧的定位框内的像素温度值进行修正,得到目标人员的目标温度;其中,定位框为基于目标人员的定位信息确定的。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:根据第一温度修正值对目标热成像图像帧的定位框内的像素温度值进行修正,得到目标热成像图像帧的第一修正热成像图像帧;其中,定位框为基于目标人员的定位信息确定的;根据性别年龄修正因素和/或时间修正因素,对第一修正热成像图像帧的定位框内的像素温度值进行二次修正,得到目标人员的目标温度。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:根据性别年龄修正因素对第一修正热成像图像帧的定位框内分布的像素温度值进行第二次修正,得到第二修正热成像图像帧;根据时间修正因素对第二修正热成像图像帧的定位框内分布的像素温度值进行第三次修正,得到第三修正热成像图像帧;根据第三修正热成像图像帧的定位框内 分布的修正后像素温度值确定目标人员的目标温度。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:提取目标可见光图像帧中的目标人员的人脸特征;根据人脸特征识别目标人员的性别年龄信息;根据性别年龄信息以及预设的性别年龄映射表,对第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,性别年龄映射表中记录有不同性别和不同年龄区间分别对应的温度修正值。
在一种或多种实施方式中,上述第一温度修正模块508还配置成:获取目标参考图像帧对的拍摄时间;根据拍摄时间和预设的时间映射表,对第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,时间映射表中记录有不同时间区间分别对应的温度修正值。
在一种或多种实施方式中,上述温度修正装置还包括第二温度修正模块(图中未示出),该第二文档修正模块配置成:根据目标可见光图像帧确定人员与目标参考图像帧对的拍摄设备之间的拍摄距离;基于拍摄距离和预设的温度修正因素对目标热成像图像帧进行温度修正;其中,预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素。
在一种或多种实施方式中,上述第二温度修正模块还配置成:根据目标可见光图像帧中目标人员的人脸对应的位置信息,确定目标人员的人脸区域在目标可见光图像帧中的像素占比值;其中,目标人员为基于跟踪ID确定的;根据像素占比值确定目标人员与目标参考图像帧对的拍摄设备之间的拍摄距离。
在一种或多种实施方式中,上述图像获取模块502还配置成:通过双光相机对指定区域进行图像采集,得到多对原始参考图像帧对;其中,双光相机包含可见光相机和红外相机,可见光相机采集的原始可见光图像帧与红外相机采集的原始热成像图像帧互相对应,形成原始参考图像帧对;通过对原始可见光图像帧进行人脸检测,在原始可见光图像帧中确定包含有人员的目标可见光图像帧;将目标可见光图像帧对应的原始参考图像帧对确定为包含有人员的目标参考图像帧对。
在一种或多种实施方式中,上述图像转换模块504还配置成:将目标可见光图像帧输入至预设的深度图转换模型,通过深度图转换模型将目标可见光图像帧转换为深度图像帧。
在一种或多种实施方式中,上述温度修正装置还包括发热人员追踪模块(图中未示出),该发热人员追踪模块配置成:根据各个人员的目标温度和预设温度阈值,在目标可见光图像帧中确定发热人员,并获取发热人员的目标跟踪ID;从距离目标参考图像帧对预设时间内的其他参考图像帧对中,获取具有目标跟踪ID的多张待跟踪可见光图像帧;根据待跟踪可见光图像帧的拍摄时间和拍摄地点对发热人员进行追踪。
本实施例所提供的装置,其实现原理及产生的技术效果和前述方法实施例相同,为简 要描述,本实施例部分未提及之处,可参考前述中相应内容。
基于前述实施例,本实施例给出了一种温度修正系统,该系统包括:图像采集设备、处理器和存储设备;其中,图像采集设备配置成采集目标参考图像帧对;存储设备上存储有计算机程序,计算机程序在被处理器运行时执行本文所述的方法的步骤,特别是前述方法中所提供的任一项温度修正方法。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
可选地,本实施例还提供了一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,计算机程序被处理设备运行时执行本文所述的方法的步骤,特别是上述方法的步骤。
本公开实施例所提供的一种温度修正方法、装置及系统的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行前面方法实施例中所述的方法,具体实现可参见方法实施例,在此不再赘述。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。
工业实用性
本公开通过大气逆散射模型进行温度修正的方式,在能够保证较高温度检测效率的同时,还有效利用了深度与温度修正值的关系来修正人员的温度,从而得到更为准确的结果。 由于深度体现的是相机与人员的距离,从而该方式考虑到并修正了距离对温度的影响,从而能够在非接触式下测温时,有效提高人员温度检测的准确性。

Claims (18)

  1. 一种温度修正方法,包括:
    获取包含有人员的目标参考图像帧对;其中,所述目标参考图像帧对包括目标可见光图像帧和目标热成像图像帧;
    将所述目标可见光图像帧转换为深度图像帧;
    将所述深度图像帧和所述目标热成像图像帧输入大气逆散射模型;其中,所述大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;
    通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    根据跟踪算法确定所述目标可见光图像帧中人员的跟踪信息;其中,所述跟踪信息包括定位信息和跟踪ID;所述定位信息包括人脸和/或人体在所述目标可见光图像帧中的位置信息;所述跟踪ID用于标识所述目标可见光图像帧中不同的人员,且同一人员在不同可见光图像帧中的跟踪ID相同。
  3. 根据权利要求2所述的方法,其中,通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度的步骤,包括:
    通过所述大气逆散射模型根据目标人员的定位信息提取所述深度图像帧的像素深度值和所述目标热成像图像帧的像素温度值;其中,所述目标人员为基于所述跟踪ID确定的;
    根据所述深度与温度修正值的关系确定所述像素深度值对应的温度修正值;
    根据所述温度修正值对所述像素温度值进行修正,得到所述目标人员的目标温度。
  4. 根据权利要求2或3所述的方法,其中,通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度的步骤,包括:
    通过所述大气逆散射模型中的深度与温度修正值的关系,基于所述深度图像帧确定第一温度修正值;
    根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度;其中,所述预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素,所述目标人员为基于所述跟踪ID确定的。
  5. 根据权利要求4所述的方法,其中,根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度的步骤,包括:
    根据所述性别年龄修正因素和预设的性别年龄映射表确定目标人员的第二温度修正值;其中,所述性别年龄映射表中记录有不同性别和不同年龄区间分别对应的温度修正值;和/或,根据所述时间修正因素和预设的时间映射表确定所述目标人员的第三温度修正值;其中,所述时间映射表中记录有不同时间区间分别对应的温度修正值;
    根据预设的权重将所述第一温度修正值与所述第二温度修正值和/或所述第三温度修正值进行加权,得到目标温度修正值;
    根据所述目标温度修正值对所述目标热成像图像帧的定位框内的像素温度值进行修正,得到目标人员的目标温度;其中,所述定位框为基于所述目标人员的定位信息确定的。
  6. 根据权利要求4所述的方法,其中,根据所述第一温度修正值和预设的温度修正因素对所述目标热成像图像帧进行温度修正,得到目标人员的目标温度的步骤,包括:
    根据所述第一温度修正值对所述目标热成像图像帧的定位框内的像素温度值进行修正,得到所述目标热成像图像帧的第一修正热成像图像帧;其中,所述定位框为基于所述目标人员的定位信息确定的;
    根据所述性别年龄修正因素和/或所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正,得到所述目标人员的目标温度。
  7. 根据权利要求6所述的方法,其中,根据所述性别年龄修正因素和/或所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正,得到所述目标人员的目标温度的步骤,包括:
    根据所述性别年龄修正因素对所述第一修正热成像图像帧的所述定位框内分布的像素温度值进行修正,得到第二修正热成像图像帧;
    根据所述时间修正因素对所述第二修正热成像图像帧的所述定位框内分布的像素温度值进行修正,得到第三修正热成像图像帧;
    根据所述第三修正热成像图像帧的所述定位框内分布的修正后像素温度值确定目标人员的目标温度。
  8. 根据权利要求6所述的方法,其中,根据所述性别年龄修正因素和/或所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正,得到所述目标人员的目标温度的步骤,包括:
    根据所述时间修正因素对所述第一修正热成像图像帧的所述定位框内分布的像素温度值进行修正,得到第二修正热成像图像帧;
    根据所述性别年龄修正因素对所述第二修正热成像图像帧的所述定位框内分布的 像素温度值进行修正,得到第三修正热成像图像帧;
    根据所述第三修正热成像图像帧的所述定位框内分布的修正后像素温度值确定目标人员的目标温度。
  9. 根据权利要求6-8中任一项所述的方法,其中,根据所述性别年龄修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正的步骤,包括:
    提取所述目标可见光图像帧中的所述目标人员的人脸特征;
    根据所述人脸特征识别所述目标人员的性别年龄信息;
    根据所述性别年龄信息以及预设的性别年龄映射表,对所述第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,所述性别年龄映射表中记录有不同性别和不同年龄区间分别对应的温度修正值。
  10. 根据权利要求6-9中任一项所述的方法,其中,根据所述时间修正因素,对所述第一修正热成像图像帧的所述定位框内的像素温度值进行二次修正的步骤,包括:
    获取所述目标参考图像帧对的拍摄时间;
    根据所述拍摄时间和预设的时间映射表,对所述第一修正热成像图像帧的定位框内分布的像素温度值进行二次修正;其中,所述时间映射表中记录有不同时间区间分别对应的温度修正值。
  11. 根据权利要求1或2所述的方法,其中,所述方法还包括:
    根据所述目标可见光图像帧确定人员与所述目标参考图像帧对的拍摄设备之间的拍摄距离;
    基于所述拍摄距离和预设的温度修正因素对所述目标热成像图像帧进行温度修正;其中,所述预设的温度修正因素包括:性别年龄修正因素和/或时间修正因素。
  12. 根据权利要求11所述的方法,其中,根据所述目标可见光图像帧确定人员与所述目标参考图像帧对的拍摄设备之间的拍摄距离的步骤,包括:
    根据所述目标可见光图像帧中目标人员的人脸对应的位置信息,确定所述目标人员的人脸区域在所述目标可见光图像帧中的像素占比值;其中,所述目标人员为基于所述跟踪ID确定的;
    根据所述像素占比值确定所述目标人员与所述目标参考图像帧对的拍摄设备之间的拍摄距离。
  13. 根据权利要求1-12中任一项所述的方法,其中,获取包含有人员的目标参考图像帧对的步骤,包括:
    通过双光相机对指定区域进行图像采集,得到多对原始参考图像帧对;其中,所 述双光相机包含可见光相机和红外相机,所述可见光相机采集的原始可见光图像帧与所述红外相机采集的原始热成像图像帧互相对应,形成所述原始参考图像帧对;
    通过对所述原始可见光图像帧进行人脸检测,在所述原始可见光图像帧中确定包含有人员的目标可见光图像帧;
    将所述目标可见光图像帧对应的原始参考图像帧对确定为包含有人员的目标参考图像帧对。
  14. 根据权利要求1-13中任一项所述的方法,其中,将所述目标可见光图像帧转换为深度图像帧的步骤,包括:
    将所述目标可见光图像帧输入至预设的深度图转换模型,通过所述深度图转换模型将所述目标可见光图像帧转换为深度图像帧。
  15. 根据权利要求2所述的方法,其中,所述方法还包括:
    根据各个人员的目标温度和预设温度阈值,在所述目标可见光图像帧中确定发热人员,并获取所述发热人员的目标跟踪ID;
    从距离所述目标参考图像帧对预设时间内的其他参考图像帧对中,获取具有所述目标跟踪ID的多张待跟踪可见光图像帧;
    根据所述待跟踪可见光图像帧的拍摄时间和拍摄地点对所述发热人员进行追踪。
  16. 一种温度修正装置,包括:
    图像获取模块,配置成获取包含有人员的目标参考图像帧对;其中,所述目标参考图像帧对包括目标可见光图像帧和目标热成像图像帧;
    图像转换模块,配置成将所述目标可见光图像帧转换为深度图像帧;
    图像输入模块,配置成将所述深度图像帧和所述目标热成像图像帧输入大气逆散射模型;其中,所述大气逆散射模型为预先对深度与温度修正值的关系进行拟合得到的神经网络模型;
    温度修正模块,配置成通过所述大气逆散射模型基于所述深度图像帧对所述目标热成像图像帧进行温度修正,得到目标温度。
  17. 一种温度修正系统,其中,所述系统包括:图像采集装置、处理器和存储装置;
    所述图像采集装置,配置成采集目标参考图像帧对;
    所述存储装置上存储有计算机程序,所述计算机程序在被所述处理器运行时执行如权利要求1至15任一项所述的方法。
  18. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,其中,所述计算机程序被处理器运行时执行上述权利要求1至15任一项所述的方法的步骤。
PCT/CN2020/119468 2020-04-07 2020-09-30 温度修正方法、装置及系统 WO2021203644A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/795,176 US20230162397A1 (en) 2020-04-07 2020-09-30 Temperature correction method, device and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010267395.7A CN111426393B (zh) 2020-04-07 2020-04-07 温度修正方法、装置及系统
CN202010267395.7 2020-04-07

Publications (1)

Publication Number Publication Date
WO2021203644A1 true WO2021203644A1 (zh) 2021-10-14

Family

ID=71555781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119468 WO2021203644A1 (zh) 2020-04-07 2020-09-30 温度修正方法、装置及系统

Country Status (3)

Country Link
US (1) US20230162397A1 (zh)
CN (1) CN111426393B (zh)
WO (1) WO2021203644A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111426393B (zh) * 2020-04-07 2021-11-16 北京迈格威科技有限公司 温度修正方法、装置及系统
CN112163519A (zh) * 2020-09-28 2021-01-01 浙江大华技术股份有限公司 图像映射处理方法、装置、存储介质及电子装置
CN112315432B (zh) * 2020-09-29 2022-12-27 北京化工大学 信息监测方法、信息监测装置及计算机可读存储介质
CN112556857B (zh) * 2020-11-26 2022-08-16 浙江大华技术股份有限公司 对象温度的确定方法及装置、存储介质、电子装置
CN112729565B (zh) * 2020-12-25 2022-06-03 快优智能技术有限公司 一种通过红外透镜增加距离的测温方法
CN115115653A (zh) * 2022-06-13 2022-09-27 广东众志检测仪器有限公司 一种冷热冲击试验箱精细化温度校准方法
CN117152397B (zh) * 2023-10-26 2024-01-26 慧医谷中医药科技(天津)股份有限公司 一种基于热成像投影的三维人脸成像方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104568157A (zh) * 2014-12-25 2015-04-29 北京农业信息技术研究中心 一种提高热红外成像测温精度的装置及方法
CN105182983A (zh) * 2015-10-22 2015-12-23 深圳创想未来机器人有限公司 基于移动机器人的人脸实时跟踪方法和跟踪系统
CN106934894A (zh) * 2017-03-14 2017-07-07 深圳万智联合科技有限公司 一种安全可靠的门禁系统
CN107657635A (zh) * 2017-10-17 2018-02-02 深圳奥比中光科技有限公司 深度相机温度误差校正方法及系统
US20190379879A1 (en) * 2015-09-25 2019-12-12 Intel Corporation Online compensation of thermal distortions in a stereo depth camera
CN111426393A (zh) * 2020-04-07 2020-07-17 北京迈格威科技有限公司 温度修正方法、装置及系统

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021548A (zh) * 2014-05-16 2014-09-03 中国科学院西安光学精密机械研究所 一种获取场景4d信息的方法
CN106611430A (zh) * 2015-10-15 2017-05-03 杭州海康威视数字技术股份有限公司 一种rgb-d图像生成方法、装置及摄像机
CN109798981A (zh) * 2019-02-18 2019-05-24 浙江大华技术股份有限公司 温度确定方法、测温设备、存储介质


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114216569A (zh) * 2022-02-22 2022-03-22 深圳金三立视频科技股份有限公司 一种基于可信度评估的红外测温方法及终端
CN114216569B (zh) * 2022-02-22 2022-06-10 深圳金三立视频科技股份有限公司 一种基于可信度评估的红外测温方法及终端

Also Published As

Publication number Publication date
US20230162397A1 (en) 2023-05-25
CN111426393A (zh) 2020-07-17
CN111426393B (zh) 2021-11-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20929939; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20929939; Country of ref document: EP; Kind code of ref document: A1)