WO2022100668A1 - Temperature measurement method, device, system, storage medium and program product - Google Patents

Temperature measurement method, device, system, storage medium and program product

Info

Publication number
WO2022100668A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
visible light
thermal infrared
position information
Prior art date
Application number
PCT/CN2021/130101
Other languages
English (en)
French (fr)
Inventor
杨平
庞成山
谢迪
浦世亮
马东伟
Original Assignee
Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd.
Priority to EP21891188.1A (EP4242609A4)
Publication of WO2022100668A1

Classifications

    • G01J5/0022 — Radiation pyrometry for sensing the radiation of moving bodies
    • G01J5/0025 — Radiation pyrometry of living bodies
    • G01J5/0859 — Radiation pyrometry; optical arrangements; sighting arrangements, e.g. cameras
    • G06F18/253 — Pattern recognition; fusion techniques of extracted features
    • G06N3/045 — Neural network architectures; combinations of networks
    • G06N3/08 — Neural network learning methods
    • G06N3/09 — Supervised learning
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44 — Local feature extraction (edges, contours, corners, strokes); connectivity analysis
    • G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 — Image or video recognition using neural networks
    • G01J2005/0077 — Radiation pyrometry; imaging
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]

Definitions

  • The embodiments of the present application relate to the field of computer vision, and in particular to a temperature measurement method, device, system, storage medium, and program product.
  • Temperature measurement is often required in high-traffic public places such as stations, airports, hospitals, ports, and schools, and identifying people with fever is especially important during an epidemic.
  • Temperature measurement in public places generally adopts non-contact methods.
  • Commonly used systems include handheld temperature measurement guns and uncooled infrared temperature measurement systems. Since a handheld temperature measurement gun can only measure one person at a time, and each person often needs to be measured repeatedly, it is inefficient and less accurate; therefore, temperature measurement in public places is mostly completed by uncooled infrared temperature measurement systems.
  • In the related art, temperature measurement is usually based on the thermal infrared image captured by the thermal infrared camera of the uncooled infrared temperature measurement system: the target is detected in the thermal infrared image, and the temperature of the target is determined from that image.
  • However, because the original resolution of a thermal infrared image is low and texture details are easily lost, the target may become mixed with the background and other objects in the thermal infrared image.
  • In that case the target cannot be identified and its temperature cannot be determined, resulting in a failed temperature measurement.
  • the embodiments of the present application provide a temperature measurement method, device, system, storage medium and program product, which can accurately locate the position of the target in the thermal infrared image, and then determine the temperature of the target according to the position of the target, so as to improve the accuracy of temperature measurement.
  • the technical solution is as follows:
  • In one aspect, a temperature measurement method is provided, comprising:
  • acquiring a first visible light image and a first thermal infrared image, and performing target detection processing on the first visible light image to obtain first position information indicating the position of the target in the first visible light image;
  • performing image interception processing on the first visible light image and the first thermal infrared image based on the first position information to obtain a partial image;
  • inputting the first position information and the partial image into a disparity estimation network model to obtain a target binocular disparity, where the target binocular disparity is used to indicate the binocular disparity between a visible light image and a thermal infrared image containing the target object;
  • determining second position information according to the first position information and the target binocular disparity, where the second position information is used to indicate the position of the target in the first thermal infrared image; and
  • determining the temperature of the target object from the first thermal infrared image according to the second position information.
  • Optionally, performing image interception processing on the first visible light image and the first thermal infrared image based on the first position information to obtain the partial image includes:
  • performing position expansion processing on the first position information to obtain target position information;
  • intercepting the image corresponding to the target position information from the first visible light image to obtain a local visible light image, and intercepting the image corresponding to the target position information from the first thermal infrared image to obtain a local thermal infrared image;
  • the partial image includes the partial visible light image and the partial thermal infrared image.
  • the first position information includes a first detection frame
  • the target position information includes a target detection frame
  • the center of the first detection frame coincides with the center of the target detection frame
  • Optionally, the performing of position expansion processing on the first position information to obtain the target position information includes:
  • based on the center of the first detection frame, expanding the first detection frame outward by a target size to obtain the target detection frame.
  • the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer;
  • Optionally, the inputting of the first position information and the local image into the disparity estimation network model to obtain the target binocular disparity includes:
  • calling the feature extraction network layer to perform feature extraction processing on the first position information and the local image to obtain high-order features of the target;
  • calling the feature fusion network layer to perform feature fusion processing on the high-order features of the target to obtain the fusion features of the target; and
  • calling the disparity estimation network layer to perform disparity estimation processing on the fusion features of the target to obtain the target binocular disparity.
  • the local image includes a local visible light image and a local thermal infrared image
  • the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer;
  • Optionally, the calling of the feature extraction network layer to perform feature extraction processing on the first position information and the local image to obtain the high-order features of the target includes:
  • calling the first extraction sub-network layer to perform feature extraction processing on the first position information to obtain a first high-order feature;
  • calling the second extraction sub-network layer to perform feature extraction processing on the local visible light image to obtain a second high-order feature; and
  • calling the third extraction sub-network layer to perform feature extraction processing on the local thermal infrared image to obtain a third high-order feature;
  • the high-order features of the target include the first high-order feature, the second high-order feature, and the third high-order feature.
  • the training process of the disparity estimation network model is as follows:
  • acquiring at least one training sample, each of which includes a reference visible light image, a reference thermal infrared image, reference position information, and a reference binocular disparity, where the reference position information is used to indicate the position of the target in the reference visible light image and the reference binocular disparity is the binocular disparity between the reference visible light image and the reference thermal infrared image;
  • calling the disparity estimation network model to process the reference visible light image, the reference thermal infrared image, and the reference position information to obtain an estimated binocular disparity;
  • calculating a predicted loss value of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity; and
  • adjusting the parameters of the disparity estimation network model according to the predicted loss value.
  • the method further includes:
  • determining the distance between the target object and the heterogeneous binocular camera according to the target binocular disparity, where the heterogeneous binocular camera includes a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images;
  • acquiring a temperature attenuation value corresponding to the distance; and
  • correcting the temperature according to the temperature attenuation value to obtain the corrected temperature of the target object.
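  • As an illustration of this distance-based correction, the following is a minimal Python sketch assuming the standard stereo relation distance = focal_length × baseline / disparity and an illustrative attenuation lookup table; the table values, focal length, and baseline are hypothetical and not taken from the patent.

```python
import bisect

# Hypothetical attenuation table: the measured temperature drops as the
# subject's distance grows (values are illustrative only).
DISTANCES_M = [1.0, 2.0, 3.0, 4.0, 5.0]
ATTENUATION_C = [0.0, 0.2, 0.5, 0.9, 1.4]

def distance_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: distance = f * B / d (assumed model)."""
    return focal_px * baseline_m / disparity_px

def corrected_temperature(measured_c, distance_m):
    """Look up the attenuation for the nearest tabulated distance and add it back."""
    i = min(bisect.bisect_left(DISTANCES_M, distance_m), len(DISTANCES_M) - 1)
    return measured_c + ATTENUATION_C[i]

# Example: a target measured at 35.8 C whose disparity implies ~3 m distance.
d = distance_from_disparity(disparity_px=40.0, focal_px=800.0, baseline_m=0.15)
print(round(d, 2), corrected_temperature(35.8, d))   # 3.0 36.3
```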
  • Optionally, the acquiring of the first visible light image and the first thermal infrared image includes: acquiring a second visible light image and a second thermal infrared image captured by the heterogeneous binocular camera; scaling them to obtain a processed visible light image and a processed thermal infrared image; acquiring the correction parameters corresponding to the heterogeneous binocular camera; and performing binocular correction processing on the processed images according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
  • Optionally, the obtaining of the correction parameters corresponding to the heterogeneous binocular camera includes: acquiring a calibrated visible light image and a calibrated thermal infrared image, obtained by photographing the target temperature calibration plate with the heterogeneous binocular camera; scaling them; and performing binocular correction processing on the scaled images to obtain the correction parameters.
  • determining the second position information according to the first position information and the target binocular disparity includes:
  • the difference between the first position information and the target binocular disparity is used as the second position information.
  • a temperature measurement device comprising:
  • a first acquisition module configured to acquire a first visible light image and a first thermal infrared image
  • a target detection module configured to perform target detection processing on the first visible light image to obtain first position information, where the first position information is used to indicate the position of the target in the first visible light image;
  • an image interception module configured to perform image interception processing on the first visible light image and the first thermal infrared image based on the first position information to obtain a partial image
  • the disparity determination module is used to input the first position information and the local image into the disparity estimation network model to obtain the target binocular disparity, and the target binocular disparity is used to indicate the visible light image and thermal image containing the target object binocular disparity between infrared images;
  • a target determination module configured to determine second position information according to the first position information and the target binocular disparity, where the second position information is used to indicate the position of the target in the first thermal infrared image Location;
  • a temperature determination module configured to determine the temperature of the target object from the first thermal infrared image according to the second position information.
  • the image interception module includes:
  • a location expansion submodule for performing location expansion processing on the first location information to obtain target location information
  • an interception sub-module configured to intercept the image corresponding to the target position information from the first visible light image to obtain a local visible light image, and to intercept the image corresponding to the target position information from the first thermal infrared image to obtain a local thermal infrared image;
  • the partial image includes the partial visible light image and the partial thermal infrared image.
  • the first position information includes a first detection frame
  • the target position information includes a target detection frame
  • the center of the first detection frame coincides with the center of the target detection frame
  • the location expansion submodule is specifically used for:
  • based on the center of the first detection frame, expand the first detection frame outward by a target size to obtain the target detection frame.
  • the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer;
  • the disparity determination module includes:
  • a feature extraction sub-module configured to call the feature extraction network layer, perform feature extraction processing on the first position information and the local image, and obtain high-level features of the target;
  • a feature fusion sub-module used for calling the feature fusion network layer to perform feature fusion processing on the high-order features of the target, to obtain the fusion features of the target;
  • the disparity estimation sub-module is used for invoking the disparity estimation network layer to perform disparity estimation processing on the fusion feature of the target object to obtain the target binocular disparity.
  • the local image includes a local visible light image and a local thermal infrared image
  • the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer;
  • the feature extraction submodule includes:
  • a first extraction subunit configured to call the first extraction sub-network layer, perform feature extraction processing on the first position information, and obtain a first high-order feature
  • the second extraction subunit is used to call the second extraction sub-network layer to perform feature extraction processing on the local visible light image to obtain second high-order features;
  • the third extraction subunit is used to call the third extraction sub-network layer, and perform feature extraction processing on the local thermal infrared image to obtain third high-order features;
  • the high-order feature of the target includes the first high-order feature, the second high-order feature, and the third high-order feature.
  • the device further includes:
  • the first acquisition module is further configured to acquire at least one training sample, each of which includes a reference visible light image, a reference thermal infrared image, reference position information, and a reference binocular disparity, where the reference position information is used to indicate the position of the target in the reference visible light image and the reference binocular disparity is the binocular disparity between the reference visible light image and the reference thermal infrared image;
  • the disparity determination module is further configured to call the disparity estimation network model, and process the reference visible light image, the reference thermal infrared image and the reference position information to obtain estimated binocular disparity;
  • a training module configured to calculate the predicted loss value of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity;
  • An adjustment module configured to adjust the parameters of the disparity estimation network model according to the predicted loss value.
  • the device further includes:
  • a distance determination module configured to determine the distance between the target object and the heterogeneous binocular camera according to the target binocular disparity, where the heterogeneous binocular camera includes a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images;
  • a second acquisition module configured to acquire a temperature attenuation value corresponding to the distance
  • a temperature correction module configured to perform correction processing on the temperature according to the temperature decay value to obtain the corrected temperature of the target object.
  • the first acquisition module includes:
  • a first acquisition submodule configured to acquire a second visible light image and a second thermal infrared image, the second visible light image and the second thermal infrared image are captured by a heterologous binocular camera;
  • an image adjustment sub-module for scaling the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image
  • the second acquisition sub-module is used to acquire the correction parameters corresponding to the heterologous binocular camera
  • the correction submodule is configured to perform binocular correction processing on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
  • the second acquisition submodule includes:
  • a first acquisition subunit configured to acquire a calibrated visible light image and a calibrated thermal infrared image, the calibrated visible light image and the calibrated thermal infrared image being obtained by photographing the target temperature calibration plate with the heterogeneous binocular camera;
  • an image adjustment subunit configured to perform scaling processing on the calibrated visible light image and the calibrated thermal infrared image to obtain a processed calibrated visible light image and a processed calibrated thermal infrared image
  • the correction subunit is configured to perform binocular correction processing on the processed calibrated visible light image and the processed calibrated thermal infrared image to obtain the correction parameters.
  • the target determination module is used for:
  • the difference between the first position information and the target binocular disparity is used as the second position information.
  • a temperature measurement system comprising a heterologous binocular camera and a computer device;
  • the heterologous binocular camera includes a visible light camera and a thermal infrared camera
  • the visible light camera used for capturing a first visible light image
  • the thermal infrared camera used for capturing a first thermal infrared image
  • the computer device includes a processor for:
  • perform target detection processing on the first visible light image to obtain first position information, and perform image interception processing on the first visible light image and the first thermal infrared image based on the first position information to obtain a partial image;
  • input the first position information and the partial image into a disparity estimation network model to obtain a target binocular disparity, where the target binocular disparity is used to indicate the binocular disparity between a visible light image and a thermal infrared image containing the target object;
  • determine second position information according to the first position information and the target binocular disparity; and
  • determine the temperature of the target object from the first thermal infrared image according to the second position information.
  • the computer device further includes a display
  • the display is used to display the temperature of the target.
  • a computer-readable storage medium is provided, and instructions are stored on the computer-readable storage medium, and when the instructions are executed by a processor, the temperature measurement method described in the above aspects is implemented.
  • A computer program product is provided which, when executed, implements the temperature measurement method described in the above aspects.
  • In the embodiments of the present application, the position of the target in the visible light image and the binocular disparity between the visible light image and the thermal infrared image are used to determine the position of the target in the thermal infrared image, and the temperature of the target is then determined from the thermal infrared image according to that position.
  • Visible light images have high resolution and rich texture detail, so determining the position of the target directly from a thermal infrared image may be inaccurate by comparison.
  • Instead, the position of the target in the thermal infrared image is determined indirectly from the visible light image, fusing more texture detail into the position determination, so that the position of the target in the thermal infrared image is determined accurately and effectively, which helps to determine the temperature of the target at its precise position.
  • the binocular disparity between the visible light image and the thermal infrared image is determined according to the disparity estimation network model.
  • The binocular disparity can be obtained simply by inputting the partial visible light image and partial thermal infrared image containing the target into the disparity estimation network model, which avoids a series of trivial and complicated mathematical operations and provides a simple and efficient way to determine binocular disparity.
  • the disparity estimation network model improves the efficiency of determining the binocular disparity, and the position of the target is determined according to the binocular disparity, which also improves the efficiency of determining the location of the target and further improves the temperature measurement efficiency of the target.
  • The input of the disparity estimation network model includes a partial image intercepted according to the position of the target object in the visible light image. Since the complete captured image includes not only the target but also interference information such as the background and other objects, inputting the partial image related to the target, rather than the complete captured image, removes the interference information, helps to obtain a more accurate binocular disparity, and further improves the efficiency of temperature measurement.
  • FIG. 1 is a schematic diagram of a temperature measurement system provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a heterologous binocular camera provided by an embodiment of the present application
  • FIG. 3 is a flowchart of a method for binocular correction provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of binocular calibration provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a temperature measurement method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of partial image interception provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a disparity estimation network model provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a temperature measurement device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • Heterogeneous binocular camera: compared with a normal binocular camera, the two cameras of a heterogeneous binocular camera differ in at least one hardware design specification, such as target light wavelength, focal length, resolution, pixel size, sensor, or lens.
  • Thermal infrared: the name of a spectral band. Infrared wavelengths lie between 0.76 and 1000 microns, of which 0.76-3.0 microns are reflected infrared and 3-18 microns are emitted (thermal) infrared.
  • the thermal infrared camera mentioned in the embodiments of the present application refers to a thermal imaging camera type photographing device capable of measuring temperature.
  • Thermal infrared cameras work by using optoelectronic devices to detect and measure radiation and to establish a correlation between radiation and surface temperature. All objects above absolute zero (-273°C) emit infrared radiation. A thermal infrared camera uses an infrared detector and an optical imaging objective lens to receive the infrared radiation of the measured target and map its radiation energy distribution onto the photosensitive element of the infrared detector, producing a thermal infrared image that corresponds to the thermal distribution field on the surface of the object. That is, a thermal infrared camera converts the invisible infrared energy emitted by an object into a visible thermal infrared image, in which different colors represent different temperatures of the measured object. By viewing the thermal infrared image, the overall temperature distribution of the measured object can be observed and the heating of the target studied, informing subsequent action.
  • FIG. 1 is a schematic diagram of a temperature measurement system provided by an embodiment of the present application.
  • the temperature measurement system 100 includes a heterologous binocular camera 101 and a computer device 102 .
  • the hetero-binocular camera 101 applied in the temperature measurement system 100 may adopt a combination of a visible light camera and a thermal infrared camera.
  • FIG. 2 is a schematic structural diagram of a heterologous binocular camera provided by an embodiment of the present application.
  • the hetero-binocular camera 101 includes a visible light camera 1011 and a thermal infrared camera 1012.
  • the visible light camera 1011 is used for shooting high-definition visible light images
  • the thermal infrared camera 1012 is used for shooting thermal infrared images reflecting temperature information.
  • the visible light image is used for object detection, and the thermal infrared image is used to determine the temperature of the object.
  • the computer device 102 includes a processor and a display, wherein the processor is used to determine the temperature of the object based on the visible light image and the thermal infrared image, and the display is used to display the temperature.
  • the computer device 102 may include an image acquisition module 1021 , an image correction module 1022 , a target detection module 1023 , a disparity estimation network model 1024 and a temperature determination module 1025 .
  • the image correction module 1022 is an optional module.
  • Optionally, the computer device further includes an image interception module.
  • the image acquisition module 1021 is configured to acquire the visible light image and the thermal infrared image captured by the heterologous binocular camera 101 . After the image acquisition module 1021 acquires the visible light image and the thermal infrared image, the visible light image and the thermal infrared image are sent to the image correction module 1022 .
  • The image correction module 1022 performs binocular correction processing on the visible light image and the thermal infrared image, so that corresponding pixel points in the processed visible light image and thermal infrared image are located on the same horizontal line; that is, corresponding pixels in the visible light image and the thermal infrared image differ only in the abscissa.
  • the target detection module 1023 is used to detect the target object from the visible light image, and determine the position of the target object in the visible light image. For example, the image correction module 1022 sends the processed visible light image to the target detection module 1023, so that the target detection module 1023 detects the target in the visible light image.
  • the target detection module 1023 inputs the position information of the target, the visible light image containing the target, and the thermal infrared image captured at the same time into the disparity estimation network model 1024 .
  • Alternatively, the target detection module 1023 sends the position information of the target, the visible light image containing the target, and the thermal infrared image captured at the same time to the image interception module.
  • the image interception module cuts the visible light image and the thermal infrared image based on the position information of the target to obtain a partial image.
  • the image interception module inputs the position information of the target object and the cropped partial image into the disparity estimation network model 1024 .
  • If the target object is not detected in the visible light image, the visible light image and the corresponding thermal infrared image are not processed, and target detection continues in the next frame of the visible light image.
  • the disparity estimation network model 1024 determines the binocular disparity between the visible light image and the thermal infrared image through a deep learning algorithm according to the input position information of the target object, the visible light image and the thermal infrared image. Alternatively, the disparity estimation network model 1024 determines the binocular disparity between the visible light image and the thermal infrared image based on the input position information of the target object and the local image. The disparity estimation network model 1024 is also used to send the determined binocular disparity to the temperature determination module 1025 .
  • the temperature determination module 1025 determines the precise position of the target in the thermal infrared image according to the position of the target in the visible light image and the binocular parallax, so as to determine the temperature of the target in the thermal infrared image according to the position.
  • the above-mentioned heterologous binocular camera 101 and the computer device 102 may be two independent devices.
  • the above-mentioned heterologous binocular camera 101 and the computer device 102 can also be assembled into a temperature measurement device, and the temperature measurement device can perform temperature measurement as a whole.
  • the computer device 102 may not include the image acquisition module 1021, and the hetero-binocular camera 101 directly sends the captured visible light image and thermal infrared image to the image Correction module 1022.
  • the above-mentioned computer device 102 includes, but is not limited to, the functional modules listed above, and may also include other functional modules to achieve more accurate temperature measurement.
  • the above functional modules may work independently, or may be assembled into fewer modules to complete temperature measurement, which is not limited in this embodiment of the present application, and is only explained with FIG. 1 as an example.
  • the pixels in the visible light image and the thermal infrared image are not necessarily located on the same horizontal line.
  • Therefore, the visible light camera and the thermal infrared camera need to be calibrated to determine the calibration parameters between them; the calibration parameters include camera intrinsic parameters, extrinsic parameters, and distortion coefficients.
  • The visible light image and the thermal infrared image can then be subjected to binocular correction processing, that is, the thermal infrared image is adjusted by rotation and translation so that the feature points in the visible light image and the thermal infrared image match and corresponding pixels differ only in the abscissa.
  • The visible light camera and the thermal infrared camera may be calibrated using the Zhang Zhengyou calibration method to determine the calibration parameters between them.
  • The Zhang Zhengyou calibration method refers to the single-plane checkerboard camera calibration method proposed by Professor Zhang Zhengyou in 1998. The method sits between the traditional calibration method and the self-calibration method: it overcomes the need for the high-precision calibration object required by traditional calibration, requiring only a printed checkerboard, while improving accuracy and ease of operation compared with self-calibration.
  • The realization process of calibrating the binocular camera by the Zhang Zhengyou calibration method is as follows (a code sketch follows below): print a checkerboard calibration board and photograph it with the binocular camera from different angles to obtain left-eye and right-eye images; detect the feature points in the images; solve the intrinsic and extrinsic parameters of the binocular camera under ideal, distortion-free conditions from the coordinates of corresponding feature points in the left-eye and right-eye images, using maximum likelihood estimation to improve accuracy; obtain the actual distortion coefficients by the least squares method; and finally jointly optimize the intrinsic parameters, extrinsic parameters, and distortion coefficients with maximum likelihood estimation to complete the calibration, determining the calibration parameters of the binocular camera, namely its intrinsic parameters, extrinsic parameters, and distortion coefficients.
  • When the binocular camera is a heterogeneous binocular camera including a visible light camera and a thermal infrared camera, the left-eye image may be the image captured by the visible light camera and the right-eye image the image captured by the thermal infrared camera, or the left-eye image may be the image captured by the thermal infrared camera and the right-eye image the image captured by the visible light camera, which is not limited in this embodiment of the present application.
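  • The following is a minimal OpenCV sketch of checkerboard-based stereo calibration in the spirit of the Zhang Zhengyou method described above; the board dimensions, square size, and image file names are placeholder assumptions.

```python
import cv2
import numpy as np

PATTERN = (9, 6)         # inner corners per row/column (assumed)
SQUARE_SIZE = 0.025      # metres per checkerboard square (assumed)

# 3D corner coordinates of the board in its own plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_pts, vis_pts, ir_pts = [], [], []
for vis_path, ir_path in [("vis_00.png", "ir_00.png")]:   # placeholder files
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE)
    ok_v, corners_v = cv2.findChessboardCorners(vis, PATTERN)
    ok_i, corners_i = cv2.findChessboardCorners(ir, PATTERN)
    if ok_v and ok_i:
        obj_pts.append(objp); vis_pts.append(corners_v); ir_pts.append(corners_i)

# Per-camera intrinsics and distortion, then the stereo extrinsics (R, T).
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, vis_pts, vis.shape[::-1], None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, ir_pts, ir.shape[::-1], None, None)
_, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, vis_pts, ir_pts, K1, D1, K2, D2, vis.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```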
  • FIG. 3 is a flowchart of a method for binocular correction provided by an embodiment of the present application. The method is applied to the computer device 102 in the above temperature measurement system 100. Referring to FIG. 3, the method includes:
  • Step 301 Acquire a calibrated visible light image and a calibrated thermal infrared image.
  • the calibrated visible light image and the calibrated thermal infrared image are obtained by photographing the target temperature calibration plate by the calibrated heterogeneous binocular camera.
  • the target temperature calibration plate is a checkerboard calibration plate as shown in FIG. 4 , and each grid is attached with a heat source to provide different temperature information.
  • the heterogeneous binocular camera shoots the target temperature calibration plate, and obtains multiple sets of calibrated visible light images and calibrated thermal infrared images.
  • Step 302 scaling the calibrated visible light image and the calibrated thermal infrared image to obtain a processed calibrated visible light image and a processed calibrated thermal infrared image.
  • The calibrated visible light image and the calibrated thermal infrared image may differ in size, so they need to be scaled so that the processed calibrated visible light image and the processed calibrated thermal infrared image have the same field of view and resolution.
  • Enlarging an image is also called upsampling or image interpolation; its main purpose is to enlarge the original image so that it can be displayed on a higher-resolution display device.
  • Reducing an image is also called subsampling or downsampling; its main purpose is to generate a thumbnail of the original image or to make the original image fit the size of the display area.
  • the calibrated thermal infrared image can be amplified to obtain the processed calibrated thermal infrared image.
  • the display size of the calibrated visible light image can also be reduced by downsampling according to the resolution of the calibrated thermal infrared image. Therefore, the calibrated visible light image can also be reduced to obtain a processed calibrated visible light image.
  • scaling processing can also be performed on both the calibrated visible light image and the calibrated thermal infrared image, so that the processed calibrated visible light image and the processed calibrated thermal infrared image have the same resolution. This embodiment of the present application does not limit this.
  • For example, the calibrated thermal infrared image can be upsampled and enlarged so that the resolution of the processed calibrated thermal infrared image is 640*480 and its display size is the same as that of the calibrated visible light image.
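  • A minimal sketch of this scaling step with OpenCV, assuming placeholder file paths and that the visible image is the higher-resolution one:

```python
import cv2

vis = cv2.imread("calib_visible.png")    # placeholder path
ir = cv2.imread("calib_thermal.png")     # placeholder path

# Upsample the thermal image to the visible image's resolution (e.g. 640*480).
h, w = vis.shape[:2]
ir_up = cv2.resize(ir, (w, h), interpolation=cv2.INTER_LINEAR)

# Alternatively, downsample the visible image to the thermal resolution.
vis_down = cv2.resize(vis, (ir.shape[1], ir.shape[0]),
                      interpolation=cv2.INTER_AREA)
```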
  • Step 303 Perform binocular correction processing on the processed calibrated visible light image and the processed calibrated thermal infrared image to obtain correction parameters.
  • A camera lens does not achieve ideal perspective imaging and introduces varying degrees of distortion.
  • the distortion of the lens includes radial distortion and tangential distortion, but the tangential distortion has less influence, and usually only radial distortion is considered.
  • radial distortion is mainly caused by the radial curvature of the lens (light is more curved farther from the center of the lens than near the center), causing the true imaging point to deviate inward or outward from the ideal imaging point.
  • Distortion in which the image point is offset radially outward relative to the ideal image point is called pincushion distortion; distortion in which the image point moves radially inward toward the center relative to the ideal image point is called barrel distortion.
  • FIG. 4 is only an example of barrel distortion for illustration, and is not intended to be limiting.
  • For example, a feature point is determined in the processed calibrated visible light image; the feature point can be any corner point of the black-and-white checkerboard, such as point A1 shown in the figure. According to the position coordinates of corresponding points, the intrinsic and extrinsic parameters between the visible light camera and the thermal infrared camera are determined under ideal, distortion-free conditions. Then, according to these intrinsic and extrinsic parameters, the actual radial distortion coefficients are solved by the least squares method; for the specific calibration process, reference may be made to relevant documents in the prior art, which will not be repeated in this embodiment of the present application. Distortion removal processing is then performed so that there is no radial distortion in the processed calibrated visible light image and the processed calibrated thermal infrared image.
  • Further, based on the distortion-free processed calibrated visible light image and processed calibrated thermal infrared image, a series of feature point matches makes it possible to determine the mathematical transformation that the coordinates of the feature points in the thermal infrared image must undergo to align with the visible light image.
  • the correction parameters include distortion coefficients, translation vectors and rotation matrices.
  • the positions of the feature points corresponding to each other in the visible light image and the thermal infrared image can be adjusted on the same horizontal line, and there is only a difference in the abscissa between the corresponding feature points.
  • point A1 in the visible light image and feature point A2 in the thermal infrared image are located on the same horizontal line
  • point B1 in the visible light image and point B2 in the thermal infrared image are located on the same horizontal line.
  • Optionally, image cropping, that is, image interception processing, can be performed on the calibrated visible light image and the calibrated thermal infrared image to obtain partial images containing the target object (the target temperature calibration plate in FIG. 4), so that the binocular disparity between the calibrated visible light image and the calibrated thermal infrared image can be determined from the horizontal offset of the target object's pixels in the two images.
  • the correction parameters between the visible light image and the thermal infrared image are obtained.
  • According to the correction parameters, the calibrated thermal infrared image can be translated and rotated so that corresponding feature points in the calibrated visible light image and the calibrated thermal infrared image are located on the same horizontal line.
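  • Continuing the calibration and scaling sketches above (K1, D1, K2, D2, R, T from the calibration sketch; vis and ir_up from the scaling sketch), the binocular correction step can be sketched with OpenCV's rectification routines; the common resolution is an assumption.

```python
import cv2

size = (640, 480)   # assumed common resolution after scaling
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)

# Per-camera undistortion + rectification maps, then warp both images so
# that corresponding feature points share the same image row.
map_vx, map_vy = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map_ix, map_iy = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
vis_rect = cv2.remap(vis, map_vx, map_vy, cv2.INTER_LINEAR)
ir_rect = cv2.remap(ir_up, map_ix, map_iy, cv2.INTER_LINEAR)
```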
  • FIG. 5 is a flowchart of a temperature measurement method provided by an embodiment of the present application, and the method is used in the computer device 102 in the above temperature measurement system 100 .
  • the method includes:
  • Step 501 Acquire a first visible light image and a first thermal infrared image.
  • the computer device acquires the second visible light image and the second thermal infrared image, and the second visible light image and the second thermal infrared image are captured by a heterologous binocular camera.
  • The second visible light image and the second thermal infrared image are scaled to obtain a processed visible light image and a processed thermal infrared image.
  • The computer device obtains the correction parameters corresponding to the heterogeneous binocular camera and performs binocular correction processing on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
  • Optionally, after scaling, distortion elimination, and feature point matching and alignment, the processed visible light image and thermal infrared image can also be cropped to obtain the first visible light image and the first thermal infrared image.
  • The cropping process, for example, crops off part of the edges to remove redundant border regions.
  • Since the first visible light image and the first thermal infrared image have been processed by binocular correction, a pixel in the first visible light image and the corresponding pixel in the first thermal infrared image differ only by a horizontal offset, and this horizontal offset is the binocular disparity between the first visible light image and the first thermal infrared image.
  • Step 502 Perform target detection processing on the first visible light image to obtain first position information.
  • the first position information is used to indicate the position of the target in the first visible light image.
  • the first location information may include the following two possible representation modes, and may also be considered as two possible display modes.
  • In the first mode, the same rectangular coordinate system is established in the first visible light image and the first thermal infrared image, so that the coordinates of each feature point of the target can be determined in that coordinate system.
  • The first position information of the target in the first visible light image can then be determined according to the coordinates of the feature points.
  • For example, the computer device may establish the same rectangular coordinate system in the first visible light image and the first thermal infrared image, and in step 502 determine the coordinates of each feature point of the target in the rectangular coordinate system to obtain the first position information.
  • In the second mode, the target area is directly framed in the first visible light image, so that the position of the target can be determined in the first visible light image.
  • Although some background area may be framed as well, the frame covers the entire area of the target with no missing information; that is, the framed area fully includes the target, so its contour information is more complete.
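  • As a stand-in for whatever detector the system actually uses, step 502 can be sketched with OpenCV's bundled Haar cascade face detector, applied to the rectified visible image from the sketch above:

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(vis_rect, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each (x, y, w, h) tuple is a first detection frame: the first position
# information of one target in the first visible light image.
for (x, y, w, h) in faces:
    print("first detection frame:", x, y, w, h)
```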
  • Step 503 Based on the first position information, perform image interception processing on the first visible light image and the first thermal infrared image to obtain a partial image.
  • a local visible light image corresponding to the first position information can be intercepted from the first visible light image.
  • a local thermal infrared image corresponding to the first position information is intercepted from the first thermal infrared image.
  • the subsequent computer equipment inputs the first position information, the local visible light image and the local thermal infrared image into the disparity estimation network model through step 504 .
  • The first position information can be subjected to expansion processing, so that the intercepted partial image contains all the information of the target.
  • image interception processing is performed on the first visible light image and the first thermal infrared image to obtain a partial image.
  • The implementation process is: performing position expansion processing on the first position information to obtain target position information; intercepting the image corresponding to the target position information from the first visible light image to obtain a local visible light image; and intercepting the image corresponding to the target position information from the first thermal infrared image to obtain a local thermal infrared image.
  • FIG. 6 is a schematic diagram of a partial image interception provided by an embodiment of the present application.
  • the target is a human face
  • the position of the face detected in the first visible light image by the face detection algorithm is the first position information
  • The first position information may include at least one of face frame position information, face key point position information, and face orientation information.
  • Position expansion processing is performed on the first position information to obtain target position information, where the first position information includes the first detection frame shown in FIG. 6, the target position information includes the target detection frame shown in FIG. 6, and the centers of the first detection frame and the target detection frame coincide.
  • The implementation process of performing position expansion processing on the first position information to obtain the target position information is as follows: based on the center of the first detection frame, the first detection frame is expanded outward on all sides by the target size to obtain the expanded target detection frame.
  • the first detection frame is a rectangular area as shown in FIG. 6 and the preset target size is 2 cm, then according to the target size, the four sides of the first detection frame are expanded outward by 2 cm to obtain the target detection frame .
  • an image containing a human face is intercepted from the first visible light image to obtain a partial visible light image, and an image containing a human face is intercepted from the first thermal infrared image to obtain a partial thermal infrared image.
  • the position coordinates of the first detection frame, the captured local visible light image and the local thermal infrared image can be input into the disparity estimation network model, and the following step 504 can be performed.
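  • A minimal sketch of this expansion and interception, using a detection frame (x, y, w, h) from the detection sketch above; the pixel margin stands in for the "target size" and is an assumption.

```python
def expand_and_crop(box, vis, ir, margin=20):
    """Expand a detection frame (x, y, w, h) outward by `margin` pixels on
    each side, keeping the same centre, then crop the corresponding region
    from both rectified images."""
    x, y, w, h = box
    H, W = vis.shape[:2]
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1, y1 = min(x + w + margin, W), min(y + h + margin, H)
    target_box = (x0, y0, x1 - x0, y1 - y0)   # the target detection frame
    return target_box, vis[y0:y1, x0:x1], ir[y0:y1, x0:x1]

target_box, local_vis, local_ir = expand_and_crop((x, y, w, h), vis_rect, ir_rect)
```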
  • Optionally, the first position information may also be represented in the form of a picture.
  • In that case, a pure black image the same size as the region intercepted based on the expanded target detection frame shown in FIG. 6 can be created, and the position of the detected first detection frame can be drawn in the pure black image to obtain the first target object image shown in FIG. 6.
  • the acquired face key point information can also be drawn in the first detection frame in the pure black image to obtain the second target image shown in FIG. 6 .
  • the target image may also include other information reflecting the position of the face in the first visible light image, which is not limited in this embodiment of the present application.
  • the first target object image, the second target object image, the captured local visible light image and the captured local thermal infrared image have the same resolution.
  • the first object image, the local visible light image and the local thermal infrared image can be input into the disparity estimation network model, and the following step 504 is performed.
  • the second object image, the local visible light image and the local thermal infrared image can be input into the disparity estimation network model, and the following step 504 is performed.
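  • A sketch of this picture-form position information: a pure black canvas the size of the cropped region, with the first detection frame drawn in for the first target object image and, for the second target object image, key points as well; the key point list is a placeholder.

```python
import cv2
import numpy as np

tx, ty, tw, th = target_box            # from the cropping sketch above
canvas = np.zeros((th, tw), np.uint8)  # pure black image, same size as crop

# First target object image: draw the first detection frame, shifted into
# the crop's coordinate system.
cv2.rectangle(canvas, (x - tx, y - ty), (x - tx + w, y - ty + h), 255, 2)

# Second target object image: additionally draw face key points, here
# assumed to be (px, py) pixel coordinates from some landmark detector.
for (px, py) in [(0, 0)]:              # placeholder key points
    cv2.circle(canvas, (px - tx, py - ty), 2, 255, -1)
```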
  • When the first visible light image contains multiple target objects, image interception processing is performed sequentially based on the position information of each target object, and each partial image containing the same target object, together with the first position information of that object, is input into the disparity estimation network model.
  • For example, the first visible light image includes three persons A, B, and C.
  • Based on the first position information of person A, a partial visible light image is intercepted from the first visible light image and a partial thermal infrared image is intercepted from the first thermal infrared image.
  • The first position information of person A and the two partial images are used as the first input group.
  • The same processing operation is performed for person B and person C in the first visible light image: the second position information of person B and its two partial images are used as the second input group, and the third position information of person C and its two partial images are used as the third input group.
  • The first input group, the second input group, and the third input group are sequentially input into the disparity estimation network model, and the following step 504 is executed.
  • The above-mentioned first position information is used to indicate the position of person A in the first visible light image;
  • the second position information is used to indicate the position of person B in the first visible light image;
  • the third position information is used to indicate the position of person C in the first visible light image.
  • the image clipping process is separately performed for each object.
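  • A short sketch of this grouping, building one input group per detected person with the helpers from the sketches above:

```python
# One input group per person: that person's position information plus the
# two partial images cropped around it.
input_groups = []
for box in faces:                       # (x, y, w, h) per detected person
    t_box, lv, li = expand_and_crop(box, vis_rect, ir_rect)
    input_groups.append((box, lv, li))

# The groups are then fed to the disparity estimation network model in turn,
# e.g. disparity = model(position_info, local_vis, local_ir) for each group.
```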
  • Step 504 Input the first position information and the local image into the disparity estimation network model to obtain the target binocular disparity, and the target binocular disparity is used to indicate the binocular disparity between the visible light image containing the target object and the thermal infrared image.
  • the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer.
  • In a possible implementation, the implementation process of step 504 is as follows: the computer device invokes the feature extraction network layer to perform feature extraction processing on the first position information and the partial image to obtain high-order features of the target; invokes the feature fusion network layer to perform feature fusion processing on the high-order features of the target to obtain the fusion features of the target; and invokes the disparity estimation network layer to perform disparity estimation processing on the fusion features of the target to obtain the target binocular disparity.
  • The feature extraction network layer is used to perform at least one convolution process on the input first position information and partial image to extract the high-order features of the target; the embodiment of the present application does not limit the number of convolution layers or convolution kernels.
  • the feature extraction network layer may include a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer.
  • the first position information is used to indicate the position of the target in the first visible light image.
  • FIG. 7 is a schematic diagram of a disparity estimation network model provided by an embodiment of the present application.
  • The implementation process of calling the feature extraction network layer to perform feature extraction processing on the first position information and the partial image to obtain the high-order features of the target may be as follows: the first extraction sub-network layer is called to perform feature extraction processing on the first position information to obtain the first high-order feature;
  • the second extraction sub-network layer is called to perform feature extraction processing on the local visible light image to obtain the second high-order feature.
  • the third extraction sub-network layer is called to perform feature extraction processing on the local thermal infrared image to obtain the third high-order feature.
  • the high-order features of the target include a first high-order feature, a second high-order feature, and a third high-order feature.
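  • The following is a minimal PyTorch sketch of the three-branch architecture described above; channel counts, layer depths, and the scalar disparity head are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class DisparityEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        def branch(in_ch):  # one extraction sub-network layer
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.extract_pos = branch(1)  # first: position information image
        self.extract_vis = branch(3)  # second: local visible light image
        self.extract_ir = branch(1)   # third: local thermal infrared image
        self.fuse = nn.Sequential(nn.Conv2d(96, 64, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, 1))  # disparity estimate

    def forward(self, pos, vis, ir):
        feats = torch.cat([self.extract_pos(pos), self.extract_vis(vis),
                           self.extract_ir(ir)], dim=1)  # feature fusion
        return self.head(self.fuse(feats))               # disparity estimation

model = DisparityEstimator()
disparity = model(torch.zeros(1, 1, 64, 64), torch.zeros(1, 3, 64, 64),
                  torch.zeros(1, 1, 64, 64))
```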
  • Before using the above-mentioned disparity estimation network model to determine the target binocular disparity between the first visible light image and the first thermal infrared image, the disparity estimation network model must be trained. When the error of the estimated binocular disparity is within the allowable range, training of the disparity estimation network model is complete.
  • The training process of the disparity estimation network model is as follows: at least one training sample is obtained, and each training sample includes a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity. The reference position information is used to indicate the position of the target in the reference visible light image, and the reference binocular disparity is the binocular disparity between the reference visible light image and the reference thermal infrared image.
  • The disparity estimation network model is invoked to process the reference visible light image, the reference thermal infrared image and the reference position information, obtaining an estimated binocular disparity. The prediction loss of the disparity estimation network model is then calculated according to the estimated binocular disparity and the reference binocular disparity, and the parameters of the disparity estimation network model are adjusted according to the prediction loss.
  • It should be noted that the targets in the reference visible light image and the reference thermal infrared image belong to the same type of object as the targets in the first visible light image and the first thermal infrared image, for example faces, human bodies or vehicles. The embodiment of the present application does not limit whether the target in each image is the same individual object; that is, the targets in different images may be different objects or the same object.
  • In addition, the reference visible light image and the reference thermal infrared image included in each training sample are images obtained by simultaneously capturing one target with a heterogeneous binocular camera.
  • As an example, the reference binocular disparity may be obtained with a device such as a lidar or a depth camera, or determined between the reference visible light image and the reference thermal infrared image by manual annotation; the embodiment of the present application places no restrictions on this. An illustrative training-loop sketch follows.
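  • Purely as an illustration, a training loop of the kind described above might look as follows, reusing the DisparityEstimationNet sketch from earlier; the L1 loss and the optimizer settings are assumptions, since the patent does not specify the loss function.

```python
import torch
import torch.nn as nn

model = DisparityEstimationNet()            # sketch defined earlier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                       # assumed prediction loss

def train_step(pos, vis, ir, ref_disparity):
    """One update: estimate the binocular disparity, compare it with the
    reference binocular disparity, and adjust the model parameters."""
    optimizer.zero_grad()
    est_disparity = model(pos, vis, ir)     # estimated binocular disparity
    loss = loss_fn(est_disparity, ref_disparity)
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic shapes for illustration: a batch of 4 samples of 64x64 crops.
pos = torch.rand(4, 1, 64, 64)              # reference position information
vis = torch.rand(4, 3, 64, 64)              # reference visible light crops
ir = torch.rand(4, 1, 64, 64)               # reference thermal infrared crops
ref = torch.rand(4, 1) * 20.0               # reference disparities (pixels)
print(train_step(pos, vis, ir, ref))
```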
  • Step 505: Determine second position information according to the first position information and the target binocular disparity, where the second position information is used to indicate the position of the target in the first thermal infrared image.
  • In one possible implementation, the computer device uses the difference between the first position information and the target binocular disparity as the second position information.
  • The target binocular disparity may be a disparity value between the first visible light image containing the target and the first thermal infrared image; this disparity value reflects the overall offset of the target in the first thermal infrared image relative to the target in the first visible light image.
  • When the target binocular disparity is expressed as a disparity value, step 505 is implemented as follows: for each pixel indicated by the first position information, the difference between its abscissa in the first thermal infrared image and the disparity value is used as the second position information.
  • The target binocular disparity may also be a disparity map between the first visible light image containing the target and the first thermal infrared image, in which the gray value of each pixel represents the difference between the abscissas of the corresponding pixels in the first visible light image and the first thermal infrared image.
  • When the target binocular disparity is expressed as a disparity map, step 505 is implemented as follows: for each pixel indicated by the first position information, the difference between its abscissa in the first thermal infrared image and the gray value of the pixel at the corresponding position in the disparity map is used as the second position information. A small numerical sketch follows.
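  • Purely as an illustration, the following sketch computes the second position information from a detection box and either form of the target binocular disparity; the box coordinates and disparity values are made-up numbers.

```python
import numpy as np

# First position information: target box in the first visible light image,
# given as (x_left, y_top, x_right, y_bottom) pixel coordinates (made up).
first_box = np.array([200, 120, 260, 200])

# Case 1: the disparity is a single value for the whole target.
disparity_value = 18.0
second_box = first_box.astype(float)
second_box[[0, 2]] -= disparity_value   # only abscissas shift; rows are rectified
print("second position (value form):", second_box)

# Case 2: the disparity is a map; each pixel's gray value is the abscissa
# difference between corresponding pixels of the two images.
disparity_map = np.full((480, 640), 18.0)
x_coords = np.arange(first_box[0], first_box[2])      # abscissas inside the box
row = (first_box[1] + first_box[3]) // 2              # one row, for brevity
shifted_x = x_coords - disparity_map[row, x_coords]   # per-pixel abscissa shift
print("shifted abscissas (map form):", shifted_x[:5])
```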
  • Step 506: Determine the temperature of the target from the first thermal infrared image according to the second position information.
  • Once the second position information has been determined in the first thermal infrared image, the computer device can determine the temperature of the target according to the color presented by the region that the second position information indicates.
  • It should be noted that because the first thermal infrared image is captured remotely by the heterogeneous binocular camera, rather than at close range as with a handheld thermometer gun, the temperature the computer device determines from the displayed color of the target's region in the first thermal infrared image deviates from the true value. That is, the measured temperature attenuates with the measurement distance: the farther the target is from the heterogeneous binocular camera, the greater the deviation between the temperature determined from the first thermal infrared image and the actual temperature of the target; the closer the target is, the closer the determined temperature is to the actual temperature. Therefore, after determining the temperature of the target from the first thermal infrared image, the computer device also needs to correct that temperature according to the distance, and use the corrected temperature as the actual temperature of the target.
  • In one possible implementation, the computer device corrects the temperature of the target as follows: determine the distance between the target and the heterogeneous binocular camera according to the target binocular disparity, obtain the temperature attenuation value corresponding to that distance, and correct the temperature according to the temperature attenuation value to obtain the corrected temperature of the target.
  • The relation between distance and disparity is Z = f × b / d, where Z is the distance between the target and the heterogeneous binocular camera, f is the focal length of the heterogeneous binocular camera, b is the distance between the optical centers of the visible light camera and the thermal infrared camera in the heterogeneous binocular camera (also called the baseline), and d is the target binocular disparity between the first visible light image and the first thermal infrared image.
  • That is, once the target binocular disparity d between the first visible light image and the first thermal infrared image has been determined, the distance between the target and the heterogeneous binocular camera can be determined from this relation between distance and disparity.
  • There is a mapping between distance and temperature attenuation value. After determining the distance between the target and the heterogeneous binocular camera, the computer device can determine the temperature attenuation value corresponding to that distance from this mapping.
  • Optionally, the computer device adds the temperature attenuation value corresponding to the distance to the temperature of the target determined from the first thermal infrared image, to obtain the corrected temperature of the target.
  • For example, if the temperature of the target determined from the first thermal infrared image is 35°, the distance between the target and the heterogeneous binocular camera is 30 meters, and the temperature attenuation value corresponding to this distance is 2°, then correcting the temperature by this attenuation value gives a corrected target temperature of 37°.
  • As another example, if the temperature of the target determined from the first thermal infrared image is 37.5° and the distance between the target and the heterogeneous binocular camera is 5 meters, with a corresponding temperature attenuation value of 0.5°, then correcting the temperature by this attenuation value gives a corrected target temperature of 38°. A sketch of this correction is given below.
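  • The following sketch ties the pieces together: distance from disparity via Z = f × b / d, then a distance-to-attenuation lookup. The focal length, baseline and the attenuation table entries (taken from the two worked examples above) are illustrative assumptions.

```python
def distance_from_disparity(f_pixels: float, baseline_m: float, d_pixels: float) -> float:
    """Z = f * b / d: target-to-camera distance from the target binocular disparity."""
    return f_pixels * baseline_m / d_pixels

def attenuation_for_distance(z_m: float) -> float:
    """Distance -> temperature attenuation value. The table entries are
    illustrative, matching the document's two worked examples."""
    table = [(5.0, 0.5), (30.0, 2.0)]   # (distance in meters, attenuation in degrees)
    # Pick the entry for the nearest tabulated distance (assumed lookup rule).
    return min(table, key=lambda e: abs(e[0] - z_m))[1]

def corrected_temperature(measured: float, z_m: float) -> float:
    """Add the distance-dependent attenuation back onto the measured value."""
    return measured + attenuation_for_distance(z_m)

# Example: measured 35 degrees at 30 m -> corrected 37 degrees.
z = distance_from_disparity(f_pixels=1000.0, baseline_m=0.54, d_pixels=18.0)  # 30 m
print(z, corrected_temperature(35.0, z))
```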
  • To sum up, the position of the target in the thermal infrared image is determined from the position of the target in the visible light image and the binocular disparity between the visible light image and the thermal infrared image, and the temperature of the target is then determined from the thermal infrared image according to the target's position in the thermal infrared image.
  • Compared with thermal infrared images, visible light images have high resolution and rich texture detail; determining the position of the target directly from the thermal infrared image may therefore yield an inaccurate position.
  • By contrast, the position of the target in the thermal infrared image is here determined indirectly from the visible light image, and more texture detail is fused during the position determination, so the target's position in the thermal infrared image is determined accurately and effectively, which helps determine the temperature of the target from its precise position.
  • Moreover, the binocular disparity between the visible light image and the thermal infrared image is determined with the disparity estimation network model. Once the model is trained, the binocular disparity is obtained simply by inputting the partial visible light image and partial thermal infrared image containing the target into the model, which avoids a series of trivial and complicated mathematical operations and provides a simple, efficient way to determine binocular disparity.
  • The disparity estimation network model improves the efficiency of determining the binocular disparity, and since the position of the target is determined from that disparity, the efficiency of locating the target improves as well, further improving the efficiency of measuring the target's temperature.
  • In addition, the input of the disparity estimation network model includes partial images cropped according to the position of the target in the visible light image. A complete captured image contains not only the target but also interference information such as the background and other objects; compared with feeding the complete captured image into the disparity estimation network model, the embodiment of the present application inputs only the target-related partial images, which removes the interference information in the captured image, helps obtain a more accurate binocular disparity, and fully improves the efficiency of temperature measurement.
  • The apparatus 800 includes: a first acquisition module 801, a target detection module 802, an image cropping module 803, a disparity determination module 804, a target determination module 805 and a temperature determination module 806.
  • a first acquisition module 801, configured to acquire a first visible light image and a first thermal infrared image;
  • a target detection module 802, configured to perform target detection on the first visible light image to obtain first position information, where the first position information is used to indicate the position of the target in the first visible light image;
  • an image cropping module 803, configured to perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
  • a disparity determination module 804, configured to input the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, where the target binocular disparity is used to indicate the binocular disparity between the visible light image containing the target and the thermal infrared image;
  • a target determination module 805, configured to determine second position information according to the first position information and the target binocular disparity, where the second position information is used to indicate the position of the target in the first thermal infrared image;
  • a temperature determination module 806, configured to determine the temperature of the target from the first thermal infrared image according to the second position information.
  • Optionally, the image cropping module 803 includes:
  • a position expansion submodule, configured to perform position expansion on the first position information to obtain target position information;
  • a cropping submodule, configured to crop the image corresponding to the target position information from the first visible light image to obtain a partial visible light image, and to crop the image corresponding to the target position information from the first thermal infrared image to obtain a partial thermal infrared image;
  • wherein the partial images include the partial visible light image and the partial thermal infrared image.
  • Optionally, the first position information includes a first detection frame, the target position information includes a target detection frame, and the center of the first detection frame coincides with the center of the target detection frame;
  • the position expansion submodule is specifically configured to:
  • based on the center of the first detection frame, expand the first detection frame outward on all sides by a target size to obtain the target detection frame. A sketch of this expansion follows.
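  • As a small illustration, the center-preserving expansion described above might be implemented as follows; representing the frame as pixel coordinates and the target size as a pixel margin are assumptions.

```python
def expand_detection_frame(frame, target_size):
    """Expand (x_left, y_top, x_right, y_bottom) outward on all sides by
    target_size pixels; the expanded frame keeps the same center."""
    x0, y0, x1, y1 = frame
    return (x0 - target_size, y0 - target_size,
            x1 + target_size, y1 + target_size)

# Example: a 60x80 face frame expanded by a 10-pixel margin on each side.
print(expand_detection_frame((200, 120, 260, 200), 10))
```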
  • the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer;
  • the disparity determination module 804 includes:
  • a feature extraction submodule, configured to invoke the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain the high-order features of the target;
  • a feature fusion submodule, configured to invoke the feature fusion network layer to perform feature fusion on the high-order features of the target to obtain the fused feature of the target;
  • a disparity estimation submodule, configured to invoke the disparity estimation network layer to perform disparity estimation on the fused feature of the target to obtain the target binocular disparity.
  • the local image includes a local visible light image and a local thermal infrared image
  • the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer;
  • The feature extraction submodule includes:
  • the first extraction subunit is used to call the first extraction sub-network layer to perform feature extraction processing on the first position information to obtain the first high-order feature;
  • the second extraction subunit is used to call the second extraction sub-network layer to perform feature extraction processing on the local visible light image to obtain the second high-order feature;
  • the third extraction sub-unit is used to call the third extraction sub-network layer to perform feature extraction processing on the local thermal infrared image to obtain the third high-order feature;
  • the high-order features of the target include a first high-order feature, a second high-order feature, and a third high-order feature.
  • apparatus 800 further includes:
  • the first acquisition module, further configured to acquire at least one training sample, where each training sample includes a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity, and the reference position information is used to indicate the position of the target in the reference visible light image,
  • and the reference binocular disparity is the binocular disparity between the reference visible light image and the reference thermal infrared image;
  • the disparity determination module, further configured to invoke the disparity estimation network model to process the reference visible light image, the reference thermal infrared image and the reference position information to obtain an estimated binocular disparity;
  • the training module is used to calculate the prediction loss value of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity;
  • the adjustment module is used to adjust the parameters of the disparity estimation network model according to the predicted loss value.
  • apparatus 800 further includes:
  • a distance determination module, configured to determine the distance between the target and the heterogeneous binocular camera according to the target binocular disparity,
  • where the heterogeneous binocular camera includes a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images;
  • a second acquisition module, configured to acquire the temperature attenuation value corresponding to the distance;
  • a temperature correction module, configured to correct the temperature according to the temperature attenuation value to obtain the corrected temperature of the target.
  • Optionally, the first acquisition module 801 includes:
  • a first acquisition submodule, configured to acquire a second visible light image and a second thermal infrared image, where the second visible light image and the second thermal infrared image are captured by a heterogeneous binocular camera;
  • an image adjustment submodule, configured to scale the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image;
  • a second acquisition submodule, configured to acquire the correction parameters corresponding to the heterogeneous binocular camera;
  • a correction submodule, configured to perform binocular correction on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
  • Optionally, the second acquisition submodule includes:
  • a first acquisition subunit, configured to acquire a calibration visible light image and a calibration thermal infrared image, where the calibration visible light image and the calibration thermal infrared image are obtained by the heterogeneous binocular camera photographing a target temperature calibration board;
  • an image adjustment subunit, configured to scale the calibration visible light image and the calibration thermal infrared image to obtain a processed calibration visible light image and a processed calibration thermal infrared image;
  • a correction subunit, configured to perform binocular correction on the processed calibration visible light image and the processed calibration thermal infrared image to obtain the correction parameters (see the pipeline sketch after this list).
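  • For illustration, a binocular correction pipeline of the kind these submodules describe could be sketched with OpenCV as follows; all calibration numbers are placeholders, and the stereoRectify/initUndistortRectifyMap/remap primitives are one plausible choice rather than the patent's prescribed implementation.

```python
import cv2
import numpy as np

def correct_pair(vis_img, ir_img, calib):
    """Scale the thermal image up to the visible image's size, then apply
    binocular correction so matching points share the same horizontal line."""
    h, w = vis_img.shape[:2]
    ir_up = cv2.resize(ir_img, (w, h))  # scaling step (e.g. 320x240 -> 640x480)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        calib["K_vis"], calib["dist_vis"], calib["K_ir"], calib["dist_ir"],
        (w, h), calib["R"], calib["T"])
    m1, m2 = cv2.initUndistortRectifyMap(
        calib["K_vis"], calib["dist_vis"], R1, P1, (w, h), cv2.CV_32FC1)
    m3, m4 = cv2.initUndistortRectifyMap(
        calib["K_ir"], calib["dist_ir"], R2, P2, (w, h), cv2.CV_32FC1)
    return cv2.remap(vis_img, m1, m2, cv2.INTER_LINEAR), \
           cv2.remap(ir_up, m3, m4, cv2.INTER_LINEAR)

# Placeholder calibration: simple intrinsics, small horizontal baseline.
calib = {
    "K_vis": np.array([[1000, 0, 320], [0, 1000, 240], [0, 0, 1]], float),
    "K_ir":  np.array([[1000, 0, 320], [0, 1000, 240], [0, 0, 1]], float),
    "dist_vis": np.zeros(5), "dist_ir": np.zeros(5),
    "R": np.eye(3), "T": np.array([0.05, 0.0, 0.0]),
}
vis = np.zeros((480, 640, 3), np.uint8)
ir = np.zeros((240, 320), np.uint8)
rect_vis, rect_ir = correct_pair(vis, ir, calib)
```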
  • the target determination module 805 is used for:
  • the difference between the first position information and the target binocular disparity is used as the second position information.
  • To sum up, the position of the target in the thermal infrared image is determined from the position of the target in the visible light image and the binocular disparity between the visible light image and the thermal infrared image, and the temperature of the target is then determined from the thermal infrared image according to the target's position in the thermal infrared image.
  • Compared with thermal infrared images, visible light images have high resolution and rich texture detail; determining the position of the target directly from the thermal infrared image may therefore yield an inaccurate position.
  • By contrast, the position of the target in the thermal infrared image is here determined indirectly from the visible light image, and more texture detail is fused in the process of position determination, so the target's position in the thermal infrared image is determined accurately and effectively, which helps determine the temperature of the target from its precise position.
  • Moreover, the binocular disparity between the visible light image and the thermal infrared image is determined with the disparity estimation network model. Once the model is trained, the binocular disparity is obtained simply by inputting the partial visible light image and partial thermal infrared image containing the target into the model, which avoids a series of trivial and complicated mathematical operations and provides a simple, efficient way to determine binocular disparity.
  • The disparity estimation network model improves the efficiency of determining the binocular disparity, and since the position of the target is determined from that disparity, the efficiency of locating the target improves as well, further improving the efficiency of measuring the target's temperature.
  • In addition, the input of the disparity estimation network model includes partial images cropped according to the position of the target in the visible light image. A complete captured image contains not only the target but also interference information such as the background and other objects; compared with feeding the complete captured image into the disparity estimation network model, the embodiment of the present application inputs only the target-related partial images, which removes the interference information in the captured image, helps obtain a more accurate binocular disparity, and fully improves the efficiency of temperature measurement.
  • It should be noted that when the temperature measurement apparatus provided by the above embodiment determines the temperature of a target from thermal infrared images, the division into the functional modules described above is only an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the temperature measurement apparatus provided by the above embodiment and the temperature measurement method embodiments belong to the same concept; its specific implementation process is detailed in the method embodiments and is not repeated here.
  • FIG. 9 is a structural block diagram of a computer device provided by an embodiment of the present application. The computer device 900 may be used to implement the temperature measurement method described above. Specifically:
  • The computer device 900 includes a processing unit 901 (such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an FPGA (Field Programmable Gate Array)), a system memory 904 including a RAM (Random-Access Memory) 902 and a ROM (Read-Only Memory) 903, and a system bus 905 connecting the system memory 904 and the central processing unit 901.
  • The computer device 900 also includes a basic input/output system (I/O system) 906 that helps transfer information between the devices within the computer device, and a mass storage device 907 for storing an operating system 913, application programs 914 and other program modules 915.
  • the basic input/output system 906 includes a display 908 for displaying information and input devices 909 such as a mouse, keyboard, etc., for user input of information.
  • the display 908 and the input device 909 are both connected to the central processing unit 901 through the input and output controller 910 connected to the system bus 905 .
  • the basic input/output system 906 may also include an input output controller 910 for receiving and processing input from a number of other devices such as a keyboard, mouse, or electronic stylus.
  • input output controller 910 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 907 is connected to the central processing unit 901 through a mass storage controller (not shown) connected to the system bus 905 .
  • the mass storage device 907 and its associated computer-readable media provide non-volatile storage for the computer device 900 . That is, the mass storage device 907 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
  • Computer-readable media can include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technologies, CD-ROM, DVD (Digital Video Disc) or other optical storage, cassettes, magnetic tape, disk storage or other magnetic storage devices. Of course, those skilled in the art will know that computer storage media are not limited to the above.
  • The system memory 904 and the mass storage device 907 described above may be collectively referred to as memory.
  • According to an embodiment of the present application, the computer device 900 may also run by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 900 may connect to the network 912 through a network interface unit 911 connected to the system bus 905, or the network interface unit 911 may be used to connect to other types of networks or remote computer systems (not shown).
  • The memory also includes at least one instruction, at least one program, a code set or an instruction set, which is stored in the memory and configured to be executed by one or more processors to implement the above temperature measurement method.
  • Those skilled in the art will understand that the structure shown in FIG. 9 does not constitute a limitation on the computer device 900, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • In an exemplary embodiment, a computer-readable storage medium is also provided; instructions are stored on the computer-readable storage medium, and when executed by a processor the instructions implement the above temperature measurement method.
  • references herein to "a plurality” means two or more.
  • "And/or" which describes the association relationship of the associated objects, means that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, A and B exist at the same time, and B exists alone.
  • the character “/” generally indicates that the associated objects are an "or" relationship.


Abstract

A temperature measurement method, a temperature measurement apparatus (800), a temperature measurement system (100), a storage medium and a program product, belonging to the field of computer vision. The temperature measurement method includes: acquiring a first visible light image and a first thermal infrared image (501); performing target detection on the first visible light image to obtain first position information (502), the first position information being used to indicate the position of a target in the first visible light image; performing image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images (503); inputting the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity (504); determining second position information according to the first position information and the target binocular disparity (505); and determining the temperature of the target from the first thermal infrared image according to the second position information (506). In this way, the position of the target in the first thermal infrared image can be determined according to the target binocular disparity and the first position information, and the temperature of the target can then be determined.

Description

Temperature measurement method, apparatus, system, storage medium and program product
The embodiments of the present application claim priority to Chinese patent application No. 202011273221.8, filed on November 13, 2020 and entitled "Temperature measurement method, apparatus and system", the entire contents of which are incorporated by reference in the embodiments of the present application.
Technical Field
The embodiments of the present application relate to the field of computer vision, and in particular to a temperature measurement method, apparatus and system, a storage medium and a program product.
Background
Public places with heavy foot traffic, such as stations, airports, hospitals, ports and schools, often require temperature measurement to identify people with fever; this becomes even more important during an epidemic.
Temperature measurement in public places generally uses non-contact methods. Commonly used systems are handheld thermometer guns and uncooled infrared temperature measurement systems. A handheld thermometer gun can only measure one person at a time, and each person often needs to be measured repeatedly before a reading is obtained, so it is inefficient and of poor accuracy. Temperature measurement in public places is therefore mostly performed with uncooled infrared temperature measurement systems. In the related art, since different colors in a thermal infrared image represent different temperatures of the measured object, the overall temperature distribution of the measured object can be observed by viewing the thermal infrared image. Temperature measurement is therefore usually based on the thermal infrared image captured by the thermal infrared camera of an uncooled infrared system: a target is detected in the thermal infrared image, the position of the target is located, and the target's temperature is then determined from the thermal infrared image at that position.
However, when a thermal infrared image is used to locate the target, the low native resolution of the thermal infrared image and the easy loss of texture detail may cause the target to blend with the background and other objects, so that the target cannot be identified in the thermal infrared image, its temperature cannot be determined, and the temperature measurement fails.
Summary
The embodiments of the present application provide a temperature measurement method, apparatus and system, a storage medium and a program product, which can precisely locate a target in a thermal infrared image and then determine the target's temperature according to that position, improving the accuracy of temperature measurement. The technical solution is as follows:
In one aspect, a temperature measurement method is provided. The method includes:
acquiring a first visible light image and a first thermal infrared image;
performing target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
performing image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
inputting the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between a visible light image containing the target and a thermal infrared image;
determining second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image; and
determining the temperature of the target from the first thermal infrared image according to the second position information.
Optionally, performing image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain the partial images includes:
performing position expansion on the first position information to obtain target position information;
cropping the image corresponding to the target position information from the first visible light image to obtain a partial visible light image, and cropping the image corresponding to the target position information from the first thermal infrared image to obtain a partial thermal infrared image;
wherein the partial images include the partial visible light image and the partial thermal infrared image.
Optionally, the first position information includes a first detection frame, the target position information includes a target detection frame, and the center of the first detection frame coincides with the center of the target detection frame;
performing position expansion on the first position information to obtain the target position information includes:
based on the center of the first detection frame, expanding the first detection frame outward on all sides by a target size to obtain the target detection frame.
Optionally, the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer;
inputting the first position information and the partial images into the disparity estimation network model to obtain the target binocular disparity includes:
invoking the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain high-order features of the target;
invoking the feature fusion network layer to perform feature fusion on the high-order features of the target to obtain the fused feature of the target; and
invoking the disparity estimation network layer to perform disparity estimation on the fused feature of the target to obtain the target binocular disparity.
Optionally, the partial images include a partial visible light image and a partial thermal infrared image, and the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer;
invoking the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain the high-order features of the target includes:
invoking the first extraction sub-network layer to perform feature extraction on the first position information to obtain a first high-order feature;
invoking the second extraction sub-network layer to perform feature extraction on the partial visible light image to obtain a second high-order feature; and
invoking the third extraction sub-network layer to perform feature extraction on the partial thermal infrared image to obtain a third high-order feature;
wherein the high-order features of the target include the first high-order feature, the second high-order feature and the third high-order feature.
Optionally, the disparity estimation network model is trained as follows:
acquiring at least one training sample, each training sample including a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity, the reference position information being used to indicate the position of the target in the reference visible light image, and the reference binocular disparity being the binocular disparity between the reference visible light image and the reference thermal infrared image;
invoking the disparity estimation network model to process the reference visible light image, the reference thermal infrared image and the reference position information to obtain an estimated binocular disparity;
calculating a prediction loss of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity; and
adjusting the parameters of the disparity estimation network model according to the prediction loss.
Optionally, after determining the temperature of the target from the first thermal infrared image according to the second position information, the method further includes:
determining the distance between the target and a heterogeneous binocular camera according to the target binocular disparity, the heterogeneous binocular camera including a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images;
acquiring a temperature attenuation value corresponding to the distance; and
correcting the temperature according to the temperature attenuation value to obtain a corrected temperature of the target.
Optionally, acquiring the first visible light image and the first thermal infrared image includes:
acquiring a second visible light image and a second thermal infrared image, the second visible light image and the second thermal infrared image being captured by a heterogeneous binocular camera;
scaling the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image;
acquiring correction parameters corresponding to the heterogeneous binocular camera; and
performing binocular correction on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
Optionally, acquiring the correction parameters corresponding to the heterogeneous binocular camera includes:
acquiring a calibration visible light image and a calibration thermal infrared image, the calibration visible light image and the calibration thermal infrared image being obtained by the heterogeneous binocular camera photographing a target temperature calibration board;
scaling the calibration visible light image and the calibration thermal infrared image to obtain a processed calibration visible light image and a processed calibration thermal infrared image; and
performing binocular correction on the processed calibration visible light image and the processed calibration thermal infrared image to obtain the correction parameters.
Optionally, determining the second position information according to the first position information and the target binocular disparity includes:
using the difference between the first position information and the target binocular disparity as the second position information.
In another aspect, a temperature measurement apparatus is provided. The apparatus includes:
a first acquisition module, configured to acquire a first visible light image and a first thermal infrared image;
a target detection module, configured to perform target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
an image cropping module, configured to perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
a disparity determination module, configured to input the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between a visible light image containing the target and a thermal infrared image;
a target determination module, configured to determine second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image; and
a temperature determination module, configured to determine the temperature of the target from the first thermal infrared image according to the second position information.
Optionally, the image cropping module includes:
a position expansion submodule, configured to perform position expansion on the first position information to obtain target position information;
a cropping submodule, configured to crop the image corresponding to the target position information from the first visible light image to obtain a partial visible light image, and to crop the image corresponding to the target position information from the first thermal infrared image to obtain a partial thermal infrared image;
wherein the partial images include the partial visible light image and the partial thermal infrared image.
Optionally, the first position information includes a first detection frame, the target position information includes a target detection frame, and the center of the first detection frame coincides with the center of the target detection frame;
the position expansion submodule is specifically configured to:
based on the center of the first detection frame, expand the first detection frame outward on all sides by a target size to obtain the target detection frame.
Optionally, the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer;
the disparity determination module includes:
a feature extraction submodule, configured to invoke the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain high-order features of the target;
a feature fusion submodule, configured to invoke the feature fusion network layer to perform feature fusion on the high-order features of the target to obtain the fused feature of the target; and
a disparity estimation submodule, configured to invoke the disparity estimation network layer to perform disparity estimation on the fused feature of the target to obtain the target binocular disparity.
Optionally, the partial images include a partial visible light image and a partial thermal infrared image, and the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer;
the feature extraction submodule includes:
a first extraction subunit, configured to invoke the first extraction sub-network layer to perform feature extraction on the first position information to obtain a first high-order feature;
a second extraction subunit, configured to invoke the second extraction sub-network layer to perform feature extraction on the partial visible light image to obtain a second high-order feature; and
a third extraction subunit, configured to invoke the third extraction sub-network layer to perform feature extraction on the partial thermal infrared image to obtain a third high-order feature;
wherein the high-order features of the target include the first high-order feature, the second high-order feature and the third high-order feature.
Optionally, the apparatus further includes:
the first acquisition module, further configured to acquire at least one training sample, each training sample including a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity, the reference position information being used to indicate the position of the target in the reference visible light image, and the reference binocular disparity being the binocular disparity between the reference visible light image and the reference thermal infrared image;
the disparity determination module, further configured to invoke the disparity estimation network model to process the reference visible light image, the reference thermal infrared image and the reference position information to obtain an estimated binocular disparity;
a training module, configured to calculate a prediction loss of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity; and
an adjustment module, configured to adjust the parameters of the disparity estimation network model according to the prediction loss.
Optionally, the apparatus further includes:
a distance determination module, configured to determine the distance between the target and a heterogeneous binocular camera according to the target binocular disparity, the heterogeneous binocular camera including a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images;
a second acquisition module, configured to acquire a temperature attenuation value corresponding to the distance; and
a temperature correction module, configured to correct the temperature according to the temperature attenuation value to obtain a corrected temperature of the target.
Optionally, the first acquisition module includes:
a first acquisition submodule, configured to acquire a second visible light image and a second thermal infrared image, the second visible light image and the second thermal infrared image being captured by a heterogeneous binocular camera;
an image adjustment submodule, configured to scale the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image;
a second acquisition submodule, configured to acquire correction parameters corresponding to the heterogeneous binocular camera; and
a correction submodule, configured to perform binocular correction on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
Optionally, the second acquisition submodule includes:
a first acquisition subunit, configured to acquire a calibration visible light image and a calibration thermal infrared image, the calibration visible light image and the calibration thermal infrared image being obtained by the heterogeneous binocular camera photographing a target temperature calibration board;
an image adjustment subunit, configured to scale the calibration visible light image and the calibration thermal infrared image to obtain a processed calibration visible light image and a processed calibration thermal infrared image; and
a correction subunit, configured to perform binocular correction on the processed calibration visible light image and the processed calibration thermal infrared image to obtain the correction parameters.
Optionally, the target determination module is configured to:
use the difference between the first position information and the target binocular disparity as the second position information.
In another aspect, a temperature measurement system is provided. The temperature measurement system includes a heterogeneous binocular camera and a computer device;
the heterogeneous binocular camera includes a visible light camera and a thermal infrared camera;
the visible light camera is configured to capture a first visible light image;
the thermal infrared camera is configured to capture a first thermal infrared image;
the computer device includes a processor, and the processor is configured to:
perform target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
input the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between a visible light image containing the target and a thermal infrared image;
determine second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image; and
determine the temperature of the target from the first thermal infrared image according to the second position information.
Optionally, the computer device further includes a display;
the display is configured to display the temperature of the target.
In another aspect, a computer-readable storage medium is provided; instructions are stored on the computer-readable storage medium, and when executed by a processor the instructions implement the temperature measurement method described in the above aspect.
In another aspect, a computer program product is provided; when executed, the computer program product implements the temperature measurement method described in the above aspect.
The beneficial effects of the technical solutions provided by the embodiments of the present application include at least the following:
In the technical solution provided by the embodiments of the present application, the position of the target in the thermal infrared image is determined from the position of the target in the visible light image and the binocular disparity between the visible light image and the thermal infrared image, and the temperature of the target is then determined from the thermal infrared image according to the target's position in the thermal infrared image. Compared with thermal infrared images, visible light images have high resolution and rich texture detail; determining the target's position directly from the thermal infrared image may therefore yield an inaccurate position, whereas the embodiments of the present application determine the target's position in the thermal infrared image indirectly from the visible light image, fusing more texture detail into the position determination and locating the target accurately and effectively, which helps determine the temperature of the target from its precise position.
Moreover, in the technical solution provided by the embodiments of the present application, the binocular disparity between the visible light image and the thermal infrared image is determined with a disparity estimation network model. Once the model is trained, the binocular disparity is obtained simply by inputting the partial visible light image and partial thermal infrared image containing the target into the model, which avoids a series of trivial and complicated mathematical operations and provides a simple, efficient way to determine binocular disparity. The disparity estimation network model improves the efficiency of determining the binocular disparity, and since the target's position is determined from that disparity, the efficiency of locating the target improves as well, further improving the efficiency of measuring the target's temperature.
In addition, in the technical solution provided by the embodiments of the present application, the input of the disparity estimation network model includes partial images cropped according to the target's position in the visible light image. A complete captured image contains not only the target but also interference information such as the background and other objects; compared with feeding the complete captured image into the disparity estimation network model, the embodiments of the present application input only the target-related partial images, which removes the interference information in the captured image, helps obtain a more accurate binocular disparity, and fully improves the efficiency of temperature measurement.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a temperature measurement system provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a heterogeneous binocular camera provided by an embodiment of the present application;
FIG. 3 is a flowchart of a binocular correction method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of binocular calibration provided by an embodiment of the present application;
FIG. 5 is a flowchart of a temperature measurement method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of cropping a partial image provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a disparity estimation network model provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a temperature measurement apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present application are described in further detail below with reference to the drawings.
Before the embodiments of the present application are explained in detail, the terms involved and the application scenarios are described.
Heterogeneous binocular camera: compared with an ordinary binocular camera, the two cameras of a heterogeneous binocular camera differ in design parameters; for example, at least one hardware specification among target light wavelength, focal length, resolution, pixel size, sensor and lens is inconsistent.
Thermal infrared: the name of a spectral band. The infrared band covers wavelengths from 0.76 to 1000 microns, of which 0.76-3.0 microns is the reflected infrared band and 3-18 microns is the emitted infrared band. The thermal infrared camera mentioned in the embodiments of the present application refers to a thermal-imaging type of capture device capable of measuring temperature.
A thermal infrared camera works by using photoelectric devices to detect and measure radiation and to establish a correlation between radiation and surface temperature. All objects above absolute zero (-273°C) emit infrared radiation. A thermal infrared camera uses an infrared detector and an optical imaging objective to receive the infrared radiation of the measured target and map its radiated energy distribution onto the photosensitive elements of the infrared detector, thereby obtaining a thermal infrared image; this thermal infrared image corresponds to the heat distribution field on the object's surface. That is, a thermal infrared camera converts the invisible infrared energy emitted by an object into a visible thermal infrared image, in which different colors represent different temperatures of the measured object. By viewing the thermal infrared image, the overall temperature distribution of the measured object can be observed and the heating condition of the target studied, supporting the judgment of the next step of work.
Having introduced the relevant terms, the application scenario of the embodiments of the present application is described next.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a temperature measurement system provided by an embodiment of the present application. The temperature measurement system 100 includes a heterogeneous binocular camera 101 and a computer device 102.
Compared with the high-definition visible light image captured by a visible light camera, the grayscale image captured by a thermal infrared camera has lower resolution and poorer imaging quality, making it difficult to detect a target precisely and stably in the thermal infrared image. The heterogeneous binocular camera 101 used in the temperature measurement system 100 can therefore adopt a combination of a visible light camera and a thermal infrared camera.
As an example, see FIG. 2, which is a schematic structural diagram of a heterogeneous binocular camera provided by an embodiment of the present application. The heterogeneous binocular camera 101 includes a visible light camera 1011 and a thermal infrared camera 1012; the visible light camera 1011 captures high-definition visible light images, and the thermal infrared camera 1012 captures thermal infrared images reflecting temperature information. The visible light image is used for target detection, and the thermal infrared image is used to determine the temperature of the target.
The computer device 102 includes a processor and a display, where the processor is used to determine the target's temperature from the visible light image and the thermal infrared image, and the display is used to show the temperature.
As an example, as shown in FIG. 1, the computer device 102 may include an image acquisition module 1021, an image correction module 1022, a target detection module 1023, a disparity estimation network model 1024 and a temperature determination module 1025, of which the image correction module 1022 is optional. Optionally, the computer device also includes an image cropping module.
The image acquisition module 1021 acquires the visible light image and the thermal infrared image captured by the heterogeneous binocular camera 101 and, after acquiring them, sends them to the image correction module 1022. The image correction module 1022 performs binocular correction on the visible light image and the thermal infrared image so that corresponding pixels in the processed images lie on the same horizontal line; that is, a pixel in the visible light image and its corresponding pixel in the thermal infrared image differ only in abscissa.
The target detection module 1023 detects the target in the visible light image and determines its position there. For example, the image correction module 1022 sends the processed visible light image to the target detection module 1023, which detects the target in the visible light image. When a target is detected, the target detection module 1023 inputs the target's position information, the visible light image containing the target and the thermal infrared image captured at the same moment into the disparity estimation network model 1024. Alternatively, when a target is detected, the target detection module 1023 sends the target's position information, the visible light image containing the target and the simultaneously captured thermal infrared image to the image cropping module; the image cropping module crops the visible light image and the thermal infrared image based on the target's position information to obtain partial images, and inputs the target's position information and the cropped partial images into the disparity estimation network model 1024. When no target is detected in the visible light image, that visible light image and the corresponding thermal infrared image are left unprocessed, and target detection continues on the next visible light frame.
The disparity estimation network model 1024 determines the binocular disparity between the visible light image and the thermal infrared image through a deep learning algorithm, from the input position information of the target and the two images, or from the input position information of the target and the partial images. The disparity estimation network model 1024 is also used to send the determined binocular disparity to the temperature determination module 1025.
The temperature determination module 1025 determines the target's precise position in the thermal infrared image from the target's position in the visible light image and the binocular disparity, and then determines the target's temperature in the thermal infrared image according to that position.
It should be noted that the heterogeneous binocular camera 101 and the computer device 102 above may be two independent devices, or may be combined into one temperature measurement device that performs temperature measurement as a whole. Where the two form one temperature measurement device, the computer device 102 may omit the image acquisition module 1021, and the heterogeneous binocular camera 101 sends the captured visible light and thermal infrared images directly to the image correction module 1022. The computer device 102 is not limited to the functional modules listed above and may include other functional modules for more precise temperature measurement; moreover, the above functional modules may work independently or be combined into fewer modules to complete the temperature measurement, which the embodiments of the present application do not limit. FIG. 1 serves only as an example for explanation.
In addition, with the heterogeneous binocular camera shown in FIG. 2 above, because the camera parameters of the visible light camera and the thermal infrared camera differ, pixels in the visible light image and the thermal infrared image do not necessarily lie on the same horizontal line. Therefore, before the temperature measurement method provided by the embodiments of the present application is used on visible light and thermal infrared images captured by a heterogeneous binocular camera, the visible light camera and the thermal infrared camera must be calibrated to determine the calibration parameters between them, which include the camera intrinsics, extrinsics and distortion coefficients. With these calibration parameters, binocular correction can then be applied to the visible light and thermal infrared images; that is, the thermal infrared image is adjusted by rotation and translation so that matching feature points in the visible light and thermal infrared images lie on the same horizontal line, with corresponding pixels differing only in abscissa.
As an example, the visible light camera and the thermal infrared camera can be calibrated with Zhang Zhengyou's calibration method to determine the calibration parameters between the visible light camera and the thermal infrared camera.
Zhang Zhengyou's calibration method is the single-plane checkerboard camera calibration method proposed by Professor Zhang Zhengyou in 1998. The method lies between the traditional calibration method and self-calibration: it overcomes the traditional method's need for a high-precision calibration object, requiring only a printed checkerboard, while also offering higher precision and easier operation than self-calibration.
Calibrating a binocular camera with Zhang Zhengyou's method proceeds as follows: print a checkerboard calibration board and photograph it with the binocular camera from different angles to obtain left-eye and right-eye images. Detect feature points in the images; solve the ideal, distortion-free intrinsics and extrinsics of the binocular camera from the coordinates of corresponding feature points in the left-eye and right-eye images, improving precision with maximum likelihood estimation; then solve the actual distortion coefficients with least squares; and finally combine the intrinsics, extrinsics and distortion coefficients, optimizing the estimate with maximum likelihood estimation to improve precision and complete the calibration, thereby determining the binocular camera's calibration parameters, which include the camera intrinsics, extrinsics and distortion coefficients.
Illustratively, when the binocular camera is a heterogeneous binocular camera including a visible light camera and a thermal infrared camera, the left-eye image is the image captured by the visible light camera and the right-eye image is the image captured by the thermal infrared camera, or the left-eye image is the image captured by the thermal infrared camera and the right-eye image is the image captured by the visible light camera, which the embodiments of the present application do not limit. An illustrative sketch of the corner-detection and parameter-solving steps follows.
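For illustration only, the corner-detection and parameter-solving steps of such a calibration might be sketched with OpenCV as follows; the checkerboard size and the use of cv2.calibrateCamera per camera are assumptions about one plausible realization, not the patent's prescription.

```python
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of the printed checkerboard (assumed)

def calibrate_one_camera(images):
    """Zhang-style single-camera calibration from checkerboard views:
    detect corners, then solve intrinsics and distortion coefficients."""
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
    obj_points, img_points = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    h, w = images[0].shape[:2]
    # Returns RMS error, camera intrinsics, distortion coefficients, extrinsics.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, (w, h), None, None)
    return K, dist
```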
Next, on the basis of the calibrated heterogeneous binocular camera, the binocular correction process for the visible light image and the thermal infrared image is explained. FIG. 3 is a flowchart of a binocular correction method provided by an embodiment of the present application; the method is applied to the computer device 102 of the temperature measurement system 100 above. Referring to FIG. 3, the method includes:
Step 301: Acquire a calibration visible light image and a calibration thermal infrared image.
The calibration visible light image and the calibration thermal infrared image are obtained by the calibrated heterogeneous binocular camera photographing a target temperature calibration board.
Illustratively, the target temperature calibration board is the checkerboard calibration board shown in FIG. 4, each square of which carries a heat source to provide distinct temperature information.
From different shooting angles, the heterogeneous binocular camera photographs the target temperature calibration board, obtaining several pairs of calibration visible light images and calibration thermal infrared images.
Step 302: Scale the calibration visible light image and the calibration thermal infrared image to obtain a processed calibration visible light image and a processed calibration thermal infrared image.
Because the visible light camera and the thermal infrared camera have different resolutions, the captured calibration visible light image and calibration thermal infrared image may differ in size, so the two images must be scaled such that the processed calibration visible light image and the processed calibration thermal infrared image have the same field of view and resolution.
Enlarging an image, also called upsampling or image interpolation, mainly serves to magnify the original image so it can be displayed on a higher-resolution display device. Shrinking an image, also called subsampling or downsampling, mainly serves to generate a thumbnail of the original image or to fit the original image to the size of the display area.
Because the calibration thermal infrared image has lower resolution, it often needs an upsampling operation to reach the resolution of the calibration visible light image. The calibration thermal infrared image can therefore be enlarged to obtain the processed calibration thermal infrared image.
To avoid the lower accuracy of image information that interpolating the calibration thermal infrared image can cause after upsampling, the display size of the calibration visible light image can instead be reduced by a downsampling operation to the resolution of the calibration thermal infrared image; that is, the calibration visible light image can also be shrunk to obtain the processed calibration visible light image.
Alternatively, both the calibration visible light image and the calibration thermal infrared image can be scaled so that the processed calibration visible light image and the processed calibration thermal infrared image have the same resolution. The embodiments of the present application place no restriction on this.
As an example, suppose the calibration visible light image has a resolution of 640*480 and the calibration thermal infrared image 320*240; the calibration thermal infrared image can be upsampled to enlarge it so that the processed calibration thermal infrared image has a resolution of 640*480, and the processed calibration thermal infrared image and the calibration visible light image have the same display size.
Step 303: Perform binocular correction on the processed calibration visible light image and the processed calibration thermal infrared image to obtain the correction parameters.
First, in actual shooting, a camera lens is not an ideal perspective imager and introduces varying degrees of distortion. In theory, lens distortion includes radial distortion and tangential distortion, but the influence of tangential distortion is small, and usually only radial distortion is considered.
Radial distortion mainly arises from the radial curvature of the lens (light bends more far from the lens center than near it), causing the real image point to deviate inward or outward from the ideal image point. Distortion in which the distorted image point shifts radially outward relative to the ideal image point, away from the center, is called pincushion distortion; distortion in which the distorted image point shifts radially toward the center relative to the ideal image point is called barrel distortion. FIG. 4 merely uses barrel distortion as an example for explanation and is not limiting.
As shown in FIG. 4(c), feature points are determined in the processed calibration visible light image; a feature point can be any corner of the black-and-white checkerboard, such as point A1 shown in the figure. From the position coordinates of corresponding points, the intrinsics and extrinsics between the visible light camera and the thermal infrared camera in the ideal, distortion-free case are determined. Then, with the camera intrinsics and extrinsics, the actual radial distortion coefficients are solved by least squares. The specific camera calibration procedure can be found in the relevant documents of the prior art and is not elaborated in the embodiments of the present application. After the distortion coefficients solved by camera calibration are obtained, distortion removal is applied to the processed calibration visible light image and the processed calibration thermal infrared image so that neither contains radial distortion.
Second, for the calibration visible light image and processed calibration thermal infrared image after distortion removal, further, through a series of feature-point matches, corresponding mathematical methods can determine what translation and rotation of the coordinates of the feature points in the thermal infrared image yields the coordinates of the corresponding feature points in the visible light image, thereby obtaining the correction parameters between the visible light image and the thermal infrared image. The correction parameters include the distortion coefficients, a translation vector and a rotation matrix.
That is, with these correction parameters, the positions of mutually corresponding feature points in the visible light image and the thermal infrared image can be adjusted onto the same horizontal line, with only an abscissa difference between corresponding feature points.
As an example, see FIG. 4(c): after binocular correction, point A1 in the visible light image and feature point A2 in the thermal infrared image lie on the same horizontal line, and point B1 in the visible light image and feature point B2 in the thermal infrared image lie on the same horizontal line.
In addition, referring to FIG. 4, after binocular correction of the calibration visible light image and the calibration thermal infrared image, the two images can be cropped (i.e., image cropping processing) to cut out the partial images containing the target (in FIG. 4 the target is the target temperature calibration board), so that the binocular disparity between the calibration visible light image and the calibration thermal infrared image can be determined from the horizontal offsets of the target's pixels in the two images.
In the embodiments of the present application, binocular correction of the calibration visible light image and the calibration thermal infrared image yields the correction parameters between visible light and thermal infrared images. On this basis, the calibration thermal infrared image can be translated and rotated so that corresponding feature points in the calibration visible light image and the calibration thermal infrared image lie on the same horizontal line.
Next, on the basis of the visible light image captured by the visible light camera and the thermal infrared image captured by the thermal infrared camera, the temperature measurement method provided by the embodiments of the present application is explained in detail.
FIG. 5 is a flowchart of a temperature measurement method provided by an embodiment of the present application; the method is used in the computer device 102 of the temperature measurement system 100 above. Referring to FIG. 5, the method includes:
Step 501: Acquire a first visible light image and a first thermal infrared image.
In one possible implementation, the computer device acquires a second visible light image and a second thermal infrared image, the second visible light image and the second thermal infrared image being captured by the heterogeneous binocular camera, and scales the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image. The computer device acquires the correction parameters corresponding to the heterogeneous binocular camera and performs binocular correction on the processed visible light image and the processed thermal infrared image according to those parameters to obtain the first visible light image and the first thermal infrared image.
The process of acquiring the first visible light image and the first thermal infrared image can refer to the binocular calibration process shown in FIG. 3 above and is not repeated here. Illustratively, the first visible light image and the first thermal infrared image are obtained by scaling, distortion removal, feature-point matching and similar processing; after the feature points are matched and aligned, the processed visible light image and thermal infrared image may further be cropped, for example by cropping away part of the edges, to remove redundant border regions, thereby obtaining the first visible light image and the first thermal infrared image.
It should be noted that the first visible light image and the first thermal infrared image have undergone binocular correction, so a pixel in the first visible light image and its corresponding pixel in the thermal infrared image differ only by a horizontal offset; that horizontal offset is the binocular disparity between the first visible light image and the first thermal infrared image.
Step 502: Perform target detection on the first visible light image to obtain first position information.
The first position information is used to indicate the position of the target in the first visible light image. The first position information can take the following two possible forms of representation, which can also be regarded as two possible forms of display.
In one possible form of display, the same rectangular coordinate system is established on the basis of the first visible light image, so that the coordinates of each feature point of the target can be determined in that coordinate system; the first position information of the target in the first visible light image can then be determined from the coordinates of the feature points. It should be noted that the computer device can establish the same rectangular coordinate system in the first visible light image and the first thermal infrared image; in step 502, the computer device obtains the first position information from the coordinates of the target's feature points in the rectangular coordinate system established on the first visible light image.
In the other possible form of display, target detection on the first visible light image directly frames a first target region in the first visible light image. In this way, the position of the target can be determined in the first visible light image; although some background region may be framed, the entire region of the target is covered with no information missed, that is, the framed target region contains more complete contour information of the target.
Step 503: Perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images.
To reduce interference information in the first visible light image and the first thermal infrared image, after the target is detected and the first position information determined, the partial visible light image corresponding to the first position information can be cropped from the first visible light image; at the same time, based on that first position information, the partial thermal infrared image corresponding to the first position information is cropped from the first thermal infrared image. The computer device subsequently inputs the first position information, the partial visible light image and the partial thermal infrared image into the disparity estimation network model through step 504.
Optionally, because binocular disparity exists between the first visible light image and the first thermal infrared image, and the first thermal infrared image has low pixel resolution, poor sharpness of the captured target and edge adhesion between the target and other background objects, the partial image cropped from the first thermal infrared image based on the first position information does not necessarily contain all of the target's information. Therefore, before the following step 504 is executed, the first position information can be expanded so that the cropped partial images contain all of the target's information.
In one possible implementation, performing image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain the partial images proceeds as follows: perform position expansion on the first position information to obtain target position information; crop the image corresponding to the target position information from the first visible light image to obtain a partial visible light image, and crop the image corresponding to the target position information from the first thermal infrared image to obtain a partial thermal infrared image.
As an example, see FIG. 6, which is a schematic diagram of cropping a partial image provided by an embodiment of the present application. Suppose the target is a face. Based on the first visible light image and the first thermal infrared image, a face detection algorithm detects the position of the face in the first visible light image as the first position information; the first position information may include at least one of face-frame position information, face key-point position information and face orientation information.
Because binocular disparity exists between the first thermal infrared image and the first visible light image, and the contour of the target in the first thermal infrared image is relatively blurry, the first position information is expanded to obtain the target position information so as to guarantee that the cropped target region contains the face. The first position information includes the first detection frame shown in FIG. 6, the target position information includes the target detection frame shown in FIG. 6, and the center of the first detection frame coincides with the center of the target detection frame.
Position expansion of the first position information to obtain the target position information proceeds as follows: based on the center of the first detection frame, the first detection frame is expanded outward on all sides by a target size to obtain the expanded target region.
As an example, suppose the first detection frame is the rectangular region shown in FIG. 6 and the preset target size is 2 cm; then, according to the target size, each of the four sides of the first detection frame is expanded outward by 2 cm to obtain the target detection frame.
Based on the target detection frame after position expansion, the image containing the face is cropped from the first visible light image to obtain the partial visible light image, and the image containing the face is cropped from the first thermal infrared image to obtain the partial thermal infrared image.
As an example, when the first position information is expressed in the form of coordinates, the position coordinates of the first detection frame and the cropped partial visible light image and partial thermal infrared image can be input into the disparity estimation network model, and the following step 504 executed.
When the first position information is expressed in the form of a picture, based on the expanded target detection frame shown in FIG. 6, a pure black image of the same size can be cropped from the first visible light image, and the acquired position of the face's first detection frame drawn in the pure black image to obtain the first target image shown in FIG. 6. Further, the acquired face key-point information can be drawn inside the first detection frame in the pure black image to obtain the second target image shown in FIG. 6. The target image may also include other information reflecting the position of the face in the first visible light image, which the embodiments of the present application do not limit (a sketch of constructing such a target image is given after this passage).
In this way, the first target image, the second target image, the cropped partial visible light image and the cropped partial thermal infrared image have the same resolution.
As another example, when the first position information is expressed in the form of a picture, the first target image, the partial visible light image and the partial thermal infrared image can be input into the disparity estimation network model and the following step 504 executed; alternatively, the second target image, the partial visible light image and the partial thermal infrared image can be input into the disparity estimation network model and the following step 504 executed.
It should be noted that when at least two targets are present in the first visible light image and targets are detected in the first visible light image, image cropping is performed for each target in turn, and the cropped partial images containing one and the same target, together with that target's first position information, are input into the disparity estimation network model.
Illustratively, when the first visible light image contains three persons A, B and C: based on person A's first position information in the first visible light image, a partial visible light image is cropped from the first visible light image and a partial thermal infrared image is cropped from the first thermal infrared image; person A's first position information and the two partial images form the first input group. The same processing is performed for persons B and C in the first visible light image, with person B's second position information and two partial images forming the second input group and person C's third position information and two partial images forming the third input group. The first input group, the second input group and the third input group are input into the disparity estimation network model in sequence, and the following step 504 is executed. Here, the first position indicates the position of person A in the first visible light image, the second position the position of person B, and the third position the position of person C; the naming "first", "second" and "third" applies only to this example and does not limit other exemplary embodiments of the present application.
That is, when multiple targets are present in the acquired first visible light image, the image cropping process is performed separately for each target.
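For illustration only, when the first position information is expressed in the form of a picture, constructing the target image might be sketched as follows; the canvas size, frame and key-point coordinates are made-up values.

```python
import numpy as np

def build_target_image(size, box, keypoints):
    """Draw the first detection frame and face key points on a pure black
    canvas of the same size as the cropped partial images."""
    h, w = size
    canvas = np.zeros((h, w), np.uint8)
    x0, y0, x1, y1 = box
    canvas[y0:y1, [x0, x1 - 1]] = 255   # left and right edges of the frame
    canvas[[y0, y1 - 1], x0:x1] = 255   # top and bottom edges of the frame
    for (x, y) in keypoints:            # face key points inside the frame
        canvas[y, x] = 255
    return canvas

img = build_target_image((120, 100), (20, 25, 80, 95), [(40, 55), (60, 55), (50, 70)])
```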
Step 504: Input the first position information and the partial images into the disparity estimation network model to obtain the target binocular disparity; the target binocular disparity is used to indicate the binocular disparity between the visible light image containing the target and the thermal infrared image.
The disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer.
In one possible implementation, step 504 proceeds as follows: the computer device invokes the feature extraction network layer to perform feature extraction on the first position information and the partial images, obtaining high-order features of the target; invokes the feature fusion network layer to perform feature fusion on the target's high-order features, obtaining the fused feature of the target; and invokes the disparity estimation network layer to perform disparity estimation on the fused feature of the target, obtaining the target binocular disparity.
The feature extraction network layer is used to perform at least one convolution on the input first position information and partial images to extract the target's high-order features; the embodiments of the present application do not limit the number of convolution layers or convolution kernels.
Since the input of the disparity estimation network model includes the first position information, the partial visible light image and the partial thermal infrared image, further, the feature extraction network layer may include a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer, where the first position information is used to indicate the position of the target in the first visible light image. As an example, see FIG. 7, which is a schematic diagram of a disparity estimation network model provided by an embodiment of the present application.
Further, invoking the feature extraction network layer to perform feature extraction on the partial images and obtain the target's high-order features may proceed as follows: the first extraction sub-network layer is invoked to perform feature extraction on the first position information, obtaining a first high-order feature; the second extraction sub-network layer is invoked to perform feature extraction on the partial visible light image, obtaining a second high-order feature; and the third extraction sub-network layer is invoked to perform feature extraction on the partial thermal infrared image, obtaining a third high-order feature. The target's high-order features include the first high-order feature, the second high-order feature and the third high-order feature.
In addition, before the above disparity estimation network model is used to determine the target binocular disparity between the first visible light image and the first thermal infrared image, the disparity estimation network model must also be trained; when the binocular disparity the model outputs for the training samples is within the allowable error range, training of the disparity estimation network model is complete.
The training process of the disparity estimation network model is as follows: at least one training sample is acquired, each training sample including a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity. The reference position information is used to indicate the position of the target in the reference visible light image, and the reference binocular disparity is the binocular disparity between the reference visible light image and the reference thermal infrared image. The disparity estimation network model is invoked to process the reference visible light image, the reference thermal infrared image and the reference position information, obtaining an estimated binocular disparity. The prediction loss of the disparity estimation network model is calculated according to the estimated binocular disparity and the reference binocular disparity, and the parameters of the disparity estimation network model are adjusted according to the prediction loss. It should be noted that the targets in the reference visible light image and the reference thermal infrared image belong to the same type of object as the targets in the aforementioned first visible light image and first thermal infrared image, for example the targets are all faces, human bodies or vehicles; the embodiments of the present application do not limit whether the target in each image is the same individual object, that is, the targets in different images may be different objects or the same object. In addition, the reference visible light image and reference thermal infrared image in each training sample are images obtained by a heterogeneous binocular camera simultaneously photographing one target.
As an example, the reference binocular disparity may be acquired with a device such as a lidar or a depth camera, or the reference binocular disparity between the reference visible light image and the reference thermal infrared image may be determined by manual annotation; the embodiments of the present application place no restriction on this.
Step 505: Determine second position information according to the first position information and the target binocular disparity; the second position information is used to indicate the position of the target in the first thermal infrared image.
In one possible implementation, the computer device uses the difference between the first position information and the target binocular disparity as the second position information.
The target binocular disparity may be a disparity value between the first visible light image containing the target and the first thermal infrared image; this disparity value reflects the overall offset of the target in the first thermal infrared image relative to the target in the first visible light image.
When the target binocular disparity is expressed as a disparity value, the above step 505 is implemented as follows: for each pixel indicated by the first position information, the difference between its abscissa in the first thermal infrared image and the disparity value is used as the second position information.
The target binocular disparity may also be a disparity map between the first visible light image containing the target and the first thermal infrared image, in which the gray value of each pixel represents the abscissa difference between the corresponding pixels in the first visible light image and the first thermal infrared image.
When the target binocular disparity is expressed as a disparity map, the above step 505 is implemented as follows: for each pixel indicated by the first position information, the difference between its abscissa in the first thermal infrared image and the gray value of the pixel at the corresponding position in the disparity map is used as the second position information.
Step 506: Determine the temperature of the target from the first thermal infrared image according to the second position information.
Since a thermal infrared image can reflect the temperature information of every object in the scene above absolute zero, and different temperatures are displayed in different colors in the thermal infrared image, once the second position information has been determined in the first thermal infrared image, the computer device can determine the target's temperature according to the color presented by the region the second position information indicates.
It should be noted that because the first thermal infrared image is captured remotely by the heterogeneous binocular camera, rather than obtained at close range as with a handheld thermometer gun, the temperature the computer device determines from the displayed color of the target's region in the first thermal infrared image deviates from the true value. That is, the measured temperature attenuates with the measurement distance: when the target is far from the heterogeneous binocular camera, the deviation between the temperature determined from the first thermal infrared image and the target's actual temperature is larger; when the target is close to the heterogeneous binocular camera, the temperature determined from the first thermal infrared image is closer to the target's actual temperature. Therefore, after determining the target's temperature from the first thermal infrared image, the computer device must also correct that temperature according to the distance and use the corrected temperature as the target's actual temperature.
In one possible implementation, the computer device corrects the target's temperature as follows: determine the distance between the target and the heterogeneous binocular camera according to the target binocular disparity, acquire the temperature attenuation value corresponding to that distance, and correct the temperature according to the temperature attenuation value to obtain the target's corrected temperature.
The relation between distance and disparity is Z = f × b / d, where Z is the distance between the target and the heterogeneous binocular camera, f is the focal length of the heterogeneous binocular camera, b is the distance between the optical centers of the visible light camera and the thermal infrared camera in the heterogeneous binocular camera, also called the baseline, and d is the target binocular disparity between the first visible light image and the first thermal infrared image.
That is, once the target binocular disparity d between the first visible light image and the first thermal infrared image has been determined, the distance between the target and the heterogeneous binocular camera can be determined from the above relation between distance and disparity.
It should be noted that a mapping exists between distance and temperature attenuation value. After determining the distance between the target and the heterogeneous binocular camera, the computer device can determine the attenuation value corresponding to that distance from the mapping between distance and temperature attenuation value. Optionally, the computer device adds the attenuation value corresponding to the distance to the target's temperature determined from the first thermal infrared image to obtain the target's corrected temperature.
For example, if the target's temperature determined from the first thermal infrared image is 35°, the distance between the target and the heterogeneous binocular camera is 30 meters, and the attenuation value corresponding to this distance is 2°, then correcting the temperature by this attenuation value gives a corrected target temperature of 37°.
As another example, if the target's temperature determined from the first thermal infrared image is 37.5° and the distance between the target and the heterogeneous binocular camera is 5 meters, with a corresponding attenuation value of 0.5°, then correcting the temperature by this attenuation value gives a corrected target temperature of 38°.
To sum up, in the technical solution provided by the embodiments of the present application, the position of the target in the thermal infrared image is determined from the position of the target in the visible light image and the binocular disparity between the visible light image and the thermal infrared image, and the temperature of the target is then determined from the thermal infrared image according to the target's position in the thermal infrared image. Compared with thermal infrared images, visible light images have high resolution and rich texture detail; determining the target's position directly from the thermal infrared image may therefore yield an inaccurate position, whereas the embodiments of the present application determine the target's position in the thermal infrared image indirectly from the visible light image, fusing more texture detail into the position determination and locating the target in the thermal infrared image accurately and effectively, which helps determine the temperature of the target from its precise position.
Moreover, in the technical solution provided by the embodiments of the present application, the binocular disparity between the visible light image and the thermal infrared image is determined with the disparity estimation network model. Once the model is trained, the binocular disparity is obtained simply by inputting the partial visible light image and partial thermal infrared image containing the target into the model, which avoids a series of trivial and complicated mathematical operations and provides a simple, efficient way to determine binocular disparity. The disparity estimation network model improves the efficiency of determining the binocular disparity, and since the target's position is determined from that disparity, the efficiency of locating the target improves as well, further improving the efficiency of measuring the target's temperature.
In addition, in the technical solution provided by the embodiments of the present application, the input of the disparity estimation network model includes partial images cropped according to the target's position in the visible light image. A complete captured image contains not only the target but also interference information such as the background and other objects; compared with feeding the complete captured image into the disparity estimation network model, the embodiments of the present application input only the target-related partial images, which removes the interference information in the captured image, helps obtain a more accurate binocular disparity, and fully improves the efficiency of temperature measurement.
All of the optional technical solutions above can be combined in any manner to form optional embodiments of the present application, which are not described again one by one here.
FIG. 8 is a schematic structural diagram of a temperature measurement apparatus provided by an embodiment of the present application. The apparatus 800 includes: a first acquisition module 801, a target detection module 802, an image cropping module 803, a disparity determination module 804, a target determination module 805 and a temperature determination module 806.
The first acquisition module 801 is configured to acquire a first visible light image and a first thermal infrared image;
the target detection module 802 is configured to perform target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
the image cropping module 803 is configured to perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
the disparity determination module 804 is configured to input the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between the visible light image containing the target and the thermal infrared image;
the target determination module 805 is configured to determine second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image;
the temperature determination module 806 is configured to determine the temperature of the target from the first thermal infrared image according to the second position information.
Optionally, the image cropping module 803 includes: a position expansion submodule, configured to perform position expansion on the first position information to obtain target position information; and a cropping submodule, configured to crop the image corresponding to the target position information from the first visible light image to obtain a partial visible light image, and to crop the image corresponding to the target position information from the first thermal infrared image to obtain a partial thermal infrared image; wherein the partial images include the partial visible light image and the partial thermal infrared image.
Optionally, the first position information includes a first detection frame, the target position information includes a target detection frame, and the center of the first detection frame coincides with the center of the target detection frame; the position expansion submodule is specifically configured to: based on the center of the first detection frame, expand the first detection frame outward on all sides by a target size to obtain the target detection frame.
Optionally, the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer; the disparity determination module 804 includes: a feature extraction submodule, configured to invoke the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain high-order features of the target; a feature fusion submodule, configured to invoke the feature fusion network layer to perform feature fusion on the target's high-order features to obtain the fused feature of the target; and a disparity estimation submodule, configured to invoke the disparity estimation network layer to perform disparity estimation on the fused feature of the target to obtain the target binocular disparity.
Optionally, the partial images include a partial visible light image and a partial thermal infrared image, and the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer; the feature extraction submodule includes: a first extraction subunit, configured to invoke the first extraction sub-network layer to perform feature extraction on the first position information, obtaining a first high-order feature; a second extraction subunit, configured to invoke the second extraction sub-network layer to perform feature extraction on the partial visible light image, obtaining a second high-order feature; and a third extraction subunit, configured to invoke the third extraction sub-network layer to perform feature extraction on the partial thermal infrared image, obtaining a third high-order feature; wherein the target's high-order features include the first high-order feature, the second high-order feature and the third high-order feature.
Optionally, the apparatus 800 further includes: the first acquisition module, further configured to acquire at least one training sample, each training sample including a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity, the reference position information being used to indicate the position of the target in the reference visible light image, and the reference binocular disparity being the binocular disparity between the reference visible light image and the reference thermal infrared image; the disparity determination module, further configured to invoke the disparity estimation network model to process the reference visible light image, the reference thermal infrared image and the reference position information to obtain an estimated binocular disparity; a training module, configured to calculate a prediction loss of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity; and an adjustment module, configured to adjust the parameters of the disparity estimation network model according to the prediction loss.
Optionally, the apparatus 800 further includes: a distance determination module, configured to determine the distance between the target and a heterogeneous binocular camera according to the target binocular disparity, the heterogeneous binocular camera including a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images; a second acquisition module, configured to acquire the temperature attenuation value corresponding to the distance; and a temperature correction module, configured to correct the temperature according to the temperature attenuation value to obtain the corrected temperature of the target.
Optionally, the first acquisition module 801 includes: a first acquisition submodule, configured to acquire a second visible light image and a second thermal infrared image, the second visible light image and the second thermal infrared image being captured by a heterogeneous binocular camera; an image adjustment submodule, configured to scale the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image; a second acquisition submodule, configured to acquire the correction parameters corresponding to the heterogeneous binocular camera; and a correction submodule, configured to perform binocular correction on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
Optionally, the second acquisition submodule includes: a first acquisition subunit, configured to acquire a calibration visible light image and a calibration thermal infrared image, the calibration visible light image and the calibration thermal infrared image being obtained by the heterogeneous binocular camera photographing a target temperature calibration board; an image adjustment subunit, configured to scale the calibration visible light image and the calibration thermal infrared image to obtain a processed calibration visible light image and a processed calibration thermal infrared image; and a correction subunit, configured to perform binocular correction on the processed calibration visible light image and the processed calibration thermal infrared image to obtain the correction parameters.
Optionally, the target determination module 805 is configured to: use the difference between the first position information and the target binocular disparity as the second position information.
To sum up, in the technical solution provided by the embodiments of the present application, the position of the target in the thermal infrared image is determined from the position of the target in the visible light image and the binocular disparity between the visible light image and the thermal infrared image, and the temperature of the target is then determined from the thermal infrared image according to the target's position in the thermal infrared image. Compared with thermal infrared images, visible light images have high resolution and rich texture detail; determining the target's position directly from the thermal infrared image may therefore yield an inaccurate position, whereas the embodiments of the present application determine the target's position in the thermal infrared image indirectly from the visible light image, fusing more texture detail into the position determination and locating the target in the thermal infrared image accurately and effectively, which helps determine the temperature of the target from its precise position.
Moreover, in the technical solution provided by the embodiments of the present application, the binocular disparity between the visible light image and the thermal infrared image is determined with the disparity estimation network model. Once the model is trained, the binocular disparity is obtained simply by inputting the partial visible light image and partial thermal infrared image containing the target into the model, which avoids a series of trivial and complicated mathematical operations and provides a simple, efficient way to determine binocular disparity. The disparity estimation network model improves the efficiency of determining the binocular disparity, and since the target's position is determined from that disparity, the efficiency of locating the target improves as well, further improving the efficiency of measuring the target's temperature.
In addition, in the technical solution provided by the embodiments of the present application, the input of the disparity estimation network model includes partial images cropped according to the target's position in the visible light image. A complete captured image contains not only the target but also interference information such as the background and other objects; compared with feeding the complete captured image into the disparity estimation network model, the embodiments of the present application input only the target-related partial images, which removes the interference information in the captured image, helps obtain a more accurate binocular disparity, and fully improves the efficiency of temperature measurement.
It should be noted that when the temperature measurement apparatus provided by the above embodiment determines the temperature of a target from thermal infrared images, the division into the functional modules described above is only an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the temperature measurement apparatus provided by the above embodiment and the temperature measurement method embodiments belong to the same concept; its specific implementation process is detailed in the method embodiments and is not repeated here.
FIG. 9 is a structural block diagram of a computer device provided by an embodiment of the present application. The computer device 900 can be used to implement the temperature measurement method described above. Specifically:
The computer device 900 includes a processing unit 901 (such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an FPGA (Field Programmable Gate Array)), a system memory 904 including a RAM (Random-Access Memory) 902 and a ROM (Read-Only Memory) 903, and a system bus 905 connecting the system memory 904 and the central processing unit 901. The computer device 900 also includes a basic input/output system (I/O system) 906 that helps transfer information between the devices within the computer device, and a mass storage device 907 for storing an operating system 913, application programs 914 and other program modules 915.
The basic input/output system 906 includes a display 908 for displaying information and input devices 909, such as a mouse and keyboard, for the user to input information. The display 908 and the input devices 909 are both connected to the central processing unit 901 through an input/output controller 910 connected to the system bus 905. The basic input/output system 906 may also include the input/output controller 910 for receiving and processing input from a number of other devices such as a keyboard, mouse or electronic stylus. Similarly, the input/output controller 910 also provides output to a display screen, printer or other type of output device.
The mass storage device 907 is connected to the central processing unit 901 through a mass storage controller (not shown) connected to the system bus 905. The mass storage device 907 and its associated computer-readable media provide non-volatile storage for the computer device 900. That is, the mass storage device 907 may include computer-readable media (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technologies, CD-ROM, DVD (Digital Video Disc) or other optical storage, cassettes, magnetic tape, disk storage or other magnetic storage devices. Of course, those skilled in the art will know that computer storage media are not limited to the above. The system memory 904 and the mass storage device 907 above may be collectively referred to as memory.
According to an embodiment of the present application, the computer device 900 may also run by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 900 may connect to the network 912 through a network interface unit 911 connected to the system bus 905, or the network interface unit 911 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes at least one instruction, at least one program, a code set or an instruction set, which is stored in the memory and configured to be executed by one or more processors to implement the above temperature measurement method.
Those skilled in the art will understand that the structure shown in FIG. 9 does not constitute a limitation on the computer device 900, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided; instructions are stored on the computer-readable storage medium, and when executed by a processor the instructions implement the above temperature measurement method.
In an exemplary embodiment, a computer program product is also provided; when executed, the computer program product implements the above temperature measurement method.
It should be understood that "a plurality" mentioned herein means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc or the like.
The above are only preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (15)

  1. A temperature measurement method, characterized in that the method includes:
    acquiring a first visible light image and a first thermal infrared image;
    performing target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
    performing image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
    inputting the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between a visible light image containing the target and a thermal infrared image;
    determining second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image; and
    determining the temperature of the target from the first thermal infrared image according to the second position information.
  2. The method according to claim 1, characterized in that performing image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain the partial images includes:
    performing position expansion on the first position information to obtain target position information;
    cropping the image corresponding to the target position information from the first visible light image to obtain a partial visible light image, and cropping the image corresponding to the target position information from the first thermal infrared image to obtain a partial thermal infrared image;
    wherein the partial images include the partial visible light image and the partial thermal infrared image.
  3. The method according to claim 2, characterized in that the first position information includes a first detection frame, the target position information includes a target detection frame, and the center of the first detection frame coincides with the center of the target detection frame;
    performing position expansion on the first position information to obtain the target position information includes:
    based on the center of the first detection frame, expanding the first detection frame outward on all sides by a target size to obtain the target detection frame.
  4. The method according to any one of claims 1 to 3, characterized in that the disparity estimation network model includes a feature extraction network layer, a feature fusion network layer and a disparity estimation network layer;
    inputting the first position information and the partial images into the disparity estimation network model to obtain the target binocular disparity includes:
    invoking the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain high-order features of the target;
    invoking the feature fusion network layer to perform feature fusion on the high-order features of the target to obtain the fused feature of the target; and
    invoking the disparity estimation network layer to perform disparity estimation on the fused feature of the target to obtain the target binocular disparity.
  5. The method according to claim 4, characterized in that the partial images include a partial visible light image and a partial thermal infrared image, and the feature extraction network layer includes a first extraction sub-network layer, a second extraction sub-network layer and a third extraction sub-network layer;
    invoking the feature extraction network layer to perform feature extraction on the first position information and the partial images to obtain the high-order features of the target includes:
    invoking the first extraction sub-network layer to perform feature extraction on the first position information to obtain a first high-order feature;
    invoking the second extraction sub-network layer to perform feature extraction on the partial visible light image to obtain a second high-order feature; and
    invoking the third extraction sub-network layer to perform feature extraction on the partial thermal infrared image to obtain a third high-order feature;
    wherein the high-order features of the target include the first high-order feature, the second high-order feature and the third high-order feature.
  6. The method according to any one of claims 1 to 3, characterized in that the disparity estimation network model is trained as follows:
    acquiring at least one training sample, each training sample including a reference visible light image, a reference thermal infrared image, reference position information and a reference binocular disparity, the reference position information being used to indicate the position of the target in the reference visible light image, and the reference binocular disparity being the binocular disparity between the reference visible light image and the reference thermal infrared image;
    invoking the disparity estimation network model to process the reference visible light image, the reference thermal infrared image and the reference position information to obtain an estimated binocular disparity;
    calculating a prediction loss of the disparity estimation network model according to the estimated binocular disparity and the reference binocular disparity; and
    adjusting the parameters of the disparity estimation network model according to the prediction loss.
  7. The method according to any one of claims 1 to 3, characterized in that after determining the temperature of the target from the first thermal infrared image according to the second position information, the method further includes:
    determining the distance between the target and a heterogeneous binocular camera according to the target binocular disparity, the heterogeneous binocular camera including a visible light camera for capturing visible light images and a thermal infrared camera for capturing thermal infrared images;
    acquiring a temperature attenuation value corresponding to the distance; and
    correcting the temperature according to the temperature attenuation value to obtain a corrected temperature of the target.
  8. The method according to any one of claims 1 to 3, characterized in that acquiring the first visible light image and the first thermal infrared image includes:
    acquiring a second visible light image and a second thermal infrared image, the second visible light image and the second thermal infrared image being captured by a heterogeneous binocular camera;
    scaling the second visible light image and the second thermal infrared image to obtain a processed visible light image and a processed thermal infrared image;
    acquiring correction parameters corresponding to the heterogeneous binocular camera; and
    performing binocular correction on the processed visible light image and the processed thermal infrared image according to the correction parameters to obtain the first visible light image and the first thermal infrared image.
  9. The method according to claim 8, characterized in that acquiring the correction parameters corresponding to the heterogeneous binocular camera includes:
    acquiring a calibration visible light image and a calibration thermal infrared image, the calibration visible light image and the calibration thermal infrared image being obtained by the heterogeneous binocular camera photographing a target temperature calibration board;
    scaling the calibration visible light image and the calibration thermal infrared image to obtain a processed calibration visible light image and a processed calibration thermal infrared image; and
    performing binocular correction on the processed calibration visible light image and the processed calibration thermal infrared image to obtain the correction parameters.
  10. The method according to any one of claims 1 to 3, characterized in that determining the second position information according to the first position information and the target binocular disparity includes:
    using the difference between the first position information and the target binocular disparity as the second position information.
  11. A temperature measurement apparatus, characterized in that the apparatus includes:
    a first acquisition module, configured to acquire a first visible light image and a first thermal infrared image;
    a target detection module, configured to perform target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
    an image cropping module, configured to perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
    a disparity determination module, configured to input the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between a visible light image containing the target and a thermal infrared image;
    a target determination module, configured to determine second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image; and
    a temperature determination module, configured to determine the temperature of the target from the first thermal infrared image according to the second position information.
  12. A temperature measurement system, characterized in that the temperature measurement system includes a heterogeneous binocular camera and a computer device;
    the heterogeneous binocular camera includes a visible light camera and a thermal infrared camera;
    the visible light camera is configured to capture a first visible light image;
    the thermal infrared camera is configured to capture a first thermal infrared image;
    the computer device includes a processor, and the processor is configured to:
    perform target detection on the first visible light image to obtain first position information, the first position information being used to indicate the position of the target in the first visible light image;
    perform image cropping on the first visible light image and the first thermal infrared image based on the first position information to obtain partial images;
    input the first position information and the partial images into a disparity estimation network model to obtain a target binocular disparity, the target binocular disparity being used to indicate the binocular disparity between a visible light image containing the target and a thermal infrared image;
    determine second position information according to the first position information and the target binocular disparity, the second position information being used to indicate the position of the target in the first thermal infrared image; and
    determine the temperature of the target from the first thermal infrared image according to the second position information.
  13. The temperature measurement system according to claim 12, characterized in that the computer device further includes a display;
    the display is configured to display the temperature of the target.
  14. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when executed by a processor the computer program implements the steps of the method according to any one of claims 1-10.
  15. A computer program product, characterized in that the computer program product contains computer instructions, and when run on a computer the computer instructions implement the steps of the method according to any one of claims 1-10.
PCT/CN2021/130101 2020-11-13 2021-11-11 Temperature measurement method, apparatus, system, storage medium and program product WO2022100668A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21891188.1A EP4242609A4 (en) 2020-11-13 2021-11-11 TEMPERATURE MEASUREMENT METHOD, DEVICE AND SYSTEM, STORAGE MEDIUM AND PROGRAM PRODUCT

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011273221.8A 2020-11-13 Temperature measurement method, apparatus and system
CN202011273221.8 2020-11-13

Publications (1)

Publication Number Publication Date
WO2022100668A1 true WO2022100668A1 (zh) 2022-05-19

Family

ID=81491218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/130101 WO2022100668A1 (zh) 2020-11-13 2021-11-11 温度测量方法、装置、系统、存储介质及程序产品

Country Status (3)

Country Link
EP (1) EP4242609A4 (zh)
CN (1) CN114485953A (zh)
WO (1) WO2022100668A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115183876B (zh) * 2022-09-09 2022-12-09 国网山西省电力公司电力科学研究院 Power equipment temperature measurement method and apparatus, storage medium, and computer device
CN118050087A (zh) * 2022-11-15 2024-05-17 华为技术有限公司 Device temperature measurement method and related device
CN117788532A (zh) * 2023-12-26 2024-03-29 四川新视创伟超高清科技有限公司 FPGA-based ultra-high-definition dual-light fusion registration method for the security field


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7924312B2 (en) * 2008-08-22 2011-04-12 Fluke Corporation Infrared and visible-light image registration
CN109846463A (zh) * 2019-03-04 2019-06-07 武汉迅检科技有限公司 Infrared face temperature measurement method, system, device and storage medium
CN111723926B (zh) * 2019-03-22 2023-09-12 北京地平线机器人技术研发有限公司 Training method and training apparatus for a neural network model for determining image disparity
CN111835959B (zh) * 2019-04-17 2022-03-01 杭州海康微影传感科技有限公司 Method and apparatus for dual-light fusion

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060045329A1 (en) * 2004-08-26 2006-03-02 Jones Graham R Image processing
CN104902182A (zh) * 2015-05-28 2015-09-09 努比亚技术有限公司 Method and apparatus for realizing continuous autofocus
CN110060272A (zh) * 2018-01-18 2019-07-26 杭州海康威视数字技术股份有限公司 Method and apparatus for determining a face region, electronic device and storage medium
CN109308719A (zh) * 2018-08-31 2019-02-05 电子科技大学 Binocular disparity estimation method based on three-dimensional convolution
CN111639522A (zh) * 2020-04-17 2020-09-08 北京迈格威科技有限公司 Liveness detection method and apparatus, computer device and storage medium
CN111623881A (zh) * 2020-05-21 2020-09-04 平安国际智慧城市科技股份有限公司 Dual-light camera temperature measurement method based on image processing and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4242609A4

Also Published As

Publication number Publication date
EP4242609A4 (en) 2024-04-17
CN114485953A (zh) 2022-05-13
EP4242609A1 (en) 2023-09-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21891188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021891188

Country of ref document: EP

Effective date: 20230609