WO2022257794A1 - Method and apparatus for processing visible light image and infrared image - Google Patents


Info

Publication number
WO2022257794A1
WO2022257794A1 (PCT/CN2022/095838)
Authority
WO
WIPO (PCT)
Prior art keywords
visible light
camera
infrared
image
optical axis
Application number
PCT/CN2022/095838
Other languages
French (fr)
Chinese (zh)
Inventor
刘若鹏
栾琳
陈其勇
Original Assignee
深圳光启空间技术有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202110650324.XA external-priority patent/CN115457090A/en
Priority claimed from CN202110645248.3A external-priority patent/CN115457089A/en
Priority claimed from CN202110909327.0A external-priority patent/CN113792592B/en
Application filed by 深圳光启空间技术有限公司 filed Critical 深圳光启空间技术有限公司
Publication of WO2022257794A1 publication Critical patent/WO2022257794A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/32: Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods

Definitions

  • the present invention relates to the field of image processing, in particular to a method and device for processing visible light images and infrared images.
  • At present there are many infrared temperature measurement and face detection and recognition devices. These are fixed installations in which the infrared camera module is typically integrated with the visible light camera module, so the relative position and relative angle of the two camera modules are fixed. For example, a dual-light (infrared plus visible or white light) module design is adopted: the central optical axes of the two camera modules are fixed in parallel and do not change, and the Z-axis positions of the centers of the two camera modules are the same.
  • In one layout, the vertical (Y-axis direction) heights of the two module centers are the same and fixed, and the horizontal (X-axis direction) relative position is fixed with a very small distance between them.
  • In the other layout, the horizontal (X-axis direction) positions of the two module centers are the same and fixed, and the vertical (Y-axis direction) height offset between the two module centers is fixed.
  • These fixed infrared temperature measurement and face detection and recognition devices share three characteristics: the central optical axes of the two camera modules are parallel; the zero coordinates of the two camera modules on the Z axis are the same; and the zero coordinates are the same on either the Y axis or the X axis.
  • In other devices, however, the infrared camera module and the visible light camera module are not designed at the same position (the zero-point coordinates of the X, Y, and Z axes are not the same).
  • It is often necessary to adjust the angle of the visible light camera or the infrared camera for people of different heights and for different application scenes, and the two camera modules of the infrared camera and the visible light camera are often not a binocular module.
  • In such devices, the central optical axes of the two camera modules are not parallel (there is an included angle), and the zero-point coordinates of the X, Y, and Z axes of the spatial positions of the two camera modules are also different.
  • Although existing equipment is fitted with both an infrared camera and a visible light camera, it does not achieve registration and fusion of the two images: it can only perform data analysis and processing on one camera picture, and cannot automatically fuse the data of the visible light picture and the data of the infrared picture into one image.
  • the present invention provides a method and device for processing visible light images and infrared images to at least solve the problem in the related art that equipment equipped with dual cameras cannot automatically fuse data of visible light images and infrared images.
  • One aspect of the present invention provides a method for processing visible light images and infrared images, applied to a device equipped with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the plane of the front view of the device. The method includes the following steps:
  • the visible light image and the infrared image of the target object are registered and fused according to the registration model.
  • Another aspect provides an apparatus for processing visible light images and infrared images, located on a device equipped with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the plane of the front view of the device. The apparatus includes:
  • a registration model building module, configured to establish a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and the infrared image of the target object collected by the infrared camera, according to the relative spatial positions of the two cameras, the conversion parameters, and the horizontal distance between the target object and the visible light camera; and
  • An image fusion module configured to perform registration and fusion of the visible light image and the infrared image of the target object according to the registration model.
  • Through the present invention, a registration model between the coordinate positions of the visible light image collected by the visible light camera and the infrared image collected by the infrared camera is established according to the relative spatial positions of the two cameras and the horizontal distance between the target object and the visible light camera, and the visible light image and the infrared image of the target object are registered and fused according to this model, thereby solving the problem that equipment configured with dual visible-light and infrared cameras cannot fuse visible light image data and infrared image data.
  • FIG. 1 is a flow chart of a registration and fusion method for a visible light image and an infrared image according to an embodiment of the present invention
  • Fig. 2 is a structural block diagram of a registration and fusion device for visible light images and infrared images according to an embodiment of the present invention
  • Fig. 3 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to an embodiment of the present invention
  • Fig. 4 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to another embodiment of the present invention.
  • Fig. 5 is a flow chart of a registration and fusion method for a visible light image and an infrared image according to an embodiment of the present invention
  • Fig. 6 is a flow chart of a registration and fusion method for a visible light image and an infrared image according to another embodiment of the present invention.
  • Fig. 7 is a flowchart of a registration and fusion method for a visible light image and an infrared image according to an embodiment of the present invention
  • Fig. 8 is a structural block diagram of a device for registration and fusion of visible light images and infrared images according to an embodiment of the present invention
  • Fig. 9 is a schematic diagram of horizontal positions of a visible light camera and an infrared camera according to an embodiment of the present invention.
  • Fig. 10 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to another embodiment of the present invention.
  • Fig. 11 is a flowchart of a method for registration and fusion of visible light images and infrared images according to an embodiment of the present invention
  • Fig. 12 is a flowchart of a registration and fusion method for visible light images and infrared images according to another embodiment of the present invention
  • FIG. 13 is a schematic diagram of the layout of the cameras on the XOZ plane according to an image acquisition and processing method of an embodiment of the present invention.
  • Fig. 14 is a schematic diagram of the arrangement of cameras on a YOZ plane according to an image acquisition and processing method according to an embodiment of the present invention
  • Fig. 15 is a schematic diagram of the layout of the camera on the XOZ plane according to another embodiment of the present invention.
  • Fig. 16 is a schematic diagram of the layout of the camera on the YOZ plane according to another embodiment of the present invention.
  • FIG. 17 is a schematic flowchart of an image acquisition and processing method according to an embodiment of the present invention.
  • Fig. 18 is a schematic diagram of a partial interface of a device adopting an image acquisition and processing method according to an embodiment of the present invention.
  • the present invention provides a method for processing a visible light image and an infrared image, which is applied to a device equipped with a visible light camera and an infrared camera.
  • To implement the present invention on a device equipped with a visible light camera and an infrared camera, the optical axis of either the visible light camera or the infrared camera must be configured perpendicular to the plane where the front view of the device is located.
  • Fig. 1 is a flow chart of a method for processing visible light images and infrared images according to an embodiment of the present invention. As shown in Fig. 1, the process includes the following steps:
  • Step S102: establish a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and the infrared image of the target object collected by the infrared camera, according to the relative spatial positions of the two cameras and the horizontal distance between the target object and the visible light camera;
  • Step S104: register and fuse the visible light image and the infrared image of the target object according to the registration model.
  • In step S102 of this embodiment, for example, the following registration model can be established:
  • where A, B, C, and D are the first, second, third, and fourth conversion parameters, respectively; m, n, and d are the relative distances between the visible light camera and the infrared camera along the X, Y, and Z axes, respectively; α and β are the horizontal and vertical included angles between the optical axes of the two cameras; L is the horizontal distance between the target object and the visible light camera; (x_VR, y_VR) are the pixel coordinates of the visible light image; and (x_IR, y_IR) are the pixel coordinates of the infrared image.
  • The registration model can be used for fast registration and fusion of the coordinate positions of objects in the two camera images. From this registration model it can be seen that the non-zero entries of the registration model matrix are not fixed values but functions of the horizontal distance L between the target object and the visible light camera. Therefore, the non-zero entries cannot be determined simply from a single frame captured at one specific horizontal distance between the visible light camera and the target object.
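The registration model matrix itself appears only as figures in the original publication, so the following Python sketch assumes a plausible pinhole-style form consistent with the symbols defined above: a scale by the conversion parameters A and C, plus a pixel offset derived from the baseline (m, n, d), the distance L, and the axis angles α and β. The function name, the pixels-per-degree factors `ppd_h`/`ppd_v`, and the principal-point arguments are illustrative assumptions, not the patent's exact formula.

```python
import math

def register_vr_to_ir(x_vr, y_vr, L, *, A, C, m, n, d,
                      alpha, beta, ppd_h, ppd_v,
                      cx_vr, cy_vr, cx_ir, cy_ir):
    """Map a visible-light pixel to the infrared image (hypothetical form).

    Scale about the visible principal point (cx_vr, cy_vr) by A/C, then
    shift by the parallax angle of the baseline (m, n) at depth L + d plus
    the fixed optical-axis angles alpha/beta, converted into IR pixels via
    the pixels-per-degree factors ppd_h/ppd_v.
    """
    dx_deg = math.degrees(math.atan2(m, L + d)) + alpha
    dy_deg = math.degrees(math.atan2(n, L + d)) + beta
    x_ir = A * (x_vr - cx_vr) + ppd_h * dx_deg + cx_ir
    y_ir = C * (y_vr - cy_vr) + ppd_v * dy_deg + cy_ir
    return x_ir, y_ir
```

Note that the parallax term shrinks as L grows, which is exactly why the non-zero matrix entries are functions of L rather than constants.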
  • the horizontal angle and longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera can be calibrated in the following two ways:
  • the first calibration method includes the following steps:
  • The horizontal included angle α and the longitudinal included angle β between the optical axes of the two cameras in the registration model can be calibrated by the following formulas:
  • where L_C is the first set distance; L_VR and W_VR are, respectively, the horizontal length and vertical length of the same position of the first reference object in the visible light image; and L_IR and W_IR are the horizontal length and vertical length of the same position of the first reference object in the infrared image.
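The calibration formulas are likewise reproduced only as figures in the original, so the sketch below merely illustrates the idea under stated assumptions: the length ratios L_IR/L_VR and W_IR/W_VR of the reference object give the horizontal and vertical scales, and the residual offset of the object's center between the two images, after removing the geometric parallax of the baseline (m, n) at the known distance L_C, is attributed to the axis angles. The function name and the pixels-per-degree factors are hypothetical.

```python
import math

def calibrate_axis_angles(Lc, c_vr, c_ir, len_vr, len_ir, ppd_h, ppd_v, m, n):
    """Estimate the lateral/longitudinal axis angles (degrees) from one
    reference capture at distance Lc (sketch, not the patent's formula).

    c_vr, c_ir: (x, y) centers of the reference object in the two images.
    len_vr, len_ir: (L, W) horizontal/vertical lengths of the object.
    """
    sx = len_ir[0] / len_vr[0]          # horizontal IR/VR scale from L_VR, L_IR
    sy = len_ir[1] / len_vr[1]          # vertical scale from W_VR, W_IR
    off_x = c_ir[0] - sx * c_vr[0]      # residual center offset in IR pixels
    off_y = c_ir[1] - sy * c_vr[1]
    # subtract the baseline parallax at Lc, convert pixels to degrees
    alpha = off_x / ppd_h - math.degrees(math.atan2(m, Lc))
    beta = off_y / ppd_v - math.degrees(math.atan2(n, Lc))
    return alpha, beta
```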
  • the second calibration method includes the following steps:
  • According to the registration model of the visible light image and the infrared image, the lateral included angle α and the longitudinal included angle β of the two optical axes can be calculated as follows:
  • In some embodiments, the following steps may also be included: substituting the horizontal and vertical included angles between the optical axis of the visible light camera and the optical axis of the infrared camera into the registration model to establish a height and width mapping model between the visible light image and the infrared image; and, based on multiple different horizontal distances between the target object and the visible light camera, establishing a mapping relationship between the heights of the designated area of the target object in multiple groups of visible light images and the horizontal distance between the target object and the visible light camera.
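The second part, the mapping between the height of the designated area in the visible light image and the horizontal distance, can be sketched as a nearest-sample lookup built from (height, distance) pairs recorded at several known distances. This is a minimal illustration; the function name is an assumption.

```python
def build_height_to_distance_table(samples):
    """samples: list of (area_height_px, distance_m) pairs measured at
    several known horizontal distances. Returns a lookup function that
    picks the distance whose recorded height is closest to the observed
    height."""
    table = sorted(samples)  # ascending pixel height
    def lookup(h_px):
        return min(table, key=lambda s: abs(s[0] - h_px))[1]
    return lookup
```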
  • When the registration model of this embodiment is applied to a human body temperature measurement scene, the following height and width mapping model between the visible light image and the infrared image is established:
  • where A, C, β, n, and L are all known, and the remaining coefficient is a configuration parameter whose range is 0.1 to 1.
  • In some embodiments, step S104 may include: obtaining the center position coordinates of the specified area of the target object in the visible light image, together with the height and width of that area in the visible light image; looking up, according to that height and width, the corresponding horizontal distance value between the target object and the visible light camera in the height and width mapping model; inputting the corresponding horizontal distance value and the center position coordinates of the specified area into the registration model to calculate the center position coordinates of the specified area of the target object in the infrared image; inputting the height and width of the specified area in the visible light image into the height and width mapping model to calculate the height and width of the specified area in the infrared image; and determining the specified area of the target object in the infrared image from the calculated center position coordinates, height, and width.
  • In some embodiments, the method may further include: acquiring the highest temperature value in the specified area of the target object in the infrared image, and annotating it at a specified location of the specified area of the target object in the visible light image.
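Putting the pieces of step S104 together, a hedged sketch of one per-object fusion pass: the three models established above (distance lookup, registration model, height/width mapping) are passed in as callables, and the infrared frame is treated as a 2-D list of temperatures. All names are illustrative.

```python
def fuse_face_temperature(vr_box, distance_lookup, register, map_hw, ir_frame):
    """One fusion pass for a detected area (sketch of step S104).

    vr_box: (cx, cy, h, w) of the area in the visible light image.
    Returns ((cx, cy, h, w) in the IR image, highest temperature inside it).
    """
    cx, cy, h, w = vr_box
    L = distance_lookup(h)                    # distance from area height
    ix, iy = register(cx, cy, L)              # center in the IR image
    ih, iw = map_hw(h, w, L)                  # size in the IR image
    x0, x1 = int(ix - iw / 2), int(ix + iw / 2)
    y0, y1 = int(iy - ih / 2), int(iy + ih / 2)
    region = [row[max(x0, 0):x1] for row in ir_frame[max(y0, 0):y1]]
    max_temp = max(max(row) for row in region if row)
    return (ix, iy, ih, iw), max_temp
```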
  • Fig. 2 is a structural block diagram of a device for registering and fusing visible light images and infrared images according to an embodiment of the present invention.
  • The apparatus is located on a device equipped with a visible light camera and an infrared camera, where the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the plane of the front view of the device. As shown in FIG. 2, the apparatus includes a registration model building module 210 and an image fusion module 220.
  • The registration model building module 210 is configured to establish a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and the infrared image of the target object collected by the infrared camera, according to the relative spatial positions of the two cameras and the horizontal distance between the target object and the visible light camera.
  • the image fusion module 220 is configured to perform registration and fusion of the visible light image and the infrared image of the target object according to the registration model.
  • each of the above-mentioned modules can be implemented by software or hardware.
  • In some embodiments, this can be implemented in the following manner, but is not limited to it: the above-mentioned modules are all located in the same processor, or the above-mentioned modules are respectively located in multiple processors.
  • the optical axis of the infrared camera is perpendicular to the plane where the front view of the device is located.
  • FIG. 7 is a flow chart of a method for registration and fusion of visible light images and infrared images according to an embodiment of the present invention. As shown in FIG. 7 , the process includes the following steps:
  • Step S702: establish a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and the infrared image of the target object collected by the infrared camera, according to the relative spatial positions of the two cameras and the horizontal distance between the target object and the visible light camera;
  • Step S704: register and fuse the visible light image and the infrared image of the target object according to the registration model.
  • In step S702 of this embodiment, for example, the following registration model can be established:
  • where A, B, C, and D are the first, second, third, and fourth conversion parameters, respectively; m, n, and d are the relative distances between the visible light camera and the infrared camera along the X, Y, and Z axes, respectively; α and β are the horizontal and vertical included angles between the optical axes of the two cameras; L is the horizontal distance between the target object and the visible light camera; (x_VR, y_VR) are the pixel coordinates of the visible light image; and (x_IR, y_IR) are the pixel coordinates of the infrared image.
  • The registration model can be used for fast registration and fusion of the coordinate positions of objects in the two camera images. From this registration model it can be seen that the non-zero entries of the registration model matrix are not fixed values but functions of the horizontal distance L between the target object and the visible light camera. Therefore, the non-zero entries cannot be determined simply from a single frame captured at one specific horizontal distance between the visible light camera and the target object.
  • the horizontal angle and longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera can also be calibrated in the following two ways:
  • the first calibration method includes the following steps:
  • The horizontal included angle α and the longitudinal included angle β between the optical axes of the two cameras in the registration model can be calibrated by the following formulas:
  • where L_C is the first set distance; (x_VR-C, y_VR-C) are the coordinates of the same position of the first reference object in the visible light image; and (x_IR-C, y_IR-C) are the coordinates of the same position of the first reference object in the infrared image.
  • the second calibration method includes the following steps:
  • According to the registration model of the visible light image and the infrared image, the lateral included angle α and the longitudinal included angle β of the two optical axes can be calculated as follows:
  • In some embodiments, the following steps may also be included: substituting the horizontal and vertical included angles between the optical axis of the visible light camera and the optical axis of the infrared camera into the registration model to establish a height and width mapping model between the visible light image and the infrared image; and, based on multiple different horizontal distances between the target object and the visible light camera, establishing a mapping relationship between the heights of the designated area of the target object in multiple groups of visible light images and the horizontal distance between the target object and the visible light camera.
  • When the registration model of this embodiment is applied to a human body temperature measurement scene, the following height and width mapping model between the visible light image and the infrared image is established:
  • where A, C, β, n, and L are all known, and the remaining coefficient is a configuration parameter whose range is 0.1 to 1.
  • In some embodiments, step S704 may include: obtaining the center position coordinates of the specified area of the target object in the visible light image, together with the height and width of that area in the visible light image; looking up, according to that height and width, the corresponding horizontal distance value between the target object and the visible light camera in the height and width mapping model; inputting the corresponding horizontal distance value and the center position coordinates of the specified area into the registration model to calculate the center position coordinates of the specified area of the target object in the infrared image; inputting the height and width of the specified area in the visible light image into the height and width mapping model to calculate the height and width of the specified area in the infrared image; and determining the specified area of the target object in the infrared image from the calculated center position coordinates, height, and width.
  • In some embodiments, the method may further include: acquiring the highest temperature value in the specified area of the target object in the infrared image, and annotating it at a specified location of the specified area of the target object in the visible light image.
  • a device for registering and fusing visible light images and infrared images is also provided.
  • the device is used to implement the above embodiments and preferred implementation modes, and those that have already been described will not be repeated.
  • the term "module" may be a combination of software and/or hardware that realizes a predetermined function.
  • Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
  • Fig. 8 is a structural block diagram of an apparatus for registration and fusion of a visible light image and an infrared image according to an embodiment of the present invention
  • The apparatus is located on a device equipped with a visible light camera and an infrared camera, where the optical axis of the infrared camera is perpendicular to the plane of the front view of the device, as shown in FIG. 8.
  • the device includes a registration model building module 810 and an image fusion module 820 .
  • The registration model building module 810 is configured to establish a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and the infrared image of the target object collected by the infrared camera, according to the relative spatial positions of the two cameras and the horizontal distance between the target object and the visible light camera.
  • the image fusion module 820 is configured to perform registration and fusion of the visible light image and the infrared image of the target object according to the registration model.
  • each of the above-mentioned modules can be implemented by software or hardware.
  • In some embodiments, this can be implemented in the following manner, but is not limited to it: the above-mentioned modules are all located in the same processor, or the above-mentioned modules are respectively located in multiple processors.
  • An embodiment of the present invention provides a method for registration and fusion of visible light and infrared images. This method is applied to a device configured with a visible light camera and an infrared camera.
  • Figure 3 and Figure 4 are both schematic diagrams of the horizontal positions of the visible light camera and the infrared camera on the device according to embodiments of the present invention. Figure 3 shows the device area 1, the infrared camera 2, the visible light camera 3, the horizontal line 4 of the front view plane of the device, the infrared camera optical axis 5, the visible light camera optical axis 6, and the lateral included angle 7 between the two optical axes.
  • In Figure 3, the optical axis 6 of the visible light camera is perpendicular to the front view plane of the device; the infrared camera 2 is located above the visible light camera 3, and its optical axis 5 is not perpendicular to the front view plane of the device but intersects the optical axis 6 of the visible light camera.
  • Figure 4 likewise shows the device area 1, the infrared camera 2, the visible light camera 3, the horizontal line 4 of the front view plane of the device, the infrared camera optical axis 5, the visible light camera optical axis 6, and the lateral included angle 7 between the two optical axes. In Figure 4, the optical axis 6 of the visible light camera is perpendicular to the plane of the front view of the device; the infrared camera 2 is located above the visible light camera 3, and its optical axis 5 is not perpendicular to that plane but intersects the optical axis 6 of the visible light camera.
  • In both arrangements, the optical axis of the visible light camera is perpendicular to the plane of the front view of the device, while the optical axis of the infrared camera may not be perpendicular to that plane. The relative roll angle of the two cameras is 0 (that is, the same object does not appear rotated between the two pictures).
  • the registration and fusion of visible light and infrared images may include the following steps:
  • Step S501 establishing a registration model of a visible light image and an infrared image.
  • In this step, a registration model between the visible light image and the infrared image is established according to: the relative distances m, n, and d of the spatial positions of the visible light camera and the infrared camera along the X, Y, and Z axes; the horizontal and vertical field angles; the display resolutions; the lateral included angle between the two optical axes (that is, the angle between the ZOY planes of the two cameras); the longitudinal included angle between the two optical axes (that is, the angle between the ZOX planes of the two cameras); and the horizontal distance L between the target object and the visible light camera.
  • A, B, C, and D are conversion parameters
  • m, n, and d are the relative distances of the X, Y, and Z axes of the two cameras, which are known constants.
  • the horizontal distance L between the target object and the visible light camera is the amount of change.
  • x VR and y VR are the pixel coordinate values of the visible light image
  • x IR and y IR are the pixel coordinate values of the infrared image.
  • The conversion parameters A, B, C, and D can be obtained from the horizontal and vertical field angles of the visible light camera, the horizontal and vertical field angles of the infrared camera, and the display resolution parameters of the infrared and visible light cameras.
  • w VR is the horizontal display resolution of the visible light camera
  • h VR is the vertical display resolution of the visible light camera
  • w IR is the horizontal display resolution of the infrared camera
  • h IR is the vertical display resolution of the infrared camera
  • the remaining four symbols are, respectively, the horizontal field angle of the visible light camera, the vertical field angle of the visible light camera, the horizontal field angle of the infrared camera, and the vertical field angle of the infrared camera.
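The exact expressions for A, B, C, and D are given as figures in the original, so the following is one plausible reading for illustration only: A and C as the horizontal and vertical pixels-per-degree ratios between the two sensors, and B and D as the IR pixels-per-degree factors used to convert angular offsets into IR pixels. The function name and this decomposition are assumptions.

```python
def conversion_params(w_vr, h_vr, fov_vr_h, fov_vr_v,
                      w_ir, h_ir, fov_ir_h, fov_ir_v):
    """Hypothetical conversion parameters from display resolutions and
    field angles (degrees). A/C: IR-to-VR pixels-per-degree ratios;
    B/D: IR pixels per degree, horizontal and vertical."""
    A = (w_ir / fov_ir_h) / (w_vr / fov_vr_h)   # horizontal scale ratio
    C = (h_ir / fov_ir_v) / (h_vr / fov_vr_v)   # vertical scale ratio
    B = w_ir / fov_ir_h                          # IR pixels per degree (horizontal)
    D = h_ir / fov_ir_v                          # IR pixels per degree (vertical)
    return A, B, C, D
```

With identical sensors on both sides, A and C reduce to 1, which is a quick sanity check on the decomposition.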
  • Step S502 calibrate the horizontal angle and vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
  • the horizontal and vertical angles can be calibrated as follows:
  • the selection range of L_C can be [0.3 m, 7 m];
  • the target object can be a regular cuboid, cube, or a certain part of the human body, such as a human head
  • The measurement can be calculated automatically by the built-in software of the device, or calculated manually on the collected images; the collected images can be obtained through the server software connected to the device.
  • the horizontal length of the visible light image of the object is L VR
  • the vertical length is W VR
  • the horizontal length of the infrared image of the object is L IR
  • the vertical length is W IR ;
  • From these measurements, the lateral included angle α and the longitudinal included angle β of the two optical axes can be calculated:
  • Step S503 establishing the height and width mapping model of the visible light image and the infrared image as follows:
  • where A, C, β, n, and L are all known, and the remaining coefficient is a configuration parameter whose range is 0.1 to 1.
  • The established height and width mapping model further enables the size and scale relationship of objects to be fused quickly.
  • H_ir_face and W_ir_face can be obtained according to the height and width mapping model of the visible light image and the infrared image, and further, according to x_IR, y_IR, H_ir_face, and W_ir_face, the collected temperature information of the corresponding face area of the infrared image can be obtained.
  • Step S504 establishing a mapping table of the range intervals corresponding to different face heights in the visible light image and the horizontal distance L between the face and the visible light camera.
  • The H_vr_face range can be divided into multiple intervals, where H_{k+1} > H_k > H_{k-1} > ... > H_3 > H_2 > H_1 and L_1 > L_2 > L_3 > ... > L_{k-1} > L_k; that is, a larger face height in the visible light image corresponds to a smaller horizontal distance L.
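The mapping table above can be sketched as an interval lookup: pick the distance for the interval that contains the observed face height. The boundary heights and distances below are invented calibration values, not from the patent.

```python
import bisect

# Hypothetical calibration: face-height interval boundaries H_1 < H_2 < ...
# (pixels) and the matching distances L_1 > L_2 > ... (metres), monotone as
# in the mapping table above.
H = [20, 40, 60, 90, 130, 180]            # interval boundaries H_1..H_k
L = [6.0, 4.0, 3.0, 2.0, 1.5, 1.0, 0.5]   # L_1..L_{k+1}, decreasing

def distance_for_height(h_px):
    """Pick the distance L for the interval that contains h_px."""
    return L[bisect.bisect_right(H, h_px)]
```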
  • Step S505: the human face detection module outputs the center position coordinates (x_VR, y_VR) of one or more human faces in the visible light image, together with the height H_vr_face and width W_vr_face of each face frame picture. For each detected face, the corresponding distance L is found in the mapping model from the height or width of the face picture, and the distance L and the face center position coordinates (x_VR, y_VR) are input into the registration model of the visible light image and the infrared image to calculate the center position coordinates (x_IR, y_IR) of the corresponding face picture in the infrared image, until this calculation has been completed for all currently detected faces.
  • Step S506 input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate and obtain the height H ir_face and width W ir_face of the corresponding face picture.
• Step S507: according to the center position coordinates (x IR , y IR ), height H ir_face , and width W ir_face of the face picture in the infrared image corresponding to each detected face, determine the corresponding area of the infrared image; take the highest temperature within that infrared image area and record it as the face temperature of the corresponding person, and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light image.
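Step S507 above can be sketched as follows, assuming the infrared frame is available as a per-pixel temperature array; the function name and the centre-plus-extent rectangular ROI convention are assumptions:

```python
import numpy as np

def face_temperature(temp_map, x_ir, y_ir, h_ir_face, w_ir_face):
    """Take the highest temperature inside the face region of the infrared
    frame, per step S507. temp_map is an H x W array of degrees Celsius;
    (x_ir, y_ir) is the face-centre pixel in the infrared image."""
    h_img, w_img = temp_map.shape
    # clamp the face rectangle to the frame boundaries
    x0 = max(0, int(x_ir - w_ir_face / 2))
    x1 = min(w_img, int(x_ir + w_ir_face / 2))
    y0 = max(0, int(y_ir - h_ir_face / 2))
    y1 = min(h_img, int(y_ir + h_ir_face / 2))
    roi = temp_map[y0:y1, x0:x1]
    return float(roi.max())   # highest temperature = recorded face temperature
```

The returned value would then be drawn around or inside the corresponding face frame of the visible light image.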
• In this embodiment, the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes have an included angle in the horizontal direction and also an included angle in the vertical direction), and the positions of the two cameras differ in the X-axis, Y-axis, and Z-axis directions; the problem of image registration and fusion under these conditions is solved.
• With the method of this embodiment, when the positions and area ranges of multiple faces are detected in the visible light picture, the face ranges corresponding to all detected faces in the infrared picture can be obtained quickly and accurately. Accurately acquiring the face temperature data within these ranges completely solves the problem of abnormal ambient temperature around a face interfering with face temperature detection, and greatly improves the efficiency and accuracy of face temperature detection.
  • Another embodiment of the present invention provides another method for registration and fusion of visible light and infrared images, which can be applied to devices equipped with visible light cameras and infrared cameras.
  • the spatial position relationship between the visible light camera and the infrared camera can be referred to in FIG. 3 and FIG. 4 .
  • the optical axis of the visible light camera is perpendicular to the plane of the front view of the device, and the optical axis of the infrared camera may not be perpendicular to the plane of the front view of the device.
• The relative rotation angle of the two cameras is 0 (that is, the position of the same object in the picture does not rotate).
  • the shooting scene is that a helmet equipped with a visible light camera and an infrared camera shoots a face in front of the helmet.
  • the registration and fusion of visible light and infrared images provided in this embodiment may include the following steps:
  • Step S601 establishing a registration model of a visible light image and an infrared image.
• In this step, a registration model of the visible light image and the infrared image is established according to the relative distances of the two cameras along the X, Y, and Z axes, the horizontal and vertical viewing angles, the display resolutions, the lateral and longitudinal included angles of the two optical axes, and the horizontal distance L between the target object and the visible light camera.
  • A, B, C, and D are conversion parameters
  • m, n, and d are the relative distances of the X, Y, and Z axes of the two cameras, which are known constants.
• are the horizontal and vertical included angles between the optical axes of the two cameras; the horizontal distance L between the target object and the visible light camera is the variable.
  • x VR and y VR are the pixel coordinate values of the visible light image
  • x IR and y IR are the pixel coordinate values of the infrared image.
  • the conversion parameters A, B, C, and D can be obtained by calculating the horizontal and vertical viewing angles of visible light, the horizontal and vertical viewing angles of infrared light, and the display resolution parameters of infrared and visible light.
  • w VR is the horizontal display resolution of the visible light camera
  • h VR is the vertical display resolution of the visible light camera
  • w IR is the horizontal display resolution of the infrared camera
  • h IR is the vertical display resolution of the infrared camera
  • is the horizontal field of view of the visible light camera
  • is the vertical viewing angle of the visible light camera
  • is the horizontal viewing angle of the infrared camera
  • is the vertical viewing angle of the infrared camera.
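The formulas for the conversion parameters appear in the source only as figures. A plausible sketch, assuming A and C are pixels-per-degree ratios between the infrared and visible sensors and B and D are the infrared sensor's own pixels-per-degree factors, is:

```python
def conversion_params(w_vr, h_vr, w_ir, h_ir,
                      fov_vr_h, fov_vr_v, fov_ir_h, fov_ir_v):
    """Hypothetical conversion parameters A, B, C, D.

    The patent gives these formulas only as figures; this sketch assumes
    each parameter is derived from the pixels-per-degree of the two
    sensors, horizontally (A, B) and vertically (C, D).
    """
    ppd_vr_h = w_vr / fov_vr_h   # visible pixels per degree, horizontal
    ppd_ir_h = w_ir / fov_ir_h   # infrared pixels per degree, horizontal
    ppd_vr_v = h_vr / fov_vr_v   # visible pixels per degree, vertical
    ppd_ir_v = h_ir / fov_ir_v   # infrared pixels per degree, vertical
    A = ppd_ir_h / ppd_vr_h      # horizontal pixel scale, visible -> infrared
    C = ppd_ir_v / ppd_vr_v      # vertical pixel scale, visible -> infrared
    B = ppd_ir_h                 # infrared angular-offset-to-pixel factor (horizontal)
    D = ppd_ir_v                 # infrared angular-offset-to-pixel factor (vertical)
    return A, B, C, D
```

With, say, a 1920x1080/60-degree visible camera and a 384x288/50-degree infrared camera, A comes out well below 1, reflecting the infrared sensor's much coarser pixel pitch per degree.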
  • Step S602 calibrate the horizontal angle and vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
  • the selection range of LC can be [0.3m ⁇ 7m];
• the lateral included angle of the two optical axes and the longitudinal included angle of the two optical axes can then be calculated as, respectively:
• With x VR , y VR , x IR , and y IR all equal to 0, the lateral included angle of the two optical axes and the longitudinal included angle of the two optical axes can be calculated according to the registration model of the visible light image and the infrared image as:
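The special case in step S602 can be sketched as follows. The geometry is an assumption (the patent's own formulas are shown only as figures): when a target at horizontal distance L C appears at pixel (0, 0) in both images, both optical axes pass through the same target point, and the included angles follow from the camera offsets by triangulation:

```python
import math

def calibrate_axis_angles(m, d, n, L_c):
    """Hypothetical calibration of the lateral and longitudinal included
    angles between the two optical axes.

    m, d, n: relative camera offsets along X, Y, Z (metres, assumed known)
    L_c:     horizontal distance of the calibration target (metres)
    Assumes both axes intersect at the target point at distance L_c.
    """
    phi = math.atan2(m, L_c - n)    # lateral (horizontal) included angle, radians
    beta = math.atan2(d, L_c - n)   # longitudinal (vertical) included angle, radians
    return phi, beta
```

Once calibrated with a target inside the stated [0.3 m, 7 m] range, the angles are fixed constants of the device and need not be recomputed per frame.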
  • Step S603 establishing the height and width mapping model of the visible light image and the infrared image as follows:
• A, C, n, and L are all known; the remaining quantity in the model is a configuration parameter whose value ranges from 0.1 to 1.
• The established height and width mapping model further enables the size and scale relationship of objects to be fused quickly.
• H ir_face and W ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image; further, from x IR , y IR , H ir_face , and W ir_face , the collected temperature information of the corresponding face area of the infrared image can be obtained.
• Step S605: the face detection module outputs the center position coordinates (x VR , y VR ) of one or more faces in the visible light image, together with the height H vr_face and width W vr_face of each face frame picture. For each detected face, the corresponding distance L value is found in the mapping model from the height or width of the face picture; the distance L value and the face center position coordinate values (x VR , y VR ) are then input into the registration model of the visible light image and the infrared image to calculate the center position coordinate values (x IR , y IR ) of the face picture in the corresponding infrared image, until the center position coordinate values (x IR , y IR ) have been calculated for the infrared images corresponding to all currently detected faces.
  • Step S606 input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate and obtain the height H ir_face and width W ir_face of the corresponding face picture.
• Step S607: according to the center position coordinates (x IR , y IR ), height H ir_face , and width W ir_face of the face picture in the infrared image corresponding to each detected face, determine the corresponding area of the infrared image; take the highest temperature within that infrared image area and record it as the face temperature of the corresponding person, and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light image.
• In this embodiment, the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes have an included angle in the horizontal direction and also an included angle in the vertical direction), and the positions of the two cameras differ in the X-axis, Y-axis, and Z-axis directions; the problem of image registration and fusion under these conditions is solved, and the problem of abnormal ambient temperature around a face interfering with face temperature detection in temperature measurement applications is further solved.
• The technical solution provided in this embodiment solves the problem of image registration and fusion when the central optical axes of the visible light camera and the infrared camera are not parallel and the positions of the two cameras differ in the X-axis, Y-axis, and Z-axis directions, and it solves the problem of abnormal ambient temperature around a face interfering with face temperature detection in temperature measurement applications.
• When the technical solution provided by this embodiment is applied to the position and area range data of multiple faces detected in the visible light picture, the face ranges corresponding to all detected faces in the infrared picture can be obtained quickly and accurately, and the face temperature data within those ranges can be further acquired accurately. This completely solves the problem of abnormal ambient temperature around a face interfering with face temperature detection, and greatly improves the efficiency and accuracy of face temperature detection.
• The calibration process of the image registration and fusion model provided in this embodiment is simple and fast, and avoids a large amount of complicated computation. Compared with other image fusion algorithms that require substantial computing resources, it requires fewer computing resources and is more efficient.
• When fusion processing of the visible light image and the infrared light image is performed, the correspondence between each coordinate position of the visible light image and each coordinate position of the infrared image is obtained. According to this correspondence, all coordinate pixels can be traversed to fuse the two images into one, and the fused image can then be displayed or stored.
  • the fusion range can be calculated, and then image processing is only performed on the fusion range, so as to save computing resources and increase image fusion speed.
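The traversal restricted to the fusion range can be sketched as follows. The blending weight, the callable registration mapping, and the rectangular fusion range are illustrative assumptions, not the patent's own formulation:

```python
def fuse_images(vr_img, ir_img, map_vr_to_ir, fusion_range, alpha=0.5):
    """Fuse a visible frame and an infrared frame into one image.

    vr_img, ir_img: 2-D lists of pixel intensities
    map_vr_to_ir:   callable (x, y) -> (x_ir, y_ir), the registration model
    fusion_range:   (x0, x1, y0, y1) inclusive bounds in visible coordinates;
                    only pixels inside it are processed, saving computation
    """
    x0, x1, y0, y1 = fusion_range
    fused = [row[:] for row in vr_img]          # start from the visible frame
    for y in range(len(vr_img)):
        for x in range(len(vr_img[0])):
            if not (x0 <= x <= x1 and y0 <= y <= y1):
                continue                        # outside fusion range: skip
            xi, yi = map_vr_to_ir(x, y)
            if 0 <= yi < len(ir_img) and 0 <= xi < len(ir_img[0]):
                fused[y][x] = (1 - alpha) * vr_img[y][x] + alpha * ir_img[yi][xi]
    return fused
```

Skipping pixels outside the fusion range is what yields the computing-resource saving and speed increase described above.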
  • FIG. 13 and FIG. 14 show schematic diagrams of arrangement of cameras in an image acquisition and processing method according to an embodiment of the present invention.
• A visible light camera 01 and an infrared light camera 02 are arranged in the device 03. The optical axis 11 of the visible light camera 01 is perpendicular to the front view plane 31 of the device 03, and the infrared light camera 02 and the visible light camera 01 are spaced apart.
  • the projections of the infrared camera 02 and the visible light camera 01 on the X, Y, and Z axes are spaced apart from each other, and the projection distance on the Z axis is n.
  • the optical axis deviation and spacing of the visible light camera 01 and the infrared light camera 02 should not be too large, so as to ensure that the position of the same object in the picture does not rotate, reduce fusion registration deviation, and then improve the reliability of the obtained fusion area.
  • the image acquisition and processing method of the embodiment of the present invention can guarantee the image acquisition and processing efficiency when the visible light camera 01 and the infrared light camera 02 have a certain deviation, but its specific installation requirements are determined according to the actual situation, and are not specifically limited here.
• The optical axis 21 of the infrared camera 02 intersects the vertical axis of the front view plane of the device 03. The included angle between the projections of the optical axis 21 of the infrared camera 02 and the optical axis 11 of the visible light camera on the ZOX plane, and the included angle of their projections on the ZOY plane, are both tested and calibrated in advance. Compared with requiring the optical axis 21 of the infrared camera 02 to be parallel to the optical axis 11 of the visible light camera 01, this matches the actual situation of cameras in general application scenarios; the hardware configuration of the device does not need to be adjusted, and application is simple.
• Obtaining the included angle between the projections of the optical axis 21 of the infrared camera 02 and the optical axis 11 of the visible light camera 01 on the ZOX plane includes:
  • the measurement of the horizontal length and the vertical length of the same position of the object can be obtained by automatic measurement and calculation by software, or by manual measurement and calculation based on the collected images.
  • the registration model of visible light image and infrared light image is:
  • parameters A and C refer to the following.
• Obtaining the included angle between the projections of the optical axis 21 of the infrared camera 02 and the optical axis 11 of the visible light camera on the ZOX plane includes:
• When the pixel coordinates x VR and y VR of the visible light image at the specific position are 0, and the pixel coordinates x IR and y IR of the corresponding infrared light image are also 0, the registration model of the corresponding visible light image and infrared light image is:
• When the pixel coordinates x VR and y VR of the visible light image at a specific position are 0, and the pixel coordinates of the corresponding infrared light image are x IR = 200 and y IR = 100, the registration model of the corresponding visible light image and infrared light image is:
  • Fig. 17 shows a schematic flowchart of an image acquisition and processing method according to an embodiment of the present invention.
  • the image acquisition and processing method of the embodiment of the present invention includes:
  • Step S1701 Obtain an initial model of the fusion area based on the respective parameters and related parameters of the visible light camera and the infrared camera, so as to obtain the transition area according to the initial model of the fusion area.
  • the parameters of the initial model of the fusion area include:
• The transition region satisfies x 1 ≤ x VR ≤ x 2 and y 1 ≤ y VR ≤ y 2 , where x 1 , x 2 , y 1 , and y 2 are conversion variables used to simplify the formulas above. Here x VR and y VR are the pixel coordinates in the visible light image, the range obtained from the initial model of the fusion area is the transition area, and m, n, and d are the relative distances between the visible light camera and the infrared light camera along the X, Z, and Y axes, respectively.
  • L max is the farthest distance that the visible light camera can detect the corresponding object of the image
• is the distance between the optical axis of the infrared camera and the optical axis of the visible light camera on the ZOY plane
  • w IR is the horizontal display resolution of the infrared camera
  • h IR is the vertical display resolution of the infrared camera
• w VR is the horizontal display resolution of the visible light camera
• h VR is the vertical display resolution of the visible light camera
  • is the horizontal viewing angle of visible light
  • is the vertical viewing angle of visible light
  • is the horizontal viewing angle of infrared light
  • is the vertical viewing angle of infrared light
  • the transition area is a square
• x VR and y VR correspond to the pixel coordinates in the horizontal direction and the vertical direction, respectively.
• If the image resolution is M*N, the pixel coordinates in the horizontal direction correspond to the M parameter, and the pixel coordinates in the vertical direction correspond to the N parameter.
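Intersecting the transition region with the M*N visible resolution, as described above, can be sketched as follows (the function and parameter names are illustrative):

```python
def clip_transition_to_image(x1, x2, y1, y2, M, N):
    """Clip the transition region [x1, x2] x [y1, y2] from the initial
    fusion-area model against the visible image resolution M*N, where
    horizontal pixel coordinates run over M and vertical over N.
    Returns the final fusion area, or None if the regions do not overlap."""
    fx1, fx2 = max(0, x1), min(M - 1, x2)
    fy1, fy2 = max(0, y1), min(N - 1, y2)
    if fx1 > fx2 or fy1 > fy2:
        return None          # transition region lies outside the image
    return fx1, fx2, fy1, fy2
```

The returned rectangle is the fusion area within which subsequent image processing (and face recognition) is confined.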
  • FIG. 9 and FIG. 10 are schematic diagrams of horizontal positions of a visible light camera and an infrared camera on a device according to an embodiment of the present invention.
• FIG. 9 shows the device area 1, the infrared camera 2, the visible light camera 3, the horizontal line 4 of the front view plane of the device, the optical axis 5 of the infrared camera, the optical axis 6 of the visible light camera, and the lateral included angle 7 between the two optical axes.
  • the optical axis 5 of the infrared camera is perpendicular to the plane of the front view of the device, and the visible light camera 3 is located below the infrared camera 2, and its optical axis 6 is not perpendicular to the plane of the front view of the device, but intersects with the optical axis 5 of the infrared camera.
  • Figure 10 shows the equipment area 1, the infrared camera 2, the visible light camera 3, the equipment front view plane horizontal line 4, the infrared camera optical axis 5, the visible light camera optical axis 6, and the lateral angle 7 of the two optical axes.
  • the optical axis 5 of the infrared camera is perpendicular to the front view plane of the device
  • the visible light camera 3 is located below the infrared camera 2
  • the optical axis 6 of the visible light camera is not perpendicular to the front view plane of the device, but intersects with the optical axis 5 of the infrared camera.
  • the optical axis of the infrared camera is perpendicular to the plane of the front view of the device, and the optical axis of the visible light camera may not be perpendicular to the plane of the front view of the device.
• The relative rotation angle of the two cameras is 0 (that is, the position of the same object in the picture does not rotate).
  • the registration and fusion of visible light and infrared images may include the following steps:
  • Step S1101 establishing a registration model of a visible light image and an infrared image.
• In this step, a registration model of the visible light image and the infrared image is established according to the relative distances m, d, and n of the spatial positions of the visible light camera and the infrared camera along the X, Y, and Z axes, the horizontal and vertical field angles, the display resolutions, the lateral included angle of the two optical axes (that is, the included angle between the two cameras in the ZOY plane), the longitudinal included angle of the two optical axes (that is, the included angle between the two cameras in the ZOX plane), and the horizontal distance L between the target object and the visible light camera.
  • A, B, C, and D are conversion parameters
  • m, n, and d are the relative distances of the X, Y, and Z axes of the two cameras, which are known constants.
• are the horizontal and vertical included angles between the optical axes of the two cameras; the horizontal distance L between the target object and the visible light camera is the variable.
  • x VR and y VR are the pixel coordinate values of the visible light image
  • x IR and y IR are the pixel coordinate values of the infrared image.
  • the conversion parameters A, B, C, and D can be obtained by calculating the horizontal and vertical viewing angles of visible light, the horizontal and vertical viewing angles of infrared light, and the display resolution parameters of infrared and visible light.
  • w VR is the horizontal display resolution of the visible light camera
  • h VR is the vertical display resolution of the visible light camera
  • w IR is the horizontal display resolution of the infrared camera
  • h IR is the vertical display resolution of the infrared camera
  • is the horizontal field of view of the visible light camera
  • is the vertical viewing angle of the visible light camera
  • is the horizontal viewing angle of the infrared camera
  • is the vertical viewing angle of the infrared camera.
  • Step S1102 calibrate the horizontal angle and vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
  • the horizontal and vertical angles can be calibrated as follows:
  • the selection range of LC can be [0.5m ⁇ 7m];
• the target object can be a regular cuboid, a cube, or a part of the human body (such as the human head, facial features, or glasses); find the coordinates (x VR-C , y VR-C ) of the same position of the object in the visible light image and its coordinates (x IR-C , y IR-C ) in the infrared image;
• the lateral included angle of the two optical axes and the vertical included angle of the two optical axes can then be calculated:
  • Step S1103 establishing the height and width mapping model of the visible light image and the infrared image as follows:
• A, C, n, and L are all known; the remaining quantity in the model is a configuration parameter whose value ranges from 0.1 to 1.
• The established height and width mapping model further enables the size and scale relationship of objects to be fused quickly.
• H ir_face and W ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image; further, from x IR , y IR , H ir_face , and W ir_face , the collected temperature information of the corresponding face area of the infrared image can be obtained.
• Step S1104: establish a mapping table between the range intervals of different face heights in the visible light image and the horizontal distance L between the face and the visible light camera. For example, as shown in Table 3, the H vr_face range can be divided into multiple intervals, where H k+1 > H k > H k-1 > ... > H 3 > H 2 > H 1 and L 1 > L 2 > L 3 > ... > L k-1 > L k .
• Step S1105: the face detection module outputs the center position coordinates (x VR , y VR ) of one or more faces in the visible light image, together with the height H vr_face and width W vr_face of each face frame picture. For each detected face, the corresponding distance L value is found in the mapping model from the height or width of the face picture; the distance L value and the face center position coordinate values (x VR , y VR ) are then input into the registration model of the visible light image and the infrared image to calculate the center position coordinate values (x IR , y IR ) of the face picture in the corresponding infrared image, until the center position coordinate values (x IR , y IR ) have been calculated for the infrared images corresponding to all currently detected faces.
  • Step S1106 input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate and obtain the height H ir_face and width W ir_face of the corresponding face picture.
• Step S1107: according to the center position coordinates (x IR , y IR ), height H ir_face , and width W ir_face of the face picture in the infrared image corresponding to each detected face, determine the corresponding area of the infrared image; take the highest temperature within that infrared image area and record it as the face temperature of the corresponding person, and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light image.
• In this embodiment, the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes have an included angle in the horizontal direction and also an included angle in the vertical direction), and the positions of the two cameras differ in the X-axis, Y-axis, and Z-axis directions; the problem of image registration and fusion under these conditions is solved.
• With the method of this embodiment, when the positions and area ranges of multiple faces are detected in the visible light picture, the face ranges corresponding to all detected faces in the infrared picture can be obtained quickly and accurately. Accurately acquiring the face temperature data within these ranges completely solves the problem of abnormal ambient temperature around a face interfering with face temperature detection, and greatly improves the efficiency and accuracy of face temperature detection.
  • Yet another embodiment of the present invention provides another method for registration and fusion of visible light and infrared images, which can be applied to a device equipped with a visible light camera and an infrared camera.
  • the spatial positional relationship between the visible light camera and the infrared camera can be referred to FIG. 9 and FIG. 10 .
  • the optical axis of the infrared camera is perpendicular to the plane of the front view of the device, and the optical axis of the visible light camera may not be perpendicular to the plane of the front view of the device.
• The relative rotation angle of the two cameras is 0 (that is, the position of the same object in the picture does not rotate).
  • the shooting scene is that a helmet equipped with a visible light camera and an infrared camera shoots a face in front of the helmet.
  • the registration and fusion of visible light and infrared images provided in this embodiment may include the following steps:
  • Step S1201 establishing a registration model of a visible light image and an infrared image.
• In this step, a registration model of the visible light image and the infrared image is established according to the relative distances of the two cameras along the X, Y, and Z axes, the horizontal and vertical viewing angles, the display resolutions, the lateral and longitudinal included angles of the two optical axes, and the horizontal distance L between the target object and the visible light camera.
  • A, B, C, and D are conversion parameters
  • m, n, and d are the relative distances of the X, Y, and Z axes of the two cameras, which are known constants.
• are the horizontal and vertical included angles between the optical axis of the visible light camera and that of the infrared camera; the horizontal distance L between the target object and the visible light camera is the variable.
  • x VR and y VR are the pixel coordinate values of the visible light image
  • x IR and y IR are the pixel coordinate values of the infrared image.
  • the conversion parameters A, B, C, and D can be obtained by calculating the horizontal and vertical viewing angles of the visible light camera, the horizontal and vertical viewing angles of the infrared camera, and the display resolution parameters of the infrared camera and the visible light camera.
  • w VR is the horizontal display resolution of the visible light camera
  • h VR is the vertical display resolution of the visible light camera
  • w IR is the horizontal display resolution of the infrared camera
  • h IR is the vertical display resolution of the infrared camera
  • is the horizontal field of view of the visible light camera
  • is the vertical viewing angle of the visible light camera
  • is the horizontal viewing angle of the infrared camera
  • is the vertical viewing angle of the infrared camera.
  • Step S1202 calibrate the horizontal angle and vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
  • the selection range of LC can be [0.3m ⁇ 7m];
• the lateral included angle of the two optical axes and the longitudinal included angle of the two optical axes can then be calculated as, respectively:
• With x VR , y VR , x IR , and y IR all equal to 0, the lateral included angle of the two optical axes and the longitudinal included angle of the two optical axes can be calculated according to the registration model of the visible light image and the infrared image as:
  • Step S1203 establishing the height and width mapping model of the visible light image and the infrared image as follows:
• A, C, n, and L are all known; the remaining quantity in the model is a configuration parameter whose value ranges from 0.1 to 1.
• The established height and width mapping model further enables the size and scale relationship of objects to be fused quickly.
• H ir_face and W ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image; further, from x IR , y IR , H ir_face , and W ir_face , the collected temperature information of the corresponding face area of the infrared image can be obtained.
• Step S1205: the face detection module outputs the center position coordinate values (x VR , y VR ) of one or more faces in the visible light image, together with the height H vr_face and width W vr_face of each face frame picture. For each detected face, the corresponding distance L value is found in the mapping model from the height or width of the face picture; the distance L value and the face center position coordinate values (x VR , y VR ) are then input into the registration model of the visible light image and the infrared image to calculate the center position coordinate values (x IR , y IR ) of the face picture in the corresponding infrared image, until the center position coordinate values (x IR , y IR ) have been calculated for the infrared images corresponding to all currently detected faces.
  • Step S1206 input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate and obtain the height H ir_face and width W ir_face of the corresponding face picture.
• Step S1207: according to the center position coordinates (x IR , y IR ), height H ir_face , and width W ir_face of the face picture in the infrared image corresponding to each detected face, determine the corresponding area of the infrared image; take the highest temperature within that infrared image area and record it as the face temperature of the corresponding person, and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light image.
• In this embodiment, the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes have an included angle in the horizontal direction and also an included angle in the vertical direction), and the positions of the two cameras differ in the X-axis, Y-axis, and Z-axis directions; the problem of image registration and fusion under these conditions is solved.
  • the optical axis 21 of the infrared camera 02 is perpendicular to the plane of the front view of the device, and the optical axis 11 of the visible light camera 01 is not perpendicular to the plane of the front view of the device.
  • the initial model parameters of its fusion area include:
• The transition region (x VR , y VR ) satisfies x 1 ≤ x VR ≤ x 2 and y 1 ≤ y VR ≤ y 2 , where the other parameters are the same as in the foregoing embodiments and are not described in detail here. Subsequent processing of the transition region in this embodiment is the same as in the preceding embodiment and is not repeated.
  • Fig. 18 shows a schematic diagram of a partial interface of a device adopting an image acquisition and processing method according to an embodiment of the present invention.
  • the corresponding visible light image area 40 can cover the entire area of the device interface, and the fusion area 41 is smaller than the visible light image area 40.
• The image data within the range of the fusion area 41 is processed to obtain the information of collection object A, and the information of collection object A is displayed separately in the lower left corner of the interface; the information of collection object B outside the fusion area 41 is not collected, which effectively reduces the amount of data processing.
  • collection objects A and B are, for example, human faces: the infrared camera collects face temperature, and face recognition is performed only on the visible light image within the fusion area 41, so that object A can be quickly locked; the face temperature of the locked object A is then detected and the result is displayed separately in the lower left corner of the interface, which improves the efficiency of face recognition and temperature detection.
  • the image collection and processing method of the present invention uses a visible light camera and an infrared camera to collect images simultaneously, and obtains the range of the fusion region by comparing the initial fusion-region model with the visible light resolution; only the visible light image within the fusion region is analyzed to obtain the feature information of the collected object, which reduces the amount of data processing, saves computing resources, and improves image processing efficiency. It can effectively improve processing efficiency in image analysis tasks such as face recognition.
  • the initial model of the fusion area is obtained from the fixed and related parameters of the visible light camera and the infrared camera.
  • the fixed and related parameters of the visible light camera and the infrared camera do not need to be adjusted after calibration, which ensures convenience of use.
  • the technical solution provided in this embodiment can solve the problem of image registration and fusion when the central optical axes of the visible light camera and the infrared camera are not parallel and the positions of the two cameras in the X-axis, Y-axis, and Z-axis directions differ, and it solves the interference with face temperature detection caused by abnormal ambient temperature around the face in temperature measurement applications.
  • when the technical solution provided by this embodiment is applied to the position and area-range data of multiple human faces detected in the visible light picture, the corresponding face ranges of all detected faces in the infrared picture can be obtained quickly and accurately, and the face temperature data within those ranges can be further obtained precisely; this completely solves the interference with face temperature detection caused by abnormal ambient temperature around the face and greatly improves the efficiency and accuracy of face temperature detection.
  • the calibration process of the image registration and fusion model provided in this embodiment is simple and fast and avoids a large amount of complicated computation; compared with other image fusion algorithms, it requires fewer computing resources and is more efficient.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Provided in the present invention are a method and apparatus for processing a visible light image and an infrared image. The method and apparatus are applied to a device, which is configured with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the infrared camera is perpendicular to a front view plane of the device. The method comprises: according to the spatial relative positions of a visible light camera and an infrared camera, a conversion parameter, and the horizontal distance of a target object from the visible light camera, establishing a registration model for the coordinate position of a visible light image of the target object that is acquired by the visible light camera and the coordinate position of an infrared image of the target object that is acquired by the infrared camera; and according to the registration model, performing registration and fusion on the visible light image and the infrared image of the target object. In the present invention, by means of an established registration model, data of a visible light image and data of an infrared image in a device, which is configured with dual cameras, can be fused into one image.

Description

Method and device for processing visible light image and infrared image

Technical Field

The present invention relates to the field of image processing, and in particular to a method and device for processing visible light images and infrared images.

Background

At present there are many infrared temperature measurement and face detection and recognition devices. These are fixed-installation devices in which the infrared camera module is usually integrated with the visible light camera module, so that the relative position and relative angle of the two camera modules are fixed. For example, a dual-light (infrared and visible or white light) module design is adopted: the design fixes the central optical axes of the two camera modules in parallel, the centers of the two modules have the same Z-axis position, the vertical (Y-axis) heights are the same and fixed, and the horizontal (X-axis) relative position is fixed with a very small distance; alternatively, the vertical (Y-axis) height offset between the two module centers is fixed and the horizontal (X-axis) positions are the same and fixed. These fixed infrared temperature measurement and face detection and recognition devices share three characteristics: the central optical axes of the two camera modules are parallel; the zero-point coordinates of the two modules on the Z axis are the same; and the zero-point coordinates on the Y axis are the same, or those on the X axis are the same.

These characteristics favor the data fusion of the two different camera pictures, but in product design and in various practical solutions the infrared camera module and the visible light camera module are not placed at the same position (their X, Y, and Z zero-point coordinates differ). For example, in wearable application scenarios, the angle of the visible light camera or the infrared camera often needs to be adjusted for wearers of different heights and for different scenes, and the two camera modules are usually not implemented as a binocular module. As a result, the central optical axes of the two camera modules are not parallel (there is an included angle), and the X, Y, and Z zero-point coordinates of their spatial positions also differ. These conditions differ markedly from the three characteristics of fixed devices listed above, and the differences pose great challenges to fusing the pictures of the two cameras on a wearable device.

Although existing devices are equipped with an infrared camera and a visible light camera, they do not achieve registration and fusion of the two images: data analysis and processing can only be performed on one camera picture, and the device cannot automatically fuse the data of the visible light picture and the infrared picture into one image.
Summary of the Invention

The present invention provides a method and device for processing visible light images and infrared images, so as to at least solve the problem in the related art that a device equipped with dual cameras cannot automatically fuse the data of the visible light picture with the infrared picture.

One aspect of the present invention provides a method for processing a visible light image and an infrared image, applied to a device configured with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the front-view plane of the device, including the following steps:

establishing, according to the spatial relative position of the visible light camera and the infrared camera, conversion parameters, and the horizontal distance of the target object from the visible light camera, a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and of the infrared image of the target object collected by the infrared camera;

registering and fusing the visible light image and the infrared image of the target object according to the registration model.

Another aspect of the present invention provides a device for processing a visible light image and an infrared image, located on equipment configured with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the front-view plane of the equipment, including:

a registration model establishment module, configured to establish, according to the spatial relative position of the visible light camera and the infrared camera, conversion parameters, and the horizontal distance of the target object from the visible light camera, a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and of the infrared image of the target object collected by the infrared camera;

an image fusion module, configured to register and fuse the visible light image and the infrared image of the target object according to the registration model.

In the above embodiments of the present invention, a registration model between the coordinate positions of the visible light image and the infrared image of the target object is established according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance of the target object from the visible light camera, and the visible light image and the infrared image of the target object are registered and fused according to the registration model, thereby solving the problem that a device configured with visible light and infrared dual cameras cannot fuse the data of the visible light image with the infrared image.
Brief Description of the Drawings

Fig. 1 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention;

Fig. 2 is a structural block diagram of a device for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to another embodiment of the present invention;

Fig. 5 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention;

Fig. 6 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to another embodiment of the present invention;

Fig. 7 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention;

Fig. 8 is a structural block diagram of a device for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention;

Fig. 9 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to an embodiment of the present invention;

Fig. 10 is a schematic diagram of the horizontal positions of a visible light camera and an infrared camera according to another embodiment of the present invention;

Fig. 11 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention;

Fig. 12 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to another embodiment of the present invention;

Fig. 13 is a schematic diagram of the camera arrangement in the XOZ plane for an image acquisition and processing method according to an embodiment of the present invention;

Fig. 14 is a schematic diagram of the camera arrangement in the YOZ plane for an image acquisition and processing method according to an embodiment of the present invention;

Fig. 15 is a schematic diagram of the camera arrangement in the XOZ plane for an image acquisition and processing method according to another embodiment of the present invention;

Fig. 16 is a schematic diagram of the camera arrangement in the YOZ plane for an image acquisition and processing method according to another embodiment of the present invention;

Fig. 17 is a schematic flowchart of an image acquisition and processing method according to an embodiment of the present invention;

Fig. 18 is a schematic diagram of a partial interface of a device adopting an image acquisition and processing method according to an embodiment of the present invention.
Detailed Description

Hereinafter, the present invention will be described in detail with reference to the drawings and in combination with the embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.

It should be noted that the terms "first", "second", etc. in the description, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence.

The present invention provides a method for processing a visible light image and an infrared image, applied to a device configured with a visible light camera and an infrared camera. To implement the present invention, the optical axis of either the visible light camera or the infrared camera must be configured perpendicular to the plane of the device's front view.

First, the case where the optical axis of the visible light camera is perpendicular to the front-view plane of the device. Fig. 1 is a flowchart of a method for processing a visible light image and an infrared image according to an embodiment of the present invention. As shown in Fig. 1, the process includes the following steps:

Step S102: establishing, according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance of the target object from the visible light camera, a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and of the infrared image of the target object collected by the infrared camera;

Step S104: registering and fusing the visible light image and the infrared image of the target object according to the registration model.

In step S102 of this embodiment, a registration model such as the following can be established:
Figure PCTCN2022095838-appb-000001
where A, B, C, and D are the first, second, third, and fourth conversion parameters, respectively; m, n, and d are the relative spatial distances between the visible light camera and the infrared camera along the X, Y, and Z axes, respectively; φ (Figure PCTCN2022095838-appb-000002) and γ are the horizontal and vertical angles between the optical axes of the visible light camera and the infrared camera, respectively; L is the horizontal distance of the target object from the visible light camera; (x VR , y VR ) are the visible light image pixel coordinates; and (x IR , y IR ) are the infrared image pixel coordinates.
In this embodiment, the registration model can be used to quickly register and fuse the coordinate positions of objects in the two camera pictures. From this registration model it can be seen that the non-zero entries of the registration matrix are not fixed values: they are functions of the horizontal distance L between the target object and the visible light camera and change as L changes. Therefore, the non-zero entries of the registration matrix cannot be determined from images captured at one particular horizontal distance between the target object and the visible light camera.
Before the registration model of this embodiment can be applied, the unknown parameters in it must be calibrated, namely the horizontal angle φ (Figure PCTCN2022095838-appb-000003) and the vertical angle γ between the two optical axes in the registration model of the visible light image and the infrared image. For example, in this embodiment, the horizontal and vertical angles between the optical axis of the visible light camera and the optical axis of the infrared camera can be calibrated in the following two ways:

The first calibration method includes the following steps:

1) selecting a first reference object whose horizontal distance from the visible light camera is a first set distance;

2) collecting images of the first reference object simultaneously with the visible light camera and the infrared camera, and measuring the horizontal and vertical lengths of the same position of the first reference object in the visible light image and in the infrared image, respectively;

3) calibrating, in the registration model, the horizontal and vertical angles between the optical axis of the visible light camera and the optical axis of the infrared camera according to the first set distance and the measured horizontal and vertical lengths of the same position of the first reference object in the visible light image and the infrared image.
For example, the horizontal angle φ (Figure PCTCN2022095838-appb-000004) and the vertical angle γ between the optical axes of the two cameras in the registration model can be calibrated by the following formula:
Figure PCTCN2022095838-appb-000005
where L C is the first set distance, L VR and W VR are respectively the horizontal and vertical lengths of the same position of the first reference object in the visible light image, and L IR and W IR are respectively the horizontal and vertical lengths of the same position of the first reference object in the infrared image.
The second calibration method includes the following steps:
1) selecting a second reference object whose horizontal distance from the visible light camera is a second set distance;

2) adjusting the optical axis of the visible light camera so that the same position of the second reference object is located at a specific position in both the visible light image and the infrared image;

3) calibrating, in the registration model, the horizontal and vertical angles between the optical axis of the visible light camera and the optical axis of the infrared camera according to the second set distance and the coordinate values of the specific position.
For example, assuming that the coordinate values x VR , y VR , x IR , and y IR of the specific position are all 0, the horizontal angle φ (Figure PCTCN2022095838-appb-000006) and the vertical angle γ between the two optical axes can be calculated from the registration model of the visible light image and the infrared image as, respectively:
Figure PCTCN2022095838-appb-000007
As another example, assuming that the coordinate values x IR and y IR of the specific position are both 0 while x VR is 200 and y VR is 100, the horizontal angle φ (Figure PCTCN2022095838-appb-000008) and the vertical angle γ between the two optical axes can be calculated from the registration model of the visible light image and the infrared image as, respectively:
Figure PCTCN2022095838-appb-000009
In this embodiment, after calibrating the horizontal and vertical angles between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model, the following steps may also be included: substituting the calibrated horizontal and vertical angles into the registration model and establishing a height and width mapping model between the visible light image and the infrared image; and, based on multiple different horizontal distances between the target object and the visible light camera, establishing a mapping relationship between the height of the designated area of the target object in multiple groups of visible light images and the horizontal distance of the target object from the visible light camera.
For example, if the registration model of this embodiment is applied to a human body temperature measurement scene, the following height and width mapping model between the visible light image and the infrared image is established:
Figure PCTCN2022095838-appb-000010
where A, C, φ (Figure PCTCN2022095838-appb-000011), γ, n, and L are all known, and λ is a configuration parameter with range 0.1 < λ ≤ 1. In this embodiment, adjusting λ avoids interference from the background temperature of areas outside the face.
In this embodiment, step S104 may include: obtaining the coordinate values of the center position of the designated area of the target object in the visible light image, as well as the height and width of that designated area, and, according to the height and width of the designated area in the visible light image, finding in the height and width mapping model the corresponding horizontal distance between the target object and the visible light camera; inputting the corresponding horizontal distance and the center position coordinates of the designated area into the registration model to calculate the center position coordinates of the designated area of the target object in the infrared image; inputting the height and width of the designated area in the visible light image into the height and width mapping model to calculate the height and width of the designated area of the target object in the infrared image; and determining the designated area of the target object in the infrared image from the calculated center position coordinates and the calculated height and width.
In this embodiment, after determining the designated area of the target object in the infrared image, the method may further include: obtaining the highest temperature value within the designated area of the target object in the infrared image; and marking this temperature value at a designated location of the designated area of the target object in the visible light image.
Fig. 2 is a structural block diagram of a device for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention. The device is located on equipment configured with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera is perpendicular to the front-view plane of the equipment. As shown in Fig. 2, the device includes a registration model establishment module 210 and an image fusion module 220.

The registration model establishment module 210 is configured to establish, according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance of the target object from the visible light camera, a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and of the infrared image of the target object collected by the infrared camera.

The image fusion module 220 is configured to register and fuse the visible light image and the infrared image of the target object according to the registration model.

It should be noted that each of the above modules can be implemented by software or hardware. In the latter case, this can be achieved in the following ways, but is not limited to them: the above modules are all located in the same processor; or the above modules are located in multiple processors.
Second, the case where the optical axis of the infrared camera is perpendicular to the plane of the front view of the device.
Fig. 7 is a flowchart of a method for registering and fusing a visible light image and an infrared image according to an embodiment of the present invention. As shown in Fig. 7, the process includes the following steps:
Step S702: establishing, according to the spatial relative position of the visible light camera and the infrared camera and the horizontal distance of the target object from the visible light camera, a registration model between the coordinate positions of the visible light image of the target object collected by the visible light camera and of the infrared image of the target object collected by the infrared camera;

Step S704: registering and fusing the visible light image and the infrared image of the target object according to the registration model.

In step S702 of this embodiment, a registration model such as the following can be established:
Figure PCTCN2022095838-appb-000012
where A, B, C, and D are the first, second, third, and fourth conversion parameters, respectively; m, n, and d are the relative spatial distances between the visible light camera and the infrared camera along the X, Y, and Z axes, respectively; φ (Figure PCTCN2022095838-appb-000013) and γ are the horizontal and vertical angles between the optical axes of the visible light camera and the infrared camera, respectively; L is the horizontal distance of the target object from the visible light camera; (x VR , y VR ) are the visible light image pixel coordinates; and (x IR , y IR ) are the infrared image pixel coordinates.
在本实施例中,该配准模型可用于快速配准融合两个摄像头画面物体的坐标位置。从这个配准模型可得知,配准模型矩阵中的非0项不是固定的值,是目标物离可见光摄像头的水平距离L的函数,是随着目标物离可见光摄像头的水平距离L变化而变化,因此,无法通过选择某一个特定的目标物离可见光摄像头的水平距离场景采集的画面图像来确定配准模型矩阵中的非零项。In this embodiment, the registration model can be used to quickly register and fuse the coordinate positions of objects in the two camera images. It can be seen from this registration model that the non-zero entries of the model matrix are not fixed values but functions of the horizontal distance L between the target object and the visible light camera, varying as L varies. Therefore, the non-zero entries of the registration model matrix cannot be determined from images captured at a single specific horizontal distance between the target object and the visible light camera.
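The exact matrix entries of the registration model appear only in the patent's equation figures, so the sketch below is a hypothetical stand-in that illustrates only the structural point made above: every non-zero coefficient of the visible-to-infrared mapping depends on the target distance L, which is why a single fixed-distance calibration scene cannot determine the model. The coefficient forms and the `fx`/`fy` focal-scale parameters here are assumptions, not the patent's formulas.

```python
import math

def register_vr_to_ir(x_vr, y_vr, L, params):
    """Map a visible-light pixel (x_vr, y_vr) to infrared coordinates.

    Illustrative only: a parallax shift proportional to the camera offsets
    m, n that decays with the target distance L, plus an angular offset from
    the axis angles.  The patent's actual matrix is given in its figure
    PCTCN2022095838-appb-000012; B, D and the Z offset d are omitted here.
    """
    A, C = params["A"], params["C"]          # conversion (scale) parameters
    m, n = params["m"], params["n"]          # X/Y offsets between cameras
    phi, gamma = params["phi"], params["gamma"]  # lateral/longitudinal angles
    fx, fy = params["fx"], params["fy"]      # assumed focal-scale factors

    x_ir = A * (x_vr - (m / L) * fx) + fx * math.tan(phi)
    y_ir = C * (y_vr - (n / L) * fy) + fy * math.tan(gamma)
    return x_ir, y_ir
```

Note how the parallax term `(m / L) * fx` makes the mapping distance-dependent: the same visible-light pixel lands on different infrared pixels for near and far targets.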
在本实施例的配准模型应用前,需要标定配准模型中未知的参数,即标定可见光图像与红外图像的配准模型中两光轴的横向夹角(图PCTCN2022095838-appb-000014)和纵向交角γ。例如,在本实施例中,还可通过如下两种方式来标定所述可见光摄像头的光轴与红外摄像头的光轴的横向夹角和纵向交角:Before applying the registration model of this embodiment, the unknown parameters in the registration model need to be calibrated, namely the lateral angle between the two optical axes (Figure PCTCN2022095838-appb-000014) and the longitudinal angle γ in the registration model of the visible light image and the infrared image. For example, in this embodiment, the lateral and longitudinal angles between the optical axis of the visible light camera and the optical axis of the infrared camera can be calibrated in either of the following two ways:
第一种标定方式包括如下步骤:The first calibration method includes the following steps:
1)选取与所述可见光摄像头的水平距离为第一设定距离的第一参照物体;1) Selecting a first reference object whose horizontal distance from the visible light camera is a first set distance;
2)通过可见光摄像头和红外摄像头同时采集所述第一参照物体的图像,并获得所述第一参照物体的相同位置分别在可见光图像和红外图像中的坐标;2) Simultaneously collect images of the first reference object through the visible light camera and the infrared camera, and obtain the coordinates of the same position of the first reference object in the visible light image and the infrared image respectively;
3)根据所述第一设定距离、以及所述第一参照物体的相同位置分别在可见光图像和红外图像中的坐标在所述配准模型中标定所述可见光摄像头的光轴与红外摄像头的光轴的横向夹角和纵向交角。3) Calibrate, in the registration model, the lateral and longitudinal angles between the optical axis of the visible light camera and the optical axis of the infrared camera according to the first set distance and the coordinates of the same position of the first reference object in the visible light image and the infrared image respectively.
例如,可通过如下公式标定所述配准模型中两摄像头的光轴的横向夹角(图PCTCN2022095838-appb-000015)和纵向交角γ:For example, the lateral angle between the optical axes of the two cameras (Figure PCTCN2022095838-appb-000015) and the longitudinal angle γ in the registration model can be calibrated by the following formulas:
Figure PCTCN2022095838-appb-000016
Figure PCTCN2022095838-appb-000017
其中,L_C为第一设定距离,(x_VR-C, y_VR-C)为第一参照物体的相同位置在可见光图像中的坐标,(x_IR-C, y_IR-C)为第一参照物体的相同位置在红外图像中的坐标。Among them, L_C is the first set distance, (x_VR-C, y_VR-C) are the coordinates of the same position of the first reference object in the visible light image, and (x_IR-C, y_IR-C) are the coordinates of the same position of the first reference object in the infrared image.
第二种标定方式包括如下步骤:The second calibration method includes the following steps:
1)选取与所述可见光摄像头的水平距离为第二设定距离的第二参照物体;1) Selecting a second reference object whose horizontal distance from the visible light camera is a second set distance;
2)调整可见光摄像头的光轴使得所述第二参照物体的相同位置位于可见光图像和红外图像的特定位置;2) adjusting the optical axis of the visible light camera so that the same position of the second reference object is located at a specific position of the visible light image and the infrared image;
3)根据所述第二设定距离和所述特定位置的坐标值,在所述配准模型中标定所述可见光摄像头的光轴与红外摄像头的光轴的横向夹角和纵向交角。3) According to the second set distance and the coordinate value of the specific position, calibrate the horizontal angle and the vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model.
例如,假设特定位置的坐标值x_VR、y_VR、x_IR、y_IR都为0,根据可见光图像与红外图像的配准模型可计算出两光轴的横向夹角(图PCTCN2022095838-appb-000018)和两光轴纵向交角γ分别为:For example, assuming that the coordinate values x_VR, y_VR, x_IR, and y_IR of the specific position are all 0, the lateral angle between the two optical axes (Figure PCTCN2022095838-appb-000018) and the longitudinal angle γ between the two optical axes can be calculated from the registration model of the visible light image and the infrared image as, respectively:
Figure PCTCN2022095838-appb-000019
例如:假设特定位置的坐标值x_IR、y_IR都为0,x_VR为200、y_VR为100,根据可见光图像与红外图像的配准模型可计算出两光轴的横向夹角(图PCTCN2022095838-appb-000020)和两光轴纵向交角γ分别为:For example: assuming that the coordinate values x_IR and y_IR of the specific position are both 0, x_VR is 200, and y_VR is 100, the lateral angle between the two optical axes (Figure PCTCN2022095838-appb-000020) and the longitudinal angle γ between the two optical axes can be calculated from the registration model of the visible light image and the infrared image as, respectively:
Figure PCTCN2022095838-appb-000021
在本实施例中,在标定所述配准模型中所述可见光摄像头的光轴与红外摄像头的光轴的横向夹角和纵向交角之后,还可包括如下步骤:将所述可见光摄像头的光轴与红外摄像头的光轴的横向夹角和纵向交角代入所述配准模型中,建立所述可见光图像与红外图像的高度和宽度映射模型;基于所述目标物体与所述可见光摄像头的多个不同水平距离,建立多组可见光图像中所述目标物体的指定区域的高度与所述目标物体距离所述可见光摄像头的水平距离之间的映射关系。In this embodiment, after calibrating the lateral and longitudinal angles between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model, the following steps may also be included: substituting the lateral and longitudinal angles between the optical axes of the visible light camera and the infrared camera into the registration model to establish a height and width mapping model between the visible light image and the infrared image; and, based on multiple different horizontal distances between the target object and the visible light camera, establishing multiple groups of mapping relationships between the height of the designated area of the target object in the visible light image and the horizontal distance between the target object and the visible light camera.
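The "multiple groups of mapping relationships" described above can be collected as a simple calibration table. The sketch below only validates and stores (face height, distance) pairs measured at several known horizontal distances; the function name and the sample numbers are illustrative, not from the patent.

```python
def build_height_distance_table(samples):
    """Build the face-height -> horizontal-distance mapping of this
    embodiment from calibration samples taken at several known distances.

    `samples` is a list of (face_height_px, distance_m) pairs measured with
    the same visible light camera.  A face appears taller when it is closer,
    so after sorting by height the distances must be non-increasing; the
    check below catches inconsistent measurements.
    """
    table = sorted(samples)  # ascending face height
    for (h1, L1), (h2, L2) in zip(table, table[1:]):
        if not (h1 < h2 and L1 >= L2):
            raise ValueError("inconsistent calibration samples")
    return table
```

The resulting table is the raw material for the interval lookup of step S504 below.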
例如,如果将本实施例的配准模型应用于人体的测温场景中,则建立如下的可见光图像与红外图像的高度和宽度映射模型:For example, if the registration model of this embodiment is applied to the temperature measurement scene of the human body, the following height and width mapping model between the visible light image and the infrared image is established:
Figure PCTCN2022095838-appb-000022
其中,A、C、(图PCTCN2022095838-appb-000023)、γ、n、L都为已知,λ为配置参数,λ范围为0.1<λ≤1。在本实施例中,通过调整λ大小,可以避免人脸以外区域带来的背景温度带来干扰。Among them, A, C, the angle shown in Figure PCTCN2022095838-appb-000023, γ, n, and L are all known, and λ is a configuration parameter with the range 0.1<λ≤1. In this embodiment, by adjusting the size of λ, interference from the background temperature of areas other than the face can be avoided.
在本实施例中,步骤S704可包括:获取可见光图像中所述目标物体的指定区域的中心位置坐标值,以及所述可见光图像中所述目标物体的指定区域的高度和宽度,并根据所述可见光图像中所述目标物体的指定区域的高度和宽度,在所述高度和宽度映射模型中找到所述目标物体与所述可见光摄像头对应的水平距离值;将对应的水平距离值和所述目标物体的指定区域的中心位置坐标值输入到所述配准模型中,计算获得红外图像中所述目标物体的指定区域的中心位置坐标值;将可见光图像中所述目标物体的指定区域的高度和宽度输入到所述高度和宽度映射模型中,计算获得红外图像中所述目标物体的指定区域的高度和宽度;根据所述红外图像中所述目标物体的指定区域的中心位置坐标,以及所述红外图像中所述目标物体的指定区域的高度和宽度,确定所述红外图像中所述目标物体的指定区域。In this embodiment, step S704 may include: obtaining the center position coordinates of the designated area of the target object in the visible light image, as well as the height and width of that designated area, and finding, from the height and width mapping model, the horizontal distance value between the target object and the visible light camera according to the height and width of the designated area in the visible light image; inputting the corresponding horizontal distance value and the center position coordinates of the designated area of the target object into the registration model to calculate the center position coordinates of the designated area of the target object in the infrared image; inputting the height and width of the designated area of the target object in the visible light image into the height and width mapping model to calculate the height and width of the designated area of the target object in the infrared image; and determining the designated area of the target object in the infrared image according to the center position coordinates and the height and width of that designated area in the infrared image.
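The flow of step S704 can be sketched as follows. The three model callables are placeholders for the height-to-distance lookup, the registration model, and the height/width mapping model, whose concrete formulas appear only in the patent's equation figures; `lam` is the configuration parameter λ described above, used to shrink the box so that background pixels stay out of the temperature reading.

```python
def ir_face_box(vr_center, vr_h, vr_w, height_to_L, map_center, map_size, lam=0.8):
    """Sketch of step S704: locate the target-area box in the infrared image.

    `height_to_L` looks up the target distance from the area height in the
    visible image; `map_center` stands for the registration model and
    `map_size` for the height/width mapping model.  Returns an
    (x, y, width, height) box in infrared pixel coordinates.
    """
    assert 0.1 < lam <= 1.0                    # λ range from the text
    L = height_to_L(vr_h)                      # distance from visible height
    cx_ir, cy_ir = map_center(*vr_center, L)   # registration model
    h_ir, w_ir = map_size(vr_h, vr_w, L)       # height/width mapping model
    h_ir, w_ir = lam * h_ir, lam * w_ir        # λ-scaled region
    return (cx_ir - w_ir / 2, cy_ir - h_ir / 2, w_ir, h_ir)
```

With stub models the geometry is easy to check: a 40×30 visible box centered at (100, 80) under a half-scale mapping yields a 20×15 infrared box centered at (50, 40).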
在本实施例中,在确定所述红外图像中所述目标物体的指定区域之后,还可包括:获取所述红外图像中所述目标物体的指定区域中的最高温度值;将所述温度值标注在可见光图像中所述目标物体的指定区域的指定位置。In this embodiment, after determining the designated area of the target object in the infrared image, the method may further include: obtaining the highest temperature value in the designated area of the target object in the infrared image; and marking the temperature value at a designated position of the designated area of the target object in the visible light image.
在本实施例中还提供了一种可见光图像和红外图像的配准融合装置,该装置用于实现上述实施例及优选实施方式,已经进行过说明的不再赘述。如以下所使用的,术语“模块”可以实现预定功能的软件和/或硬件的组合。尽管以下实施例所描述的装置较佳地以软件来实现,但是硬件,或者软件和硬件的组合的实现也是可能并被构想的。In this embodiment, a device for registering and fusing visible light images and infrared images is also provided. The device is used to implement the above embodiments and preferred implementation modes, and those that have already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware are also possible and contemplated.
图8是根据本发明实施例的可见光图像和红外图像的配准融合装置的结构框图,该装置位于配置有可见光摄像头和红外摄像头的设备上,其中,所述红外摄像头的光轴与所述设备正视图平面垂直,如图8所示,该装置包括配准模型建立模块810和图像融合模块820。Fig. 8 is a structural block diagram of an apparatus for registration and fusion of visible light images and infrared images according to an embodiment of the present invention. The apparatus is located on a device equipped with a visible light camera and an infrared camera, wherein the optical axis of the infrared camera is perpendicular to the front-view plane of the device. As shown in Fig. 8, the apparatus includes a registration model building module 810 and an image fusion module 820.
配准模型建立模块810,用于根据所述可见光摄像头与红外摄像头的空间相对位置、以及目标物体距离所述可见光摄像头的水平距离建立所述可见光摄像头采集的目标物体的可见光图像与所述红外摄像头采集的目标物体的红外图像的坐标位置的配准模型。The registration model building module 810 is configured to establish a registration model for the coordinate positions of the visible light image of the target object captured by the visible light camera and the infrared image of the target object captured by the infrared camera, according to the relative spatial positions of the visible light camera and the infrared camera and the horizontal distance between the target object and the visible light camera.
图像融合模块820,用于根据所述配准模型将所述目标物体的可见光图像与红外图像进行配准融合。The image fusion module 820 is configured to perform registration and fusion of the visible light image and the infrared image of the target object according to the registration model.
需要说明的是,上述各个模块是可以通过软件或硬件来实现的,对于后者,可以通过以下方式实现,但不限于此:上述模块均位于同一处理器中;或者,上述模块分别位于多个处理器中。It should be noted that each of the above modules can be implemented by software or hardware. For the latter, this can be achieved in, but is not limited to, the following manner: the above modules are all located in the same processor; or, the above modules are respectively located in multiple processors.
为了便于对本发明所提供的技术方案的理解,下面将结合具体场景实施例进行详细描述。In order to facilitate the understanding of the technical solutions provided by the present invention, the following will describe in detail in conjunction with specific scenario embodiments.
以上给出了可见光摄像头和红外摄像头分别与设备正视图平面垂直时的图像处理方案,为了更清晰地对方案进行阐述,以下将给出在一设备上同时配置可见光摄像头和红外摄像头的多个实施例进行说明。The foregoing describes the image processing schemes for the cases where the visible light camera and the infrared camera are each perpendicular to the front-view plane of the device. To explain the solution more clearly, several embodiments of a device configured with both a visible light camera and an infrared camera are described below.
本发明的一个实施例提供了一种可见光和红外图像的配准融合方法。该方法应用于配置有可见光摄像头和红外摄像头的设备上。图3和图4均为根据本发明实施例的设备上的可见光摄像头和红外摄像头的水平位置示意图,其中,在图3中示出了设备区域1、红外摄像头2、可见光摄像头3、设备正视图平面水平线4、红外摄像头光轴5、可见光摄像头光轴6、两光轴的横向夹角7。An embodiment of the present invention provides a method for registration and fusion of visible light and infrared images. This method is applied to a device configured with a visible light camera and an infrared camera. Figure 3 and Figure 4 are both schematic diagrams of the horizontal position of the visible light camera and the infrared camera on the device according to an embodiment of the present invention, wherein Figure 3 shows the device area 1, the infrared camera 2, the visible light camera 3, and the front view of the device Plane horizontal line 4, infrared camera optical axis 5, visible light camera optical axis 6, and lateral angle 7 between the two optical axes.
如图3所示,可见光摄像头的光轴6与设备正视图平面垂直,红外摄像头2位于可见光摄像头3的上方,其光轴与设备正视图平面不垂直,而与可见光摄像头的光轴6相交。As shown in Figure 3, the optical axis 6 of the visible light camera is perpendicular to the front view plane of the device, and the infrared camera 2 is located above the visible light camera 3, and its optical axis is not perpendicular to the front view plane of the device, but intersects with the optical axis 6 of the visible light camera.
在图4中示出了设备区域1、红外摄像头2、可见光摄像头3、设备正视图平面水平线4、红外摄像头光轴5、可见光摄像头光轴6、两光轴的横向夹角7。如图4所示,可见光摄像头的光轴6与设备正视图平面垂直,红外摄像头2位于可见光摄像头3的上方,其光轴5与设备正视图平面不垂直,而与可见光摄像头的光轴6相交。Fig. 4 shows the device area 1, the infrared camera 2, the visible light camera 3, the equipment front view plane horizontal line 4, the infrared camera optical axis 5, the visible light camera optical axis 6, and the lateral angle 7 between the two optical axes. As shown in Figure 4, the optical axis 6 of the visible light camera is perpendicular to the plane of the front view of the device, and the infrared camera 2 is located above the visible light camera 3, and its optical axis 5 is not perpendicular to the plane of the front view of the device, but intersects with the optical axis 6 of the visible light camera .
如图3和图4所示,在本实施例中可见光摄像头的光轴与设备正视图平面垂直,红外摄像头的光轴与设备正视图平面可以不垂直,两个不同的摄像头画面中的同一物体的相对角度为0(即,画面中同一物体的位置没有发生旋转)。As shown in Fig. 3 and Fig. 4, in this embodiment the optical axis of the visible light camera is perpendicular to the front-view plane of the device, while the optical axis of the infrared camera may not be perpendicular to it; the relative angle of the same object in the two different camera images is 0 (that is, the position of the same object in the images is not rotated).
下面将结合采用可见光摄像头和红外摄像头进行测温的场景详细描述本实施例,当然本实施例提供的技术方案也可以应用于其它需图像融合的场景。如图5所示,本实施例提供的可见光和红外图像的配准融合可包括如下步骤:The following will describe this embodiment in detail in conjunction with a scene where a visible light camera and an infrared camera are used for temperature measurement. Of course, the technical solution provided by this embodiment can also be applied to other scenes that require image fusion. As shown in Figure 5, the registration and fusion of visible light and infrared images provided in this embodiment may include the following steps:
步骤S501,建立可见光图像与红外图像的配准模型。Step S501, establishing a registration model of a visible light image and an infrared image.
具体地,在本步骤中,可根据可见光和红外两摄像头的X、Y、Z轴空间位置相对距离m、d、n、水平和垂直视场角、显示分辨率大小、两光轴的横向夹角(即,两摄像头的ZOY平面的夹角)、两光轴纵向交角(即,两摄像头的ZOX平面的夹角)、目标物体距离可见光摄像头的水平距离L这些参数,建立可见光图像与红外图像的如下配准模型:Specifically, in this step, the following registration model of the visible light image and the infrared image can be established according to these parameters: the relative distances m, d, and n of the X, Y, and Z-axis spatial positions of the visible light and infrared cameras; the horizontal and vertical fields of view; the display resolutions; the lateral angle between the two optical axes (that is, the angle between the ZOY planes of the two cameras); the longitudinal angle between the two optical axes (that is, the angle between the ZOX planes of the two cameras); and the horizontal distance L between the target object and the visible light camera:
Figure PCTCN2022095838-appb-000024
其中,A、B、C、D为转换参数,m、n、d是两摄像头的X、Y、Z轴空间位置相对距离,是已知常数,(图PCTCN2022095838-appb-000025)和γ是两个摄像头的光轴的横向夹角与纵向交角,目标物体距离可见光摄像头的水平距离L是变化量。x_VR和y_VR是可见光图像像素坐标值,x_IR和y_IR是红外图像像素坐标值。Among them, A, B, C, and D are conversion parameters; m, n, and d are the relative distances of the X, Y, and Z-axis spatial positions of the two cameras and are known constants; the angle shown in Figure PCTCN2022095838-appb-000025 and γ are the lateral and longitudinal angles between the optical axes of the two cameras; and the horizontal distance L between the target object and the visible light camera is a variable. x_VR and y_VR are the visible light image pixel coordinate values, and x_IR and y_IR are the infrared image pixel coordinate values.
在本实施例中,转换参数A、B、C、D可由可见光水平和垂直视场角、红外水平和垂直视场角、红外和可见光显示分辨率大小参数计算获得。例如,In this embodiment, the conversion parameters A, B, C, and D can be calculated from the horizontal and vertical fields of view of the visible light camera, the horizontal and vertical fields of view of the infrared camera, and the display resolutions of the infrared and visible light cameras. For example:
Figure PCTCN2022095838-appb-000026
其中,w_VR是可见光摄像头水平显示分辨率,h_VR是可见光摄像头垂直显示分辨率,w_IR是红外摄像头水平显示分辨率,h_IR是红外摄像头垂直显示分辨率,α是可见光摄像头水平视场角,β是可见光摄像头垂直视场角,θ是红外摄像头水平视场角,φ是红外摄像头垂直视场角。Among them, w_VR is the horizontal display resolution of the visible light camera, h_VR is the vertical display resolution of the visible light camera, w_IR is the horizontal display resolution of the infrared camera, h_IR is the vertical display resolution of the infrared camera, α is the horizontal field of view of the visible light camera, β is the vertical field of view of the visible light camera, θ is the horizontal field of view of the infrared camera, and φ is the vertical field of view of the infrared camera.
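The patent's exact expressions for A, B, C, and D are given only in Figure PCTCN2022095838-appb-000026. As a hedged illustration of how such parameters could be computed from the resolutions and fields of view listed above, the sketch below uses a common pinhole-style choice: the ratio of pixels per unit tangent between the two sensors, with B and D taken as image-center offsets. This is an assumption, not the patent's formula.

```python
import math

def conversion_params(w_vr, h_vr, w_ir, h_ir, alpha, beta, theta, phi):
    """One plausible (assumed) form of the conversion parameters A, B, C, D.

    w_*/h_* are display resolutions in pixels; alpha/beta are the visible
    camera's horizontal/vertical fields of view in degrees, theta/phi the
    infrared camera's.  A and C are pixel-scale ratios; B and D are
    illustrative offsets to the infrared image centre.
    """
    A = (w_ir / (2 * math.tan(math.radians(theta) / 2))) / \
        (w_vr / (2 * math.tan(math.radians(alpha) / 2)))   # horizontal scale
    C = (h_ir / (2 * math.tan(math.radians(phi) / 2))) / \
        (h_vr / (2 * math.tan(math.radians(beta) / 2)))    # vertical scale
    B = w_ir / 2   # assumed principal-point offsets
    D = h_ir / 2
    return A, B, C, D
```

Under this assumed form, two cameras with identical resolution and field of view give A = C = 1, i.e. no rescaling between the two images.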
步骤S502,标定配准模型中可见光摄像头光轴和红外摄像头的光轴之间的横向夹角和纵向交角。在本实施例中,不用手动调整可见光与红外摄像头,可通过如下方式来标定横向夹角和纵向交角:Step S502, calibrate the horizontal angle and vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model. In this embodiment, instead of manually adjusting the visible light and infrared cameras, the horizontal and vertical angles can be calibrated as follows:
1)选择一个距离可见光摄像头的水平距离为L_C的目标物体。例如,L_C的选择范围可为[0.3m~7m];1) Select a target object whose horizontal distance from the visible light camera is L_C. For example, the selection range of L_C can be [0.3 m to 7 m];
2)可见光与红外摄像头同时采集该目标物体的图像(该目标物体可以是规则的长方体、正方体或者人体某部分,如人的头部),并且测量出物体相同位置的横向长度和纵向长度。测量方式可以是通过设备内置软件自动计算测量,也可以是人工对采集的图像进行手工计算测量,采集的图像可以通过与该设备相连接的服务器软件获得。物体可见光图像的横向长度为L_VR、纵向长度为W_VR,物体红外图像的横向长度为L_IR、纵向长度为W_IR;2) The visible light and infrared cameras simultaneously capture images of the target object (the target object can be a regular cuboid, a cube, or a part of the human body, such as a human head), and the lateral and longitudinal lengths of the same position of the object are measured. The measurement can be calculated automatically by the device's built-in software, or manually from the captured images, which can be obtained through the server software connected to the device. The lateral length of the visible light image of the object is L_VR and its longitudinal length is W_VR; the lateral length of the infrared image of the object is L_IR and its longitudinal length is W_IR;
3)根据可见光图像与红外图像的配准模型可计算两光轴的横向夹角(图PCTCN2022095838-appb-000027)和两光轴纵向交角γ:3) According to the registration model of the visible light image and the infrared image, the lateral angle between the two optical axes (Figure PCTCN2022095838-appb-000027) and the longitudinal angle γ between the two optical axes can be calculated:
Figure PCTCN2022095838-appb-000028
步骤S503,建立可见光图像与红外图像的高度和宽度映射模型如下:Step S503, establishing the height and width mapping model of the visible light image and the infrared image as follows:
Figure PCTCN2022095838-appb-000029
其中,A、C、(图PCTCN2022095838-appb-000030)、γ、n、L都为已知,λ为配置参数,λ范围为0.1<λ≤1。在本实施例中,通过调整λ大小,可以避免测温应用中,人脸以外区域带来的背景温度带来干扰。Among them, A, C, the angle shown in Figure PCTCN2022095838-appb-000030, γ, n, and L are all known, and λ is a configuration parameter with the range 0.1<λ≤1. In this embodiment, by adjusting the size of λ, interference from the background temperature of areas other than the face in temperature measurement applications can be avoided.
在本实施例中,通过建立的高度和宽度映射模型可以进一步快速融合物体的大小比例关系。例如,对于给定距离L的人脸图片中心所在的可见光图像中人脸图片高度H_VR-Face和宽度W_VR-Face,依据可见光图像与红外图像的高度和宽度映射比例模型就可获得H_ir_face和W_ir_face,并进一步根据x_IR、y_IR、H_ir_face、W_ir_face可获取红外图像人脸区域对应采集的温度信息。In this embodiment, the established height and width mapping model can further quickly fuse the size and proportion relationship of objects. For example, for the face picture height H_VR-Face and width W_VR-Face in the visible light image where the center of the face picture at a given distance L is located, H_ir_face and W_ir_face can be obtained according to the height and width mapping scale model of the visible light image and the infrared image, and the corresponding collected temperature information of the face area of the infrared image can be further obtained according to x_IR, y_IR, H_ir_face, and W_ir_face.
步骤S504,建立一个可见光图像中不同人脸高度对应的范围区间与人脸与可见光摄像头水平距离L的映射表,例如,如表1所示,可将H_vr_face范围区间划分为多个区间,其中,H_{k+1}>H_k>H_{k-1}>…>H_6>H_5>H_4>H_3>H_2>H_1,L_1>L_2>L_3>L_4>…>L_{k-2}>L_{k-1}>L_k。Step S504: establish a mapping table between the range intervals corresponding to different face heights in the visible light image and the horizontal distance L between the face and the visible light camera. For example, as shown in Table 1, the range of H_vr_face can be divided into multiple intervals, where H_{k+1}>H_k>H_{k-1}>…>H_6>H_5>H_4>H_3>H_2>H_1 and L_1>L_2>L_3>L_4>…>L_{k-2}>L_{k-1}>L_k.
表1 Table 1

H_vr_face | L
[H_1, H_2] | L_1
(H_2, H_3] | L_2
(H_3, H_4] | L_3
… | …
(H_{k-2}, H_{k-1}] | L_{k-2}
(H_{k-1}, H_k] | L_{k-1}
(H_k, H_{k+1}] | L_k
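The interval lookup described by Table 1 can be sketched with a sorted list of height boundaries and `bisect`. The boundary and distance numbers below are illustrative placeholders, not values from the patent; only the interval semantics ([H_1, H_2] closed, then half-open intervals (H_i, H_{i+1}]) follow the table.

```python
import bisect

# Boundaries H_1 < H_2 < ... and the distance for each interval;
# the numbers are illustrative, not from the patent.
H_BOUNDS = [40, 60, 80, 110, 150, 210]      # face height in pixels
L_VALUES = [5.0, 4.0, 3.0, 2.0, 1.0]        # metres, decreasing as faces grow

def lookup_distance(h_vr_face):
    """Table-1 style lookup: map a detected face height to the distance L."""
    if not H_BOUNDS[0] <= h_vr_face <= H_BOUNDS[-1]:
        raise ValueError("face height outside calibrated range")
    # interval (H_i, H_{i+1}] -> L_i; bisect_left makes the upper bound inclusive
    i = bisect.bisect_left(H_BOUNDS, h_vr_face) - 1
    return L_VALUES[max(i, 0)]
```

The `max(i, 0)` clamp makes the first interval closed on both ends, matching the [H_1, H_2] row of the table.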
步骤S505,人脸检测模块输出可见光图像中的一个或多个人脸中心位置坐标值(x_VR, y_VR),及人脸框图片的高度H_vr_face、宽度W_vr_face;将每个检测出来的人脸图片的高度或宽度在映射模型中找到对应的距离L值,将对应的距离L值和人脸中心位置坐标值(x_VR, y_VR)输入到可见光图像与红外图像的配准模型中,计算获得对应红外图像的人脸图片中心位置坐标值(x_IR, y_IR),直到计算完成当前检测出来的多个人对应的红外图像的人脸图片中心位置坐标值。Step S505: the face detection module outputs the center position coordinates (x_VR, y_VR) of one or more faces in the visible light image, as well as the height H_vr_face and width W_vr_face of each face frame picture. For each detected face picture, the corresponding distance L value is found from the mapping model according to its height or width; the corresponding distance L value and the face center position coordinates (x_VR, y_VR) are input into the registration model of the visible light image and the infrared image to calculate the center position coordinates (x_IR, y_IR) of the face picture in the corresponding infrared image, until the center position coordinates of the face pictures in the infrared images corresponding to all currently detected people have been calculated.
步骤S506,将每个检测出来的人脸图片的高度和宽度输入到可见光图像与红外图像的高度和宽度映射模型中,计算获得对应人脸图片的高度H_ir_face和宽度W_ir_face。Step S506: input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate the height H_ir_face and width W_ir_face of the corresponding face picture.
步骤S507,根据每个检测出来的人脸所对应红外图像的人脸图片中心位置坐标(x_IR, y_IR)、人脸图片的高度H_ir_face和宽度W_ir_face确定对应的红外图像区域,从对应的红外图像区域取最高温度记录为对应人员的人脸温度,并将各对应人员的温度值标注在各对应人脸可见光画面人脸框周围或人脸框内。Step S507: determine the corresponding infrared image region according to the center position coordinates (x_IR, y_IR), height H_ir_face, and width W_ir_face of the face picture in the infrared image corresponding to each detected face; take the highest temperature in the corresponding infrared image region and record it as the face temperature of the corresponding person; and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light image.
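The temperature-extraction part of step S507 can be sketched as a maximum over the mapped infrared box. The sketch assumes the infrared frame is available as a 2-D array of per-pixel temperatures in °C, which is a common radiometric-camera output format but not specified by the patent.

```python
import numpy as np

def face_temperature(ir_frame, box):
    """Step S507 sketch: take the maximum temperature inside the mapped
    infrared face box as the person's face temperature.

    `ir_frame` is assumed to be a 2-D array of temperatures in degrees C
    (one value per infrared pixel); `box` is (x, y, w, h) in infrared
    pixel coordinates.
    """
    x, y, w, h = (int(round(v)) for v in box)
    region = ir_frame[max(y, 0):y + h, max(x, 0):x + w]
    if region.size == 0:
        raise ValueError("face box lies outside the infrared frame")
    return float(region.max())
```

The returned value can then be drawn near the corresponding face frame in the visible-light picture (for example with OpenCV's `cv2.putText`) to realize the annotation described in this step.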
通过本实施例的上述步骤,解决了可见光摄像头与红外摄像头存在中心光轴不平行(两光轴横向存在交角,两光轴纵向也存在交角)、两个摄像头X轴、Y轴、Z轴方向的位置都不相同情况下的图像配准融合问题,并且进一步解决了测温场景应用中,人脸周围环境温度异常导致对人脸温度检测干扰问题。Through the above steps of this embodiment, the problem of image registration and fusion is solved for the case where the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes intersect both laterally and longitudinally) and the positions of the two cameras differ in the X-axis, Y-axis, and Z-axis directions; furthermore, in temperature measurement applications, the problem of abnormal ambient temperature around the face interfering with face temperature detection is also solved.
当将本实施例的方法应用于测温场景中,在可见光画面检测到多个人脸所在位置及区域范围数据时,可快速准确获得所有检测出来的人脸在红外画面所对应的人脸范围,进一步准确获取所对应人脸范围的人脸温度数据,可以完全解决人脸周围环境温度异常导致对人脸温度检测的干扰问题,大大提高人脸温度的检测效率和准确率。When the method of this embodiment is applied to the temperature measurement scene, when the position and area range data of multiple faces are detected in the visible light screen, the range of faces corresponding to all detected faces in the infrared screen can be quickly and accurately obtained. Further accurate acquisition of face temperature data corresponding to the face range can completely solve the problem of interference with face temperature detection caused by abnormal ambient temperature around the face, and greatly improve the detection efficiency and accuracy of face temperature.
本发明另一个实施例提供了另一种可见光和红外图像配准融合方法,该方法可应用于配置有可见光摄像头和红外摄像头的设备上。在本实施例中,可见光摄像头和红外摄像头的空间位置关系可参见图3和图4。Another embodiment of the present invention provides another method for registration and fusion of visible light and infrared images, which can be applied to devices equipped with visible light cameras and infrared cameras. In this embodiment, the spatial position relationship between the visible light camera and the infrared camera can be referred to in FIG. 3 and FIG. 4 .
如图3和图4所示,在本实施例中可见光摄像头的光轴与设备正视图平面垂直,红外摄像头的光轴与设备正视图平面可以不垂直,两个不同的摄像头画面中的同一物体的相对角度为0(即,画面中同一物体的位置没有发生旋转)。As shown in Fig. 3 and Fig. 4, in this embodiment the optical axis of the visible light camera is perpendicular to the front-view plane of the device, while the optical axis of the infrared camera may not be perpendicular to it; the relative angle of the same object in the two different camera images is 0 (that is, the position of the same object in the images is not rotated).
下面将结合采用可见光摄像头和红外摄像头对多人进行测温的场景详细描述本实施例,当然本实施例提供的技术方案也可以应用于其它需图像融合的场景。如图6所示,该拍摄场景为装配有可见光摄像头和红外摄像头的头盔对头盔前方的人脸进行拍摄,本实施例提供的可见光和红外图像的配准融合可包括如下步骤:The following will describe this embodiment in detail in combination with a scene where a visible light camera and an infrared camera are used to measure the temperature of multiple people. Of course, the technical solution provided by this embodiment can also be applied to other scenes that require image fusion. As shown in Figure 6, the shooting scene is that a helmet equipped with a visible light camera and an infrared camera shoots a face in front of the helmet. The registration and fusion of visible light and infrared images provided in this embodiment may include the following steps:
步骤S601,建立可见光图像与红外图像的配准模型。Step S601, establishing a registration model of a visible light image and an infrared image.
具体地,在本步骤中,可根据可见光和红外两摄像头的X、Y、Z轴空间位置相对距离m、d、n、水平和垂直视场角、显示分辨率大小、两光轴的横向夹角(即,两摄像头的ZOY平面的夹角)、两光轴纵向交角(即,两摄像头的ZOX平面的夹角)、目标物体距离可见光摄像头的水平距离L这些参数,建立可见光图像与红外图像的如下配准模型:Specifically, in this step, the following registration model of the visible light image and the infrared image can be established according to these parameters: the relative distances m, d, and n of the X, Y, and Z-axis spatial positions of the visible light and infrared cameras; the horizontal and vertical fields of view; the display resolutions; the lateral angle between the two optical axes (that is, the angle between the ZOY planes of the two cameras); the longitudinal angle between the two optical axes (that is, the angle between the ZOX planes of the two cameras); and the horizontal distance L between the target object and the visible light camera:
Figure PCTCN2022095838-appb-000031
其中,A、B、C、D为转换参数,m、n、d是两摄像头的X、Y、Z轴空间位置相对距离,是已知常数,(图PCTCN2022095838-appb-000032)和γ是两个摄像头的光轴的横向夹角与纵向交角,目标物体距离可见光摄像头的水平距离L是变化量。x_VR和y_VR是可见光图像像素坐标值,x_IR和y_IR是红外图像像素坐标值。Among them, A, B, C, and D are conversion parameters; m, n, and d are the relative distances of the X, Y, and Z-axis spatial positions of the two cameras and are known constants; the angle shown in Figure PCTCN2022095838-appb-000032 and γ are the lateral and longitudinal angles between the optical axes of the two cameras; and the horizontal distance L between the target object and the visible light camera is a variable. x_VR and y_VR are the visible light image pixel coordinate values, and x_IR and y_IR are the infrared image pixel coordinate values.
在本实施例中,转换参数A、B、C、D可由可见光水平和垂直视场角、红外水平和垂直视场角、红外和可见光显示分辨率大小参数计算获得。例如,In this embodiment, the conversion parameters A, B, C, and D can be calculated from the horizontal and vertical fields of view of the visible light camera, the horizontal and vertical fields of view of the infrared camera, and the display resolutions of the infrared and visible light cameras. For example:
Figure PCTCN2022095838-appb-000033
其中,w_VR是可见光摄像头水平显示分辨率,h_VR是可见光摄像头垂直显示分辨率,w_IR是红外摄像头水平显示分辨率,h_IR是红外摄像头垂直显示分辨率,α是可见光摄像头水平视场角,β是可见光摄像头垂直视场角,θ是红外摄像头水平视场角,φ是红外摄像头垂直视场角。Among them, w_VR is the horizontal display resolution of the visible light camera, h_VR is the vertical display resolution of the visible light camera, w_IR is the horizontal display resolution of the infrared camera, h_IR is the vertical display resolution of the infrared camera, α is the horizontal field of view of the visible light camera, β is the vertical field of view of the visible light camera, θ is the horizontal field of view of the infrared camera, and φ is the vertical field of view of the infrared camera.
步骤S602,标定配准模型中可见光摄像头光轴和红外摄像头的光轴之间的横向夹角和纵向交角。在本实施例中,提供了另外一种配准模型中的横向夹角和纵向交角的标定方式,具体地,可包括如下步骤:Step S602, calibrate the horizontal angle and vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model. In this embodiment, another way of calibrating the horizontal angle and the vertical angle in the registration model is provided, specifically, the following steps may be included:
1)选择一个距离可见光摄像头的水平距离为L_C的物体。例如,L_C的选择范围可为[0.3m~7m];1) Select an object whose horizontal distance from the visible light camera is L_C. For example, the selection range of L_C can be [0.3 m to 7 m];
2)调整可见光摄像头的光轴使得该物体或人体相同的位置位于可见光画面和红外画面特定的位置,特定位置在可见光画面和红外画面都用一个标记符显示出来(例如十字线或其它标记图像),例如:x_IR、y_IR都为0,x_VR为200、y_VR为100;2) Adjust the optical axis of the visible light camera so that the same position of the object or human body is located at a specific position in both the visible light picture and the infrared picture; the specific position is displayed with a marker (such as a crosshair or other marker image) in both the visible light picture and the infrared picture. For example: x_IR and y_IR are both 0, x_VR is 200, and y_VR is 100;
3) From the registration model of the visible light image and the infrared image, the lateral angle between the two optical axes
Figure PCTCN2022095838-appb-000034
and the longitudinal angle γ between the two optical axes can be calculated as, respectively:
Figure PCTCN2022095838-appb-000035
The specific position in this embodiment can be chosen flexibly. For example, in another embodiment, x_VR, y_VR, x_IR and y_IR are all 0, and from the registration model of the visible light image and the infrared image the lateral angle between the two optical axes
Figure PCTCN2022095838-appb-000036
and the longitudinal angle γ between the two optical axes are, respectively:
Figure PCTCN2022095838-appb-000037
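The calibration in step S602 amounts to solving the registration model for the two unknown angles, given one point whose coordinates are known in both pictures. Since the model itself lives in the formula images, the sketch below substitutes an assumed linear form with an additive parallax term; the functions, constants and the model shape are all illustrative assumptions, not the patent's equations.

```python
import math

def project(x_vr, y_vr, A, B, C, D, m, d, n, L, lat, lon):
    """Assumed forward registration model (a stand-in for the formula
    image): pixel scaling by A/C plus a parallax/tilt offset term."""
    x_ir = A * x_vr + B * (m + (L + n) * math.tan(lat)) / (L + n)
    y_ir = C * y_vr + D * (d + (L + n) * math.tan(lon)) / (L + n)
    return x_ir, y_ir

def calibrate_angles(x_vr, y_vr, x_ir, y_ir, A, B, C, D, m, d, n, L):
    """Invert the assumed model for the lateral and longitudinal
    angles from one marker point located in both pictures
    (e.g. x_ir = y_ir = 0 with x_vr = 200, y_vr = 100)."""
    lat = math.atan(((x_ir - A * x_vr) * (L + n) / B - m) / (L + n))
    lon = math.atan(((y_ir - C * y_vr) * (L + n) / D - d) / (L + n))
    return lat, lon

# Round-trip sanity check with made-up constants.
A, B, C, D = 0.43, 410.0, 0.45, 420.0
m, d, n, L = 0.03, 0.02, 0.01, 2.0
lat0, lon0 = math.radians(1.5), math.radians(-0.8)
x_ir, y_ir = project(200.0, 100.0, A, B, C, D, m, d, n, L, lat0, lon0)
lat, lon = calibrate_angles(200.0, 100.0, x_ir, y_ir, A, B, C, D, m, d, n, L)
```

The round trip recovers the angles exactly, which is the essential property any concrete registration model must share with this sketch.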
Step S603: establish the height and width mapping model between the visible light image and the infrared image as follows:
Figure PCTCN2022095838-appb-000038
where A, C,
Figure PCTCN2022095838-appb-000039
γ, n and L are all known, and λ is a configuration parameter in the range 0.1 < λ ≤ 1. By adjusting the size of λ, interference from the background temperature of regions outside the face can be avoided.
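The mapping model itself is in the formula image; to make the role of λ concrete, here is a minimal sketch assuming the infrared face box is the visible-light box rescaled by the pixel-scale ratios A and C and a depth factor L/(L+n), then shrunk by λ so the temperature is read only from pixels well inside the face. The scaling form is an assumption of this sketch.

```python
def face_box_vr_to_ir(h_vr_face, w_vr_face, A, C, n, L, lam=0.8):
    """Map a visible-light face-box size to an infrared face-box size.

    Assumed form: geometric rescaling by the pixel-scale ratios and the
    Z-axis offset n, then shrinking by lam (0.1 < lam <= 1) so that
    background pixels around the face are excluded from the
    temperature reading."""
    if not 0.1 < lam <= 1:
        raise ValueError("lambda must satisfy 0.1 < lambda <= 1")
    scale = L / (L + n)
    h_ir_face = lam * C * h_vr_face * scale
    w_ir_face = lam * A * w_vr_face * scale
    return h_ir_face, w_ir_face

# A 120x90-pixel visible face box at L = 1 m shrinks to a smaller IR box.
h_ir, w_ir = face_box_vr_to_ir(120.0, 90.0, A=0.45, C=0.45, n=0.01, L=1.0)
```

A smaller λ trades coverage of the face for robustness against warm or cold background pixels at the box edge.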
In this embodiment, the established height and width mapping model further allows the size relationships of objects to be fused quickly. For example, for the height H_VR-Face and width W_VR-Face of a face picture in the visible light image whose center lies at a given distance L, H_ir_face and W_ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image, and the collected temperature information of the corresponding face region in the infrared image can then be obtained from x_IR, y_IR, H_ir_face and W_ir_face.
Step S604: establish a mapping table between the range intervals corresponding to different face heights in the visible light image and the horizontal distance L between the face and the visible light camera. For example, as shown in Table 2, the range of H_vr_face can be divided into multiple intervals, where Hk+1 > Hk > Hk-1 > ... > H6 > H5 > H4 > H3 > H2 > H1 and L1 > L2 > L3 > L4 > ... > Lk-2 > Lk-1 > Lk.
Table 2

H_vr_face        L
[H1, H2]         L1
(H2, H3]         L2
(H3, H4]         L3
...              ...
(Hk-2, Hk-1]     Lk-2
(Hk-1, Hk]       Lk-1
(Hk, Hk+1]       Lk
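Table 2 is an interval lookup: a measured face height selects a bin and the bin selects a distance. A compact way to implement it is binary search over the interval edges; the edge and distance values below are hypothetical calibration numbers, not taken from the patent.

```python
import bisect

# Hypothetical calibration: interval edges H1..Hk+1 (face height in
# pixels) and the distance L1..Lk (metres) for each interval. Larger
# faces mean a closer person, so L decreases as H_vr_face grows.
H_EDGES = [40, 60, 80, 100, 130, 170, 220]
L_VALUES = [3.0, 2.5, 2.0, 1.5, 1.0, 0.6]

def face_height_to_distance(h_vr_face):
    """Table 2 lookup: [H1,H2] -> L1, (H2,H3] -> L2, ..., (Hk,Hk+1] -> Lk."""
    if not H_EDGES[0] <= h_vr_face <= H_EDGES[-1]:
        raise ValueError("face height outside the calibrated range")
    # bisect_left yields the interval index for half-open (Hi, Hi+1] bins;
    # the first bin is closed on both sides.
    i = bisect.bisect_left(H_EDGES, h_vr_face) - 1
    return L_VALUES[max(i, 0)]
```

Because the table replaces any depth sensing, its granularity directly bounds the accuracy of the distance estimate fed into the registration model.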
Step S605: the face detection module outputs the center position coordinates (x_VR, y_VR) of one or more faces in the visible light image, together with the height H_vr_face and width W_vr_face of each face-frame picture. For each detected face picture, the corresponding distance L value is found in the mapping table from the picture's height or width, and the distance L value and the face center position coordinates (x_VR, y_VR) are input into the registration model of the visible light image and the infrared image to calculate the face picture center position coordinates (x_IR, y_IR) in the corresponding infrared image, until the infrared face picture center coordinates for all currently detected persons have been calculated.
Step S606: input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate the height H_ir_face and width W_ir_face of the corresponding face picture.
Step S607: determine the region of the corresponding infrared image from the face picture center position coordinates (x_IR, y_IR), the height H_ir_face and the width W_ir_face of the infrared image corresponding to each detected face; record the highest temperature in the corresponding infrared image region as the face temperature of the corresponding person; and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light picture.
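The temperature extraction in step S607 reduces to taking the maximum over a clipped rectangular window of the radiometric temperature map. A minimal sketch in plain Python, assuming a row-major grid with the origin at the top-left corner:

```python
def face_max_temperature(temp_map, x_ir, y_ir, h_ir_face, w_ir_face):
    """Take the highest temperature inside the face's infrared region.

    temp_map is a row-major 2-D grid (list of rows) of per-pixel
    temperatures; (x_ir, y_ir) is the face centre in infrared pixel
    coordinates with a top-left origin (an assumption of this sketch).
    The box is clipped to the image bounds."""
    rows, cols = len(temp_map), len(temp_map[0])
    x0 = max(int(x_ir - w_ir_face / 2), 0)
    x1 = min(int(x_ir + w_ir_face / 2) + 1, cols)
    y0 = max(int(y_ir - h_ir_face / 2), 0)
    y1 = min(int(y_ir + h_ir_face / 2) + 1, rows)
    return max(temp_map[r][c] for r in range(y0, y1) for c in range(x0, x1))

grid = [[20.0] * 8 for _ in range(6)]
grid[3][4] = 36.7          # warm forehead pixel inside the face box
grid[0][0] = 39.9          # hot background pixel outside the face box
t = face_max_temperature(grid, x_ir=4, y_ir=3, h_ir_face=2, w_ir_face=2)
```

Restricting the maximum to the mapped face box is exactly what keeps the hot background pixel from being reported as the person's temperature.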
The above steps of this embodiment solve the image registration and fusion problem in the case where the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes intersect both laterally and longitudinally) and the positions of the two cameras differ in the X-axis, Y-axis and Z-axis directions, and further solve the problem, in temperature measurement applications, of abnormal ambient temperature around the face interfering with face temperature detection.
The technical solution provided by this embodiment can solve the image registration and fusion problem in wearable devices where the central optical axes of the visible light camera and the infrared camera are not parallel and the positions of the two cameras differ in the X-axis, Y-axis and Z-axis directions, and can solve the problem, in temperature measurement applications, of abnormal ambient temperature around the face interfering with face temperature detection. When the technical solution provided by this embodiment is applied to the case where the positions and region-range data of multiple faces are detected in the visible light picture, the face ranges corresponding to all detected faces in the infrared picture are obtained quickly and accurately, and the face temperature data of the corresponding face ranges are then obtained accurately. This completely solves the interference of abnormal ambient temperature around the face with face temperature detection and greatly improves the efficiency and accuracy of face temperature detection. In addition, the calibration process of the image registration and fusion model provided by this embodiment is simple and fast and avoids consuming a large amount of complex computing resources; compared with other image fusion algorithms that require more computing resources, it requires fewer computing resources and is more efficient.
Embodiment 3
In the above embodiments, when the fusion processing of the visible light image and the infrared light image is performed, the correspondence between each coordinate position of the visible light image and each coordinate position of the infrared image is obtained. According to this relationship, all coordinate pixels can be traversed to fuse the two images into one, and the fused image can then be displayed or stored.
In practice, however, since the fields of view of the visible light image and the infrared light image do not coincide, only part of each image can be fused; directly traversing all of both images for processing would waste a large amount of computing resources. For this reason, in the embodiments provided by the present invention, the fusion range can be calculated first, and image processing is then performed only on the fusion range, so as to save computing resources and increase the image fusion speed.
FIG. 13 and FIG. 14 are schematic diagrams of the camera arrangement of an image acquisition and processing method according to an embodiment of the present invention.
Referring to FIG. 13 and FIG. 14, in the image acquisition and processing method of the embodiment of the present invention, a visible light camera 01 and an infrared light camera 02 are arranged in a device 03. The optical axis 11 of the visible light camera 01 is perpendicular to the front view plane 31 of the device 03, and the infrared light camera 02 is spaced apart from the visible light camera 01. In this embodiment, the projections of the infrared light camera 02 and the visible light camera 01 on the X, Y and Z axes are all spaced apart from each other, and the distance between their projections on the Z axis is n.
The optical axis deviation and spacing between the visible light camera 01 and the infrared light camera 02 should not be too large, so as to ensure that the same object in their pictures undergoes no rotation, reduce the fusion registration deviation, and thereby improve the reliability of the obtained fusion region. The image acquisition and processing method of the embodiment of the present invention can guarantee image acquisition and processing efficiency when there is a certain deviation between the visible light camera 01 and the infrared light camera 02, but the specific installation requirements are determined according to the actual situation and are not specifically limited here.
In this embodiment, the optical axis 21 of the infrared light camera 02 intersects the vertical axis of the front view plane of the device 03. The angle between the projections of the optical axis 21 of the infrared light camera 02 and the optical axis 11 of the visible light camera on the ZOX plane is
Figure PCTCN2022095838-appb-000040
and the angle between their projections on the ZOY plane is γ. These angles are tested and calibrated in advance, so the optical axis 21 of the infrared light camera 02 is not required to be parallel to the optical axis 11 of the visible light camera 01. This matches the actual situation of cameras in typical application scenarios, requires no adjustment of the device's hardware configuration, and is simple to apply.
In an optional embodiment, the calibration of the angle
Figure PCTCN2022095838-appb-000041
between the projections of the optical axis 21 of the infrared light camera 02 and the optical axis 11 of the visible light camera 01 on the ZOX plane and the angle γ between their projections on the ZOY plane includes:
Select an object at a horizontal distance L_c from the visible light camera 01 (the distance selection range is 0.5 meters to 7 meters), and simultaneously capture images of the object (a regular cuboid, a cube, or a part of the human body, such as the head) with the visible light camera 01 and the infrared light camera 02. Measure the lateral length (the projected length on the X axis) and the longitudinal length (the projected length on the Y axis) of the same position of the object, obtaining the lateral length L_VR and longitudinal length W_VR of the object in the visible light image and the lateral length L_IR and longitudinal length W_IR of the object in the infrared light image. Then, according to the registration model of the visible light image and the infrared light image, calculate the angle
Figure PCTCN2022095838-appb-000042
between the projections of the optical axis 21 of the infrared light camera 02 and the optical axis 11 of the visible light camera 01 on the ZOX plane and the angle γ between their projections on the ZOY plane. The lateral and longitudinal lengths of the same position of the object can be measured and calculated automatically by software, or measured and calculated manually from the captured images.
The registration model of the visible light image and the infrared light image is:
Figure PCTCN2022095838-appb-000043
where parameters A and C are described below.
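The passage above notes that the lateral and longitudinal lengths of the object can be measured automatically by software. One simple automatic measurement, assuming the object has already been segmented into a binary mask, is to take the bounding-box extents of the marked pixels:

```python
def object_extent(mask):
    """Pixel extents (lateral along X, longitudinal along Y) of the
    single object marked truthy in a row-major binary mask; a minimal
    stand-in for the 'automatic software measurement' of the object's
    projected lengths in each picture."""
    xs = [c for row in mask for c, v in enumerate(row) if v]
    ys = [r for r, row in enumerate(mask) if any(row)]
    if not xs:
        raise ValueError("no object pixels in mask")
    return max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
lateral, longitudinal = object_extent(mask)
```

Running the same measurement on the visible and infrared masks of the object yields the two length pairs the calibration needs.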
In another optional embodiment, the calibration of the angle
Figure PCTCN2022095838-appb-000044
between the projections of the optical axis 21 of the infrared light camera 02 and the optical axis 11 of the visible light camera on the ZOX plane and the angle γ between their projections on the ZOY plane includes:
Select an object at a horizontal distance L_c from the visible light camera 01 (the distance selection range is 0.5 meters to 7 meters), and adjust the optical axis 11 of the visible light camera 01 (or the optical axis 21 of the infrared light camera 02) so that the same position of the object lies at a specific position in both the visible light picture and the infrared light picture, thereby adjusting the angle
Figure PCTCN2022095838-appb-000045
between the projections of the optical axis 21 of the infrared light camera 02 and the optical axis 11 of the visible light camera on the ZOX plane and the angle γ between their projections on the ZOY plane to set values.
For example, when the pixel coordinates x_IR and y_IR of the specific position in the infrared light image are 0 and the pixel coordinates x_VR and y_VR of the specific position in the corresponding visible light image are also 0, the corresponding registration model of the visible light image and the infrared light image is:
Figure PCTCN2022095838-appb-000046
and
Figure PCTCN2022095838-appb-000047
As another example, if the pixel coordinates x_IR and y_IR of the specific position in the infrared light image are 0 and the pixel coordinates of the corresponding visible light image are x_VR = 200 and y_VR = 100, the corresponding registration model of the visible light image and the infrared light image is:
Figure PCTCN2022095838-appb-000048
and
Figure PCTCN2022095838-appb-000049
FIG. 17 is a schematic flowchart of an image acquisition and processing method according to an embodiment of the present invention.
Referring to FIG. 17, the image acquisition and processing method of the embodiment of the present invention includes:
Step S1701: obtain an initial fusion region model based on the respective parameters and related parameters of the visible light camera and the infrared light camera, so as to obtain a transition region according to the initial fusion region model.
The parameters of the initial fusion region model include:
Figure PCTCN2022095838-appb-000050
Figure PCTCN2022095838-appb-000051
where x_1 ≤ x_VR ≤ x_2 and y_1 ≤ y_VR ≤ y_2, and A, B, C and D are conversion variables used to simplify the above formulas for x_1 to y_2, specifically:
Figure PCTCN2022095838-appb-000052
Figure PCTCN2022095838-appb-000053
x_VR and y_VR correspond to pixel coordinates in the visible light image, and the range obtained from the initial fusion region model is the transition region. m, n and d are the distances between the projections of the visible light camera and the infrared light camera on the X, Z and Y axes respectively, and L_max is the farthest distance at which the visible light camera can detect the object corresponding to the image.
Figure PCTCN2022095838-appb-000054
is the angle between the projections of the optical axis of the infrared light camera and the optical axis of the visible light camera on the ZOX plane, and γ is the angle between their projections on the ZOY plane. The vertical axis of the front view plane of the device is parallel to the Z axis. w_IR is the horizontal display resolution of the infrared light camera, h_IR is the vertical display resolution of the infrared light camera, w_VR is the horizontal display resolution of the visible light camera, h_VR is the vertical display resolution of the visible light camera, α is the visible light horizontal field of view, β is the visible light vertical field of view, θ is the infrared light horizontal field of view, and φ is the infrared light vertical field of view. A, B, C and D are the model parameters used in the calibration described above.
The transition region is rectangular, and x_VR and y_VR correspond to its horizontal and vertical pixel coordinates respectively. For example, for an image resolution of M*N, the horizontal pixel coordinate corresponds to the M parameter and the vertical pixel coordinate corresponds to the N parameter.
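The point of the transition region is that only pixels inside [x_1, x_2] x [y_1, y_2] need to be traversed during fusion, rather than the whole frame. A sketch of that restricted traversal, where ir_lookup stands in for the registration model and the bounds are assumed to have been computed already from the initial fusion region model:

```python
def fuse_transition_region(vr_img, ir_lookup, x1, x2, y1, y2):
    """Traverse only the transition region [x1, x2] x [y1, y2] of the
    visible-light frame, pairing each visible pixel with its
    registered infrared value. ir_lookup(x_vr, y_vr) stands in for the
    registration model / infrared sampling step."""
    fused = {}
    for y in range(y1, y2 + 1):
        for x in range(x1, x2 + 1):
            fused[(x, y)] = (vr_img[y][x], ir_lookup(x, y))
    return fused

frame = [[10 * r + c for c in range(8)] for r in range(6)]   # toy 8x6 frame
out = fuse_transition_region(frame, lambda x, y: -1, x1=2, x2=5, y1=1, y2=3)
```

Pixels outside the bounds are never touched, which is exactly the computational saving this embodiment aims at.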
An embodiment of the present invention provides a registration and fusion method for visible light and infrared images. The method is applied to a device configured with a visible light camera and an infrared camera. FIG. 9 and FIG. 10 are schematic diagrams of the horizontal positions of the visible light camera and the infrared camera on the device according to an embodiment of the present invention. FIG. 9 shows the device area 1, the infrared camera 2, the visible light camera 3, the horizontal line 4 of the device front view plane, the infrared camera optical axis 5, the visible light camera optical axis 6, and the lateral angle 7 between the two optical axes.
As shown in FIG. 9, the infrared camera optical axis 5 is perpendicular to the device front view plane; the visible light camera 3 is located below the infrared camera 2, and its optical axis 6 is not perpendicular to the device front view plane but intersects the infrared camera optical axis 5.
FIG. 10 shows the device area 1, the infrared camera 2, the visible light camera 3, the horizontal line 4 of the device front view plane, the infrared camera optical axis 5, the visible light camera optical axis 6, and the lateral angle 7 between the two optical axes. As shown in FIG. 10, the infrared camera optical axis 5 is perpendicular to the device front view plane; the visible light camera 3 is located below the infrared camera 2, and the visible light camera optical axis 6 is not perpendicular to the device front view plane but intersects the infrared camera optical axis 5.
As shown in FIG. 9 and FIG. 10, in this embodiment the optical axis of the infrared camera is perpendicular to the device front view plane, the optical axis of the visible light camera need not be perpendicular to the device front view plane, and the relative angle of the same object in the pictures of the two different cameras is 0 (that is, the position of the same object in the pictures undergoes no rotation).
This embodiment will be described in detail below in conjunction with a scene in which a visible light camera and an infrared camera are used for temperature measurement; of course, the technical solution provided by this embodiment can also be applied to other scenes requiring image fusion. As shown in FIG. 11, the registration and fusion of visible light and infrared images provided by this embodiment may include the following steps:
Step S1101: establish a registration model of the visible light image and the infrared image.
Specifically, in this step, the following registration model of the visible light image and the infrared image can be established from the relative distances m, d and n of the X, Y and Z-axis spatial positions of the visible light and infrared cameras, the horizontal and vertical fields of view, the display resolutions, the lateral angle between the two optical axes (that is, the angle between the ZOY planes of the two cameras), the longitudinal angle between the two optical axes (that is, the angle between the ZOX planes of the two cameras), and the horizontal distance L between the target object and the visible light camera:
Figure PCTCN2022095838-appb-000055
where A, B, C and D are conversion parameters; m, n and d are the relative distances of the X, Y and Z-axis spatial positions of the two cameras and are known constants;
Figure PCTCN2022095838-appb-000056
and γ are the lateral angle and the longitudinal angle between the optical axes of the two cameras; and the horizontal distance L between the target object and the visible light camera is the variable quantity. x_VR and y_VR are the pixel coordinate values of the visible light image, and x_IR and y_IR are the pixel coordinate values of the infrared image.
In this embodiment, the conversion parameters A, B, C and D can be calculated from the visible light horizontal and vertical fields of view, the infrared horizontal and vertical fields of view, and the infrared and visible light display resolutions. For example,
Figure PCTCN2022095838-appb-000057
where w_VR is the horizontal display resolution of the visible light camera, h_VR is the vertical display resolution of the visible light camera, w_IR is the horizontal display resolution of the infrared camera, h_IR is the vertical display resolution of the infrared camera, α is the horizontal field of view of the visible light camera, β is the vertical field of view of the visible light camera, θ is the horizontal field of view of the infrared camera, and φ is the vertical field of view of the infrared camera.
Step S1102: calibrate the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model. In this embodiment, the lateral and longitudinal angles can be calibrated as follows, without manually adjusting the visible light and infrared cameras:
1) Select a target object at a horizontal distance L_C from the visible light camera. For example, L_C may be chosen in the range [0.5 m, 7 m];
2) The visible light and infrared cameras simultaneously capture images of the target object (which may be a regular cuboid, a cube, or a part of the human body, such as the facial features of the head or worn glasses), and the coordinates (x_VR-C, y_VR-C) of the same position of the object in the visible light image and the coordinates (x_IR-C, y_IR-C) in the infrared image are found;
3) From the registration model of the visible light image and the infrared image, the lateral angle between the two optical axes
Figure PCTCN2022095838-appb-000058
and the longitudinal angle γ between the two optical axes can be calculated:
Figure PCTCN2022095838-appb-000059
Figure PCTCN2022095838-appb-000060
Step S1103: establish the height and width mapping model between the visible light image and the infrared image as follows:
Figure PCTCN2022095838-appb-000061
where A, C,
Figure PCTCN2022095838-appb-000062
γ, n and L are all known, and λ is a configuration parameter in the range 0.1 < λ ≤ 1. By adjusting the size of λ, interference from the background temperature of regions outside the face can be completely avoided.
In this embodiment, the established height and width mapping model further allows the size relationships of objects to be fused quickly. For example, for the height H_VR-Face and width W_VR-Face of a face picture in the visible light image whose center lies at a given distance L, H_ir_face and W_ir_face can be obtained from the height and width mapping model of the visible light image and the infrared image, and the collected temperature information of the corresponding face region in the infrared image can then be obtained from x_IR, y_IR, H_ir_face and W_ir_face.
Step S1104: establish a mapping table between the range intervals corresponding to different face heights in the visible light image and the horizontal distance L between the face and the visible light camera. For example, as shown in Table 3, the range of H_vr_face can be divided into multiple intervals, where Hk+1 > Hk > Hk-1 > ... > H6 > H5 > H4 > H3 > H2 > H1 and L1 > L2 > L3 > L4 > ... > Lk-2 > Lk-1 > Lk.

Table 3
Figure PCTCN2022095838-appb-000063
Figure PCTCN2022095838-appb-000064
Step S1105: the face detection module outputs the center position coordinates (x_VR, y_VR) of one or more faces in the visible light image, together with the height H_vr_face and width W_vr_face of each face-frame picture. For each detected face picture, the corresponding distance L value is found in the mapping table from the picture's height or width, and the distance L value and the face center position coordinates (x_VR, y_VR) are input into the registration model of the visible light image and the infrared image to calculate the face picture center position coordinates (x_IR, y_IR) in the corresponding infrared image, until the infrared face picture center coordinates for all currently detected persons have been calculated.
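Step S1105 is a per-face loop: the face height selects the distance L from the mapping table, and (x_VR, y_VR, L) then go through the registration model. A sketch with toy stand-ins for both the table and the model (the stand-in functions and values are illustrative assumptions):

```python
def faces_vr_to_ir(detections, height_to_L, vr_to_ir):
    """For each detected face (x_vr, y_vr, h_vr_face, w_vr_face), look
    up the distance L from the face height, then map the visible-light
    face centre into infrared pixel coordinates. height_to_L and
    vr_to_ir stand in for the mapping table and the registration
    model respectively."""
    out = []
    for x_vr, y_vr, h, w in detections:
        L = height_to_L(h)
        out.append(vr_to_ir(x_vr, y_vr, L))
    return out

# Toy stand-ins: a two-interval table and a pure-scaling registration.
toy_table = lambda h: 1.0 if h > 100 else 2.0
toy_model = lambda x, y, L: (0.5 * x, 0.5 * y)
centres = faces_vr_to_ir([(200, 100, 120, 90), (40, 60, 80, 60)],
                         toy_table, toy_model)
```

The loop runs once per detected face, so the cost grows with the number of faces rather than with the frame size.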
Step S1106: input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and calculate the height H_ir_face and width W_ir_face of the corresponding face picture.
Step S1107: determine the region of the corresponding infrared image from the face picture center position coordinates (x_IR, y_IR), the height H_ir_face and the width W_ir_face of the infrared image corresponding to each detected face; record the highest temperature in the corresponding infrared image region as the face temperature of the corresponding person; and mark each person's temperature value around or inside the face frame of the corresponding face in the visible light picture.
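The last part of step S1107 is an overlay step: each recorded face temperature is drawn around or inside the face frame in the visible-light picture. A sketch that only builds the label strings and anchor points, leaving the actual drawing to the device's graphics routine (for example OpenCV's putText); the anchor placement rule is an assumption of this sketch:

```python
def annotate_temperatures(faces):
    """Build (text, anchor) pairs placing each person's recorded
    temperature just above the top-left corner of their visible-light
    face box. faces: dicts with keys x_vr, y_vr, h, w, temp (box
    centre, box size in pixels, temperature in Celsius)."""
    labels = []
    for f in faces:
        text = "%.1f C" % f["temp"]
        anchor = (int(f["x_vr"] - f["w"] / 2),
                  int(f["y_vr"] - f["h"] / 2) - 5)
        labels.append((text, anchor))
    return labels

labels = annotate_temperatures(
    [{"x_vr": 100, "y_vr": 80, "h": 40, "w": 30, "temp": 36.7}])
```

Separating label construction from drawing keeps the overlay logic testable without a display device.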
Through the above steps of this embodiment, image registration and fusion is achieved even when the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes intersect both laterally and longitudinally) and the positions of the two cameras differ in the X-, Y-, and Z-axis directions; it further eliminates, in temperature measurement applications, the interference with face temperature detection caused by abnormal ambient temperature around the face.

When the method of this embodiment is applied to a temperature measurement scene, once the positions and region data of multiple faces have been detected in the visible light picture, the face region corresponding to each detected face in the infrared picture can be obtained quickly and accurately, and the face temperature data for those regions can then be acquired accurately. This completely eliminates the interference with face temperature detection caused by abnormal ambient temperature around the face, and greatly improves the efficiency and accuracy of face temperature detection.
Yet another embodiment of the present invention provides another method for registering and fusing visible light and infrared images, which can be applied to a device equipped with a visible light camera and an infrared camera. In this embodiment, the spatial positional relationship between the visible light camera and the infrared camera is shown in Fig. 9 and Fig. 10.

As shown in Fig. 9 and Fig. 10, in this embodiment the optical axis of the infrared camera is perpendicular to the front-view plane of the device, whereas the optical axis of the visible light camera need not be perpendicular to that plane; the relative rotation angle of the same object in the two camera pictures is 0 (that is, the position of the same object is not rotated between the two pictures).

This embodiment is described in detail below with reference to a scene in which a visible light camera and an infrared camera measure the temperature of multiple people; of course, the technical solution of this embodiment can also be applied to other scenes requiring image fusion. As shown in Fig. 12, the shooting scene is a helmet equipped with a visible light camera and an infrared camera photographing the faces in front of the helmet. The registration and fusion of visible light and infrared images provided by this embodiment may include the following steps:
Step S1201: establish a registration model of the visible light image and the infrared image.
Specifically, in this step, the following registration model of the visible light image and the infrared image can be established from these parameters: the relative distances m, d, and n of the spatial positions of the visible light and infrared cameras along the X, Y, and Z axes; the horizontal and vertical fields of view; the display resolutions; the lateral angle between the two optical axes (that is, the angle between the ZOY planes of the two cameras); the longitudinal angle between the two optical axes (that is, the angle between the ZOX planes of the two cameras); and the horizontal distance L between the target object and the visible light camera:

[Equation image: Figure PCTCN2022095838-appb-000065]
where A, B, C, and D are conversion parameters; m, n, and d are the relative distances of the spatial positions of the two cameras along the X, Y, and Z axes and are known constants; [Figure PCTCN2022095838-appb-000066] and γ are the lateral angle and the longitudinal angle, respectively, between the optical axis of the visible light camera and the optical axis of the infrared camera; the horizontal distance L between the target object and the visible light camera is the variable quantity; x_VR and y_VR are the visible light image pixel coordinates, and x_IR and y_IR are the infrared image pixel coordinates.
In this embodiment, the conversion parameters A, B, C, and D can be computed from the horizontal and vertical fields of view of the visible light camera, the horizontal and vertical fields of view of the infrared camera, and the display resolutions of the infrared camera and the visible light camera. For example:

[Equation image: Figure PCTCN2022095838-appb-000067]
where w_VR is the horizontal display resolution of the visible light camera, h_VR is its vertical display resolution, w_IR is the horizontal display resolution of the infrared camera, h_IR is its vertical display resolution, α is the horizontal field of view of the visible light camera, β is its vertical field of view, θ is the horizontal field of view of the infrared camera, and φ is its vertical field of view.
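The concrete expressions for A, B, C, and D survive only as an equation image in this extraction, so the sketch below is a plausible reconstruction rather than the patent's formula: it assumes each scale parameter is the ratio of angular pixel densities (display resolution divided by field of view) between the infrared and visible light cameras, and the function name is hypothetical.

```python
def pixel_scale_ratios(w_vr, h_vr, alpha, beta, w_ir, h_ir, theta, phi):
    """Assumed horizontal (A) and vertical (C) pixel-scale parameters.

    Each camera maps its field of view onto its display resolution; the
    ratio of the two pixels-per-degree densities converts a visible-light
    pixel offset into an infrared pixel offset. This linear small-angle
    form is an assumption: the patent's exact formula is an image.
    """
    vr_px_per_deg_x = w_vr / alpha   # visible camera, horizontal
    vr_px_per_deg_y = h_vr / beta    # visible camera, vertical
    ir_px_per_deg_x = w_ir / theta   # infrared camera, horizontal
    ir_px_per_deg_y = h_ir / phi     # infrared camera, vertical
    A = ir_px_per_deg_x / vr_px_per_deg_x
    C = ir_px_per_deg_y / vr_px_per_deg_y
    return A, C
```

For a 1920x1080 visible camera with a 60-by-34-degree field of view and a 384x288 infrared camera with a 48-by-36-degree field of view, this gives A = 0.25, i.e. four visible-light pixels per infrared pixel horizontally.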
Step S1202: calibrate the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model. This embodiment provides another way of calibrating the lateral and longitudinal angles in the registration model; specifically, it may include the following steps:

1) Without manually adjusting the visible light and infrared cameras, select an object whose horizontal distance from the visible light camera is L_C. For example, L_C may be chosen in the range [0.3 m, 7 m];

2) Adjust the optical axis of the visible light camera so that the same position of the object or human body lies at a specific position in both the visible light picture and the infrared picture, the specific position being displayed with a marker (such as a crosshair or other marker image) in both pictures; for example: x_IR and y_IR are both 0, x_VR is 200, and y_VR is 100;
3) From the registration model of the visible light image and the infrared image, the lateral angle [Figure PCTCN2022095838-appb-000068] between the two optical axes and the longitudinal angle γ between the two optical axes can then be computed as, respectively:
[Equation image: Figure PCTCN2022095838-appb-000069]
The specific position in this embodiment can be chosen flexibly. For example, in another embodiment, x_VR, y_VR, x_IR, and y_IR are all 0, and from the registration model of the visible light image and the infrared image the lateral angle [Figure PCTCN2022095838-appb-000070] between the two optical axes and the longitudinal angle γ between the two optical axes can be computed as, respectively:
[Equation image: Figure PCTCN2022095838-appb-000071]
Step S1203: establish the height and width mapping model of the visible light image and the infrared image as follows:
[Equation image: Figure PCTCN2022095838-appb-000072]
where A, C, [Figure PCTCN2022095838-appb-000073], γ, n, and L are all known, and λ is a configuration parameter in the range 0.1 < λ ≤ 1. By adjusting the magnitude of λ, interference from the background temperature of regions outside the face can be avoided.
In this embodiment, the established height and width mapping model can further be used to fuse the size-scale relationship of objects quickly. For example, given the height H_VR-Face and width W_VR-Face of a face picture in the visible light image whose center lies at distance L, H_ir_face and W_ir_face can be obtained from the height and width mapping scale model of the visible light image and the infrared image; the collected temperature information of the corresponding face region of the infrared image can then be obtained from x_IR, y_IR, H_ir_face, and W_ir_face.
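The mapping-model formula itself survives only as an equation image here, but the role of λ is clear from the text: it shrinks the infrared face box so that the temperature readout stays away from background pixels around the face. A minimal sketch of that shrinking step (applying λ linearly to both dimensions is an assumption, as is the function name):

```python
def shrink_face_box(h_ir_face, w_ir_face, lam=0.8):
    """Scale an infrared face box by configuration parameter lam.

    Keeping lam below 1 pulls the box away from background regions around
    the face; applying lam linearly to both dimensions is an assumption,
    since the patent's mapping formula survives only as an equation image.
    """
    if not (0.1 < lam <= 1.0):
        raise ValueError("lam must satisfy 0.1 < lam <= 1")
    return h_ir_face * lam, w_ir_face * lam
```

A smaller λ trades a little face coverage for robustness against hot or cold objects adjacent to the face.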
Step S1204: establish a mapping table between range intervals of different face heights in the visible light image and the horizontal distance L between the face and the visible light camera. For example, as shown in Table 4, the range of H_vr_face can be divided into multiple intervals, where Hk+1 > Hk > Hk-1 > ... > H6 > H5 > H4 > H3 > H2 > H1, and L1 > L2 > L3 > L4 > ... > Lk-2 > Lk-1 > Lk.
Table 4
[Table image: Figure PCTCN2022095838-appb-000074]
[Table image: Figure PCTCN2022095838-appb-000075]
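Table 4 itself survives only as an image, but the structure it describes, where taller face boxes fall into intervals that map to shorter distances, can be sketched as an interval lookup. The interval boundaries and distance values below are illustrative placeholders, not the patent's:

```python
import bisect

# Illustrative H_vr_face interval bounds (pixels) and distances L (metres);
# face-height bounds ascend while distances descend, as Table 4 describes.
HEIGHT_BOUNDS = [40, 60, 90, 140, 220, 360]           # H1 < H2 < ... (pixels)
DISTANCES_M = [7.0, 5.0, 3.5, 2.0, 1.0, 0.5, 0.3]     # L1 > L2 > ... (metres)

def distance_from_face_height(h_vr_face):
    """Look up the horizontal distance L for a detected face height."""
    idx = bisect.bisect_right(HEIGHT_BOUNDS, h_vr_face)
    return DISTANCES_M[idx]
```

With k interval bounds there are k+1 distance entries, one per interval, so every detected face height resolves to some L.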
Step S1205: the face detection module outputs the center-position coordinates (x_VR, y_VR) of one or more faces in the visible light image, together with the height H_vr_face and width W_vr_face of each face-frame picture. For each detected face picture, the corresponding distance value L is looked up in the mapping model from the picture's height or width; that distance L and the face center-position coordinates (x_VR, y_VR) are then input into the registration model of the visible light image and the infrared image to compute the center-position coordinates (x_IR, y_IR) of the corresponding face picture in the infrared image, until the coordinates (x_IR, y_IR) have been computed for all of the currently detected faces.
Step S1206: input the height and width of each detected face picture into the height and width mapping model of the visible light image and the infrared image, and compute the height H_ir_face and width W_ir_face of the corresponding face picture in the infrared image.
Step S1207: for each detected face, determine the corresponding region of the infrared image from the center-position coordinates (x_IR, y_IR), height H_ir_face, and width W_ir_face of its face picture in the infrared image; record the highest temperature within that infrared image region as the face temperature of the corresponding person, and annotate each person's temperature value around or inside the face frame of the corresponding face in the visible light picture.
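Steps S1205 to S1207 chain the three models together per detected face. The sketch below abstracts the registration model, the height/width mapping model, and the Table-4 lookup as callables, since their formulas survive only as equation images in this extraction; all names are hypothetical.

```python
import numpy as np

def annotate_face_temperatures(faces, ir_frame, registration_model,
                               distance_lookup, hw_mapping):
    """Sketch of steps S1205-S1207 for a batch of detected faces.

    `faces` is a list of dicts {'center': (x_vr, y_vr), 'h': H, 'w': W}
    from the face detector; the three callables stand in for the patent's
    models, whose concrete formulas are un-extracted equation images.
    Returns (visible-light face center, face temperature) per face.
    """
    results = []
    for face in faces:
        L = distance_lookup(face['h'])                      # S1205: height -> distance
        x_ir, y_ir = registration_model(face['center'], L)  # S1205: VR center -> IR center
        h_ir, w_ir = hw_mapping(face['h'], face['w'], L)    # S1206: VR box -> IR box
        top = max(0, int(y_ir - h_ir / 2))                  # S1207: clamp IR region
        left = max(0, int(x_ir - w_ir / 2))
        region = ir_frame[top:top + int(h_ir), left:left + int(w_ir)]
        results.append((face['center'], float(region.max())))  # max = face temperature
    return results
```

Each returned pair is then annotated next to the corresponding face frame in the visible light picture.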
Through the above steps of this embodiment, image registration and fusion is achieved even when the central optical axes of the visible light camera and the infrared camera are not parallel (the two optical axes intersect both laterally and longitudinally) and the positions of the two cameras differ in the X-, Y-, and Z-axis directions; it further eliminates, in temperature measurement applications, the interference with face temperature detection caused by abnormal ambient temperature around the face.
In another embodiment, shown in Fig. 15 and Fig. 16, the optical axis 21 of the infrared camera 02 is perpendicular to the front-view plane of the device, while the optical axis 11 of the visible light camera 01 is not perpendicular to that plane. Correspondingly, the initial model parameters of its fusion region include:
[Equation image: Figure PCTCN2022095838-appb-000076]
[Equation image: Figure PCTCN2022095838-appb-000077]
The transition region (x_VR, y_VR) satisfies x_1 ≤ x_VR ≤ x_2 and y_1 ≤ y_VR ≤ y_2, where the remaining parameters are the same as in the preceding embodiments and are not detailed again here. The subsequent processing of the transition region in this embodiment is the same as in the preceding embodiments and is likewise not repeated below.
Fig. 18 shows a schematic diagram of part of the interface of a device adopting an image acquisition and processing method according to an embodiment of the present invention.

Referring to Fig. 18, the corresponding visible light image region 40 can cover the entire device interface, and the fusion region 41 is smaller than the visible light image region 40. In the end, only the image data within the fusion region 41 is processed to obtain the information of acquisition object A, which is displayed separately in the lower-left corner of the interface; the information of acquisition object B outside the fusion region 41 is not acquired, which effectively reduces the amount of data processing.

Here, acquisition object A and acquisition object B are, for example, human faces, and the infrared camera is used to acquire face temperature. Performing face recognition only on the visible light image within the fusion region 41 allows acquisition object A to be locked quickly; the face temperature of the locked acquisition object A can then be detected quickly and the result displayed separately in the lower-left corner of the interface, improving the efficiency of face recognition and temperature detection.
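The data-processing saving described here comes from running face recognition only on the crop of the visible frame inside fusion region 41 and then mapping detections back to full-frame coordinates. A minimal sketch, with the region bounds (x1, y1, x2, y2) taken from the fusion-region model and a hypothetical detector:

```python
import numpy as np

def detect_in_fusion_region(frame, x1, y1, x2, y2, detect_faces):
    """Run face detection only inside the fusion region [x1:x2, y1:y2],
    then shift the detected boxes back to full-frame coordinates.

    `detect_faces` is a hypothetical detector that returns (x, y, w, h)
    boxes in the coordinates of the cropped image it is given.
    """
    crop = frame[y1:y2, x1:x2]
    faces = detect_faces(crop)
    return [(x + x1, y + y1, w, h) for (x, y, w, h) in faces]
```

Faces outside the fusion region (such as acquisition object B in Fig. 18) never reach the detector, which is what reduces the processing load.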
The image acquisition and processing method of the present invention uses a visible light camera and an infrared camera to acquire images simultaneously and obtains the extent of the fusion region by comparing the initial fusion-region model with the visible light resolution. Analyzing only the visible light image within the fusion region to obtain the feature information of the acquired image reduces the amount of data processing, saves computing resources, and improves image processing efficiency; in image analysis such as face recognition, this effectively improves processing efficiency.

The initial fusion-region model is obtained from the fixed and related parameters of the visible light camera and the infrared camera; once these parameters have been calibrated, no further adjustment is needed, which ensures convenience of use.

The technical solution of this embodiment solves the problem of image registration and fusion in a wearable device in which the central optical axes of the visible light camera and the infrared camera are not parallel and the positions of the two cameras differ in the X-, Y-, and Z-axis directions, and it eliminates, in temperature measurement applications, the interference with face temperature detection caused by abnormal ambient temperature around the face. When the positions and region data of multiple faces have been detected in the visible light picture, the face region corresponding to each detected face in the infrared picture is obtained quickly and accurately, and the face temperature data for those regions is then acquired accurately; this completely eliminates the interference caused by abnormal ambient temperature around the face and greatly improves the efficiency and accuracy of face temperature detection. In addition, the calibration process of the image registration and fusion model provided by this embodiment is simple and fast and avoids a large amount of complex computation; compared with other image fusion algorithms that need more computing resources, the model of this embodiment requires fewer computing resources and is more efficient.

Claims (29)

1. A method for processing a visible light image and an infrared image, applied to a device equipped with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the front-view plane of the device, characterized by comprising:

    establishing, according to the spatial relative position of the visible light camera and the infrared camera, conversion parameters, and the horizontal distance between a target object and the visible light camera, a registration model between the coordinate positions of the visible light image of the target object acquired by the visible light camera and the infrared image of the target object acquired by the infrared camera;

    registering and fusing the visible light image and the infrared image of the target object according to the registration model.
2. The method according to claim 1, characterized in that the optical axis of the visible light camera is perpendicular to the front-view plane of the device, wherein the registration model is:
    [Equation image: Figure PCTCN2022095838-appb-100001]
    where A, B, C, and D are the first, second, third, and fourth conversion parameters, respectively; m, n, and d are the relative distances of the spatial positions of the visible light camera and the infrared camera along the X, Y, and Z axes, respectively; [Figure PCTCN2022095838-appb-100002] and γ are the lateral angle and the longitudinal angle, respectively, between the optical axis of the visible light camera and the optical axis of the infrared camera; L is the horizontal distance between the target object and the visible light camera; (x_VR, y_VR) are the visible light image pixel coordinates; and (x_IR, y_IR) are the infrared image pixel coordinates.
3. The method according to claim 2, characterized in that the lateral angle and the longitudinal angle between the optical axes of the visible light camera and the infrared camera in the registration model are obtained by the following steps:

    selecting a first reference object whose horizontal distance from the visible light camera is a first set distance;

    acquiring images of the first reference object simultaneously through the visible light camera and the infrared camera, and measuring the lateral length and the longitudinal length of the same position of the first reference object in the visible light image and in the infrared image, respectively;

    calibrating, in the registration model, the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera according to the first set distance and the lateral and longitudinal lengths of the same position of the first reference object in the visible light image and in the infrared image.
4. The method according to claim 3, characterized in that the lateral angle [Figure PCTCN2022095838-appb-100003] and the longitudinal angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are calibrated by the following formula:
    [Equation image: Figure PCTCN2022095838-appb-100004]
    where L_C is the first set distance, L_VR and W_VR are the lateral length and the longitudinal length, respectively, of the same position of the first reference object in the visible light image, and L_IR and W_IR are the lateral length and the longitudinal length, respectively, of the same position of the first reference object in the infrared image.
5. The method according to claim 3, characterized in that the lateral angle and the longitudinal angle between the optical axes of the visible light camera and the infrared camera in the registration model are obtained by the following steps:

    selecting a second reference object whose horizontal distance from the visible light camera is a second set distance;

    adjusting the optical axis of the visible light camera so that the same position of the second reference object lies at a specific position in the visible light image and in the infrared image;

    calibrating, in the registration model, the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera according to the second set distance and the coordinate values of the specific position.
6. The method according to claim 5, characterized in that the lateral angle [Figure PCTCN2022095838-appb-100005] and the longitudinal angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are calibrated by the following formula:
    [Equation image: Figure PCTCN2022095838-appb-100006]
    where L_C is the second set distance, the coordinates of the specific position at which the same position of the second reference object lies in the visible light image are (0, 0), and the coordinates of the specific position at which the same position of the second reference object lies in the infrared image are (0, 0);
    or:

    [Equation image: Figure PCTCN2022095838-appb-100007]
    where L_C is the second set distance, the coordinates of the specific position at which the same position of the second reference object lies in the visible light image are (200, 100), and the coordinates of the specific position at which the same position of the second reference object lies in the infrared image are (0, 0).
7. The method according to claim 3 or 5, characterized by further comprising, after calibrating the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model:

    substituting the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera into the registration model to establish a height and width mapping model of the visible light image and the infrared image;

    establishing, based on a plurality of different horizontal distances between the target object and the visible light camera, a mapping relationship between the height of a designated region of the target object in a plurality of sets of visible light images and the horizontal distance between the target object and the visible light camera.
8. The method according to claim 7, characterized in that registering and fusing the visible light image and the infrared image of the target object according to the registration model comprises:

    acquiring the coordinate values of the center position of the designated region of the target object in the visible light image and the height and width of the designated region of the target object in the visible light image, and finding, in the height and width mapping model, the horizontal distance value between the target object and the visible light camera according to the height and width of the designated region of the target object in the visible light image;

    inputting the corresponding horizontal distance value and the coordinate values of the center position of the designated region of the target object into the registration model, and computing the coordinate values of the center position of the designated region of the target object in the infrared image;

    inputting the height and width of the designated region of the target object in the visible light image into the height and width mapping model, and computing the height and width of the designated region of the target object in the infrared image;

    determining the designated region of the target object in the infrared image according to the coordinates of the center position of the designated region of the target object in the infrared image and the height and width of the designated region of the target object in the infrared image.
9. The method according to claim 8, characterized by further comprising, after determining the designated region of the target object in the infrared image:

    acquiring the highest temperature value in the designated region of the target object in the infrared image;

    marking the temperature value at a designated position of the designated region of the target object in the visible light image.
10. The method according to claim 1, characterized in that the optical axis of the infrared camera is perpendicular to the front-view plane of the device, wherein the registration model is:
    [Equation image: Figure PCTCN2022095838-appb-100008]
    where A, B, C, and D are the first, second, third, and fourth conversion parameters, respectively; m, n, and d are the relative distances of the spatial positions of the visible light camera and the infrared camera along the X, Y, and Z axes, respectively; [Figure PCTCN2022095838-appb-100009] and γ are the lateral angle and the longitudinal angle, respectively, between the optical axes of the visible light camera and the infrared camera; L is the horizontal distance between the target object and the visible light camera; (x_VR, y_VR) are the visible light image pixel coordinates; and (x_IR, y_IR) are the infrared image pixel coordinates.
11. The method according to claim 10, characterized in that the lateral angle and the longitudinal angle between the optical axes of the visible light camera and the infrared camera in the registration model are obtained by the following steps:

    selecting a first reference object whose horizontal distance from the visible light camera is a first set distance;

    acquiring images of the first reference object simultaneously through the visible light camera and the infrared camera, and obtaining the coordinates of the same position of the first reference object in the visible light image and in the infrared image, respectively;

    calibrating, in the registration model, the lateral angle and the longitudinal angle between the optical axis of the visible light camera and the optical axis of the infrared camera according to the first set distance and the coordinates of the same position of the first reference object in the visible light image and in the infrared image.
  12. The method according to claim 3, wherein the horizontal angle
    Figure PCTCN2022095838-appb-100010
    and the vertical angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are:
    Figure PCTCN2022095838-appb-100011
    Figure PCTCN2022095838-appb-100012
    where L_C is the first set distance, (x_VR-C, y_VR-C) are the coordinates of the same position of the first reference object in the visible light image, and (x_IR-C, y_IR-C) are the coordinates of the same position of the first reference object in the infrared image.
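Claim 12's angle formulas survive only as image placeholders, so the calibration step can only be sketched under stated assumptions. The sketch below recovers the two axis angles from one reference measurement at the known distance L_C, assuming the pixel offset between the two images converts to a metric offset through a known metres-per-pixel scale; the function name and that scale parameter are hypothetical, not from the patent.

```python
import math

def calibrate_axis_angles(L_C, vr_xy, ir_xy, metres_per_px):
    """Estimate the horizontal (phi) and vertical (gamma) optical-axis angles
    from the pixel disparity of the same reference point at distance L_C.

    Assumption: disparity * metres_per_px approximates the lateral offset of
    the lines of sight at distance L_C, so the angle is atan(offset / L_C).
    """
    dx_m = (ir_xy[0] - vr_xy[0]) * metres_per_px
    dy_m = (ir_xy[1] - vr_xy[1]) * metres_per_px
    phi = math.atan2(dx_m, L_C)    # horizontal angle between the optical axes
    gamma = math.atan2(dy_m, L_C)  # vertical angle between the optical axes
    return phi, gamma
```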
  13. The method according to claim 2, wherein the horizontal angle and the vertical angle between the optical axes of the visible light camera and the infrared camera in the registration model are obtained by the following steps:
    selecting a second reference object whose horizontal distance from the visible light camera is a second set distance;
    adjusting the optical axis of the visible light camera so that the same position of the second reference object lies at a specific position in each of the visible light image and the infrared image;
    calibrating, in the registration model, the horizontal angle and the vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera according to the second set distance and the coordinate values of the specific positions.
  14. The method according to claim 5, wherein the horizontal angle
    Figure PCTCN2022095838-appb-100013
    and the vertical angle γ between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model are:
    Figure PCTCN2022095838-appb-100014
    where L_C is the second set distance, the coordinates of the specific position at which the same position of the second reference object lies in the visible light image are (0,0), and the coordinates of the specific position at which it lies in the infrared image are (0,0);
    or,
    Figure PCTCN2022095838-appb-100015
    where L_C is the second set distance, the coordinates of the specific position at which the same position of the second reference object lies in the visible light image are (200,100), and the coordinates of the specific position at which it lies in the infrared image are (0,0).
  15. The method according to claim 3 or 5, further comprising, after calibrating the horizontal angle and the vertical angle between the optical axis of the visible light camera and the optical axis of the infrared camera in the registration model:
    substituting the horizontal angle and the vertical angle between the two optical axes into the registration model, to build a height and width mapping model between the visible light image and the infrared image;
    building, based on a plurality of different horizontal distances between the target object and the visible light camera, a mapping relationship between the height of the designated area of the target object in a plurality of groups of visible light images and the horizontal distance from the target object to the visible light camera.
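The second step of claim 15 amounts to a small calibration table: at several known distances, record the pixel height of the designated area (e.g. a face box) in the visible image, then answer new heights by interpolating between the two nearest calibrated points. A minimal sketch, with made-up sample values and hypothetical function names:

```python
def build_height_to_distance(samples):
    """samples: [(height_px, distance_m), ...] measured at calibration time.
    Sorted ascending by height so lookups can walk adjacent pairs."""
    return sorted(samples)

def lookup_distance(table, height_px):
    """Return the horizontal distance for a measured pixel height.
    Clamps outside the calibrated range, linearly interpolates inside it."""
    if height_px <= table[0][0]:
        return table[0][1]
    if height_px >= table[-1][0]:
        return table[-1][1]
    for (h0, d0), (h1, d1) in zip(table, table[1:]):
        if h0 <= height_px <= h1:
            t = (height_px - h0) / (h1 - h0)
            return d0 + t * (d1 - d0)

# Illustrative calibration: a taller face box means a closer target.
table = build_height_to_distance([(40, 3.0), (80, 1.5), (160, 0.75)])
```

Linear interpolation is a simplification; the true height-distance relation of a pinhole camera is hyperbolic, so a real table would use more sample points or interpolate in 1/height.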
  16. The method according to claim 7, wherein registering and fusing the visible light image and the infrared image of the target object according to the registration model comprises:
    obtaining the coordinates of the center position of the designated area of the target object in the visible light image, together with the height and width of that designated area, and finding, in the height and width mapping model, the horizontal distance value from the target object to the visible light camera corresponding to that height and width;
    inputting the corresponding horizontal distance value and the coordinates of the center position of the designated area of the target object into the registration model, and calculating the coordinates of the center position of the designated area of the target object in the infrared image;
    inputting the height and width of the designated area of the target object in the visible light image into the height and width mapping model, and calculating the height and width of the designated area of the target object in the infrared image;
    determining the designated area of the target object in the infrared image according to the coordinates of its center position and its height and width.
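The steps of claim 16 chain the two calibrated models: distance from box height, infrared center from the registration model, infrared box size from the size mapping. A compact sketch, where `registration_model` and `size_ratio` are hypothetical stand-ins for the patent's calibrated models:

```python
def ir_region(vis_center, vis_size, distance, registration_model, size_ratio):
    """Locate the target's designated area in the infrared image.

    vis_center / vis_size: center (x, y) and (width, height) of the area in
    the visible image; distance: horizontal distance from the height/width
    mapping model; registration_model: maps (x, y, L) to infrared pixel
    coordinates; size_ratio: assumed fixed IR/visible scale for the box size.
    """
    cx_ir, cy_ir = registration_model(vis_center[0], vis_center[1], distance)
    w_ir, h_ir = vis_size[0] * size_ratio, vis_size[1] * size_ratio
    return (cx_ir, cy_ir, w_ir, h_ir)

# Toy stand-ins: a pure distance-dependent shift, and a fixed scale.
model = lambda x, y, L: (x - 20 / L, y - 10 / L)
region = ir_region((320, 240), (100, 120), 2.0, model, 0.25)
```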
  17. The method according to claim 8, further comprising, after determining the designated area of the target object in the infrared image:
    obtaining the highest temperature value within the designated area of the target object in the infrared image;
    marking the temperature value at a designated position of the designated area of the target object in the visible light image.
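The temperature-marking step of claim 17 reduces to a maximum over the infrared region. A minimal sketch, modelling the thermal frame as a plain 2-D list of temperatures; a real device would read this from the infrared sensor and draw the label with its own overlay API (both are assumptions here):

```python
def max_temperature(thermal, region):
    """Highest temperature inside region = (x0, y0, width, height),
    where thermal[row][col] holds per-pixel temperatures."""
    x0, y0, w, h = region
    return max(
        thermal[y][x]
        for y in range(y0, y0 + h)
        for x in range(x0, x0 + w)
    )

thermal = [
    [36.2, 36.4, 36.1],
    [36.3, 37.1, 36.5],
    [36.0, 36.2, 36.4],
]
# Format the peak value as the label to draw on the visible image.
label = f"{max_temperature(thermal, (0, 0, 3, 3)):.1f} C"
```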
  18. The method according to claim 1, comprising:
    acquiring images simultaneously with the visible light camera and the infrared light camera of the device;
    obtaining a transition region according to the initial fusion-region model parameters;
    obtaining a fusion region by comparing the transition region with the pixel resolution of the visible light image;
    analyzing the visible light image within the fusion region to obtain feature information of the acquired image.
  19. The method according to claim 1, wherein the initial fusion-region model is obtained according to fixed parameters of the infrared camera and the visible light camera.
  20. The method according to claim 2, wherein the optical axis of the visible light camera is perpendicular to the front view plane, and the initial fusion-region model parameters comprise:
    Figure PCTCN2022095838-appb-100016
    Figure PCTCN2022095838-appb-100017
    where
    x_1 ≤ x_VR ≤ x_2, y_1 ≤ y_VR ≤ y_2,
    x_VR and y_VR are the pixel coordinates of the transition region; m, n and d are the distances between the projections of the visible light camera and the infrared light camera on the X, Z and Y axes respectively; L_max is the farthest distance at which the visible light camera can detect the object corresponding to the image;
    Figure PCTCN2022095838-appb-100018
    is the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOX plane; γ is the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOY plane; the vertical axis of the front view plane of the device is parallel to the Z axis; w_IR is the horizontal display resolution of the infrared light camera, h_IR is the vertical display resolution of the infrared light camera, w_VR is the horizontal display resolution of the visible light camera, and h_VR is the vertical display resolution of the visible light camera;
    Figure PCTCN2022095838-appb-100019
    α is the horizontal field of view of visible light, β is the vertical field of view of visible light, θ is the horizontal field of view of infrared light, and φ is the vertical field of view of infrared light.
  21. The method according to claim 2, wherein the optical axis of the infrared light camera is perpendicular to the front view plane, and the initial fusion-region model parameters comprise:
    Figure PCTCN2022095838-appb-100020
    Figure PCTCN2022095838-appb-100021
    where
    x_1 ≤ x_VR ≤ x_2, y_1 ≤ y_VR ≤ y_2,
    x_VR and y_VR are the pixel coordinates of the transition region; m, n and d are the distances between the projections of the visible light camera and the infrared light camera on the X, Z and Y axes respectively; L_max is the farthest distance at which the visible light camera can detect the object corresponding to the image;
    Figure PCTCN2022095838-appb-100022
    is the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOX plane; γ is the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOY plane; the vertical axis of the front view plane of the device is parallel to the Z axis; w_IR is the horizontal display resolution of the infrared light camera, h_IR is the vertical display resolution of the infrared light camera, w_VR is the horizontal display resolution of the visible light camera, and h_VR is the vertical display resolution of the visible light camera;
    Figure PCTCN2022095838-appb-100023
    α is the horizontal field of view of visible light, β is the vertical field of view of visible light, θ is the horizontal field of view of infrared light, and φ is the vertical field of view of infrared light.
  22. The method according to claim 3 or 4, wherein the step of obtaining the fusion region by comparing the transition region with the pixel resolution of the visible light image comprises:
    obtaining the upper and lower limits of the pixel coordinates of the fusion region by comparing the upper and lower limits of the pixel coordinates of the transition region with the resolution of the visible light image.
  23. The method according to claim 5, wherein the step of obtaining the upper and lower limits of the fusion region range, according to the upper and lower limits of the visible light image pixel coordinates in the initial fusion-region model and the parameters related to the visible light resolution, comprises:
    when x_1 > -w_VR, w_min = x_1; otherwise, w_min = -w_VR;
    when x_2 < w_VR, w_max = x_2; otherwise, w_max = w_VR;
    when y_1 > -h_VR, h_min = y_1; otherwise, h_min = -h_VR;
    when y_2 < h_VR, h_max = y_2; otherwise, h_max = h_VR;
    the pixel coordinates of the fusion region satisfy w_min ≤ x_VR ≤ w_max and h_min ≤ y_VR ≤ h_max.
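Claim 23's rule is stated explicitly, so it can be transcribed directly: the transition region's pixel bounds are clipped to the visible image's coordinate range [-w_VR, w_VR] x [-h_VR, h_VR] (the claim's pixel coordinates are centered on the image, hence the negative bounds). The function name is ours; the logic follows the claim line by line.

```python
def fusion_bounds(x1, x2, y1, y2, w_vr, h_vr):
    """Clamp the transition region (x1..x2, y1..y2) to the visible image's
    centered coordinate range, yielding the fusion region's bounds."""
    w_min = x1 if x1 > -w_vr else -w_vr
    w_max = x2 if x2 < w_vr else w_vr
    h_min = y1 if y1 > -h_vr else -h_vr
    h_max = y2 if y2 < h_vr else h_vr
    return w_min, w_max, h_min, h_max
```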
  24. The method according to claim 3 or 4, wherein the projections of the visible light camera and the infrared light camera on the X, Y and Z axes are all spaced apart from each other.
  25. The method according to claim 3 or 4, further comprising:
    calibrating the visible light camera and the infrared light camera to determine the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOX plane, and the angle between those projections on the ZOY plane.
  26. The method according to claim 8, wherein the step of calibrating the visible light camera and the infrared light camera comprises:
    acquiring images of the same object with the fixed visible light camera and infrared light camera, to obtain a visible light image and an infrared light image of the same object;
    calculating, from the horizontal length and the vertical length of the same position of the same object in the visible light image and in the infrared light image, the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOX plane, and the angle between those projections on the ZOY plane.
  27. The method according to claim 8, wherein the step of calibrating the visible light camera and the infrared light camera comprises:
    adjusting the optical axis of at least one of the visible light camera and the infrared light camera so that the same position of the same object lies at a respective specific position in each of the visible light picture and the infrared light picture, thereby setting the angle between the optical axes of the two cameras to a preset value, and thereby determining the angle between the projections of the optical axis of the infrared light camera and of the optical axis of the visible light camera on the ZOX plane and the angle between those projections on the ZOY plane.
  28. An apparatus for processing a visible light image and an infrared image, located on a device equipped with a visible light camera and an infrared camera, wherein the optical axis of the visible light camera or the optical axis of the infrared camera is perpendicular to the front view plane of the device, the apparatus comprising:
    a registration model building module, configured to build a registration model between the coordinate positions of the visible light image of a target object acquired by the visible light camera and of the infrared image of the target object acquired by the infrared camera, according to the relative spatial position of the visible light camera and the infrared camera, the conversion parameters, and the horizontal distance from the target object to the visible light camera;
    an image fusion module, configured to register and fuse the visible light image and the infrared image of the target object according to the registration model.
  29. An image acquisition and processing apparatus, comprising:
    a device comprising a visible light camera and an infrared light camera, the optical axis of the visible light camera being perpendicular to the front view plane of the device;
    a processing unit that obtains feature information of the acquired image using the image processing method according to any one of claims 1 to 28.
PCT/CN2022/095838 2021-06-08 2022-05-30 Method and apparatus for processing visible light image and infrared image WO2022257794A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202110645248.3 2021-06-08
CN202110650324.XA CN115457090A (en) 2021-06-08 2021-06-08 Registration fusion method and device for visible light image and infrared image
CN202110645248.3A CN115457089A (en) 2021-06-08 2021-06-08 Registration fusion method and device for visible light image and infrared image
CN202110650324.X 2021-06-08
CN202110909327.0 2021-08-09
CN202110909327.0A CN113792592B (en) 2021-08-09 2021-08-09 Image acquisition processing method and image acquisition processing device

Publications (1)

Publication Number Publication Date
WO2022257794A1 true WO2022257794A1 (en) 2022-12-15

Family

ID=84424621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095838 WO2022257794A1 (en) 2021-06-08 2022-05-30 Method and apparatus for processing visible light image and infrared image

Country Status (1)

Country Link
WO (1) WO2022257794A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7652251B1 (en) * 2008-11-17 2010-01-26 Fluke Corporation Registration methods for fusing corresponding infrared and visible light images
CN103024281A (en) * 2013-01-11 2013-04-03 重庆大学 Infrared and visible video integration system
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN112053314A (en) * 2020-09-04 2020-12-08 深圳市迈测科技股份有限公司 Image fusion method and device, computer equipment, medium and thermal infrared imager

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309569A (en) * 2023-05-18 2023-06-23 中国民用航空飞行学院 Airport environment anomaly identification system based on infrared and visible light image registration
CN116309569B (en) * 2023-05-18 2023-08-22 中国民用航空飞行学院 Airport environment anomaly identification system based on infrared and visible light image registration
CN116895094A (en) * 2023-09-11 2023-10-17 杭州魔点科技有限公司 Dark environment imaging method, system, device and medium based on binocular fusion
CN116895094B (en) * 2023-09-11 2024-01-30 杭州魔点科技有限公司 Dark environment imaging method, system, device and medium based on binocular fusion

Similar Documents

Publication Publication Date Title
JP6552729B2 (en) System and method for fusing the outputs of sensors having different resolutions
WO2022257794A1 (en) Method and apparatus for processing visible light image and infrared image
US9482515B2 (en) Stereoscopic measurement system and method
US9454822B2 (en) Stereoscopic measurement system and method
JP2009042162A (en) Calibration device and method therefor
CN109163657A (en) A kind of circular target position and posture detection method rebuild based on binocular vision 3 D
US9286506B2 (en) Stereoscopic measurement system and method
WO2019144269A1 (en) Multi-camera photographing system, terminal device, and robot
JP5079547B2 (en) Camera calibration apparatus and camera calibration method
WO2021259365A1 (en) Target temperature measurement method and apparatus, and temperature measurement system
EP2310799B1 (en) Stereoscopic measurement system and method
CN109493378B (en) Verticality detection method based on combination of monocular vision and binocular vision
US20130331145A1 (en) Measuring system for mobile three dimensional imaging system
Yang et al. Effect of field of view on the accuracy of camera calibration
US20240159621A1 (en) Calibration method of a portable electronic device
EP2283314B1 (en) Stereoscopic measurement system and method
CN114862960A (en) Multi-camera calibrated image ground leveling method and device, electronic equipment and medium
KR100991570B1 (en) A remote sensing method of diverse signboards&#39; Size and Apparatus using thereof
AU2009249001B2 (en) Stereoscopic measurement system and method
CN115457090A (en) Registration fusion method and device for visible light image and infrared image
CN115457089A (en) Registration fusion method and device for visible light image and infrared image
CN113792592B (en) Image acquisition processing method and image acquisition processing device
US11399778B2 (en) Measuring instrument attachment assist device and measuring instrument attachment assist method
CN109872368A (en) Image processing method, device and test macro
US20200034985A1 (en) Method and system for measuring the orientation of one rigid object relative to another

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819392

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE