WO2022100242A1 - Image processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2022100242A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
calibration
response function
pixel
Application number
PCT/CN2021/116809
Other languages
French (fr)
Chinese (zh)
Inventor
林枝叶
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • This application relates to the technical field of image processing, and in particular to an image processing method, apparatus, electronic device, and computer-readable storage medium, as well as a method, apparatus, electronic device, and computer-readable storage medium for determining a pixel mapping relationship of a binocular camera.
  • In related applications, images captured by multiple cameras are often aligned and then fused; fusing the information collected by the multiple cameras can effectively enhance image quality.
  • However, images captured by different cameras may have similar image information structures but inconsistent gradients, resulting in poor image alignment accuracy and a limited alignment effect.
  • Embodiments of the present application provide an image processing method, apparatus, electronic device, and computer-readable storage medium, as well as a method, apparatus, electronic device, and computer-readable storage medium for determining a pixel mapping relationship of a binocular camera, which can improve image alignment accuracy.
  • An image processing method, comprising:
  • acquiring a first image and a second image to be processed; the first image is captured by a first camera, and the second image is captured by a second camera;
  • performing pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
  • aligning the mapped image corresponding to the second image with the first image.
  • An image processing apparatus, comprising:
  • an image acquisition module configured to acquire a first image and a second image to be processed; the first image is captured by a first camera, and the second image is captured by a second camera;
  • a pixel mapping processing module configured to perform pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
  • an image alignment processing module configured to align the mapped image corresponding to the second image with the first image.
  • An electronic device includes a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
  • acquiring a first image and a second image to be processed; the first image is captured by the first camera, and the second image is captured by the second camera;
  • performing pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on the first camera response function of the first camera and the second camera response function of the second camera; and
  • aligning the mapped image corresponding to the second image with the first image.
  • A computer-readable storage medium stores a computer program that, when executed by a processor, implements the following steps:
  • acquiring a first image and a second image to be processed; the first image is captured by the first camera, and the second image is captured by the second camera;
  • performing pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on the first camera response function of the first camera and the second camera response function of the second camera; and
  • aligning the mapped image corresponding to the second image with the first image.
  • In the above image processing method, apparatus, electronic device, and storage medium, pixel mapping is performed on the second image captured by the second camera according to the pixel mapping relationship determined by the first camera response function of the first camera and the second camera response function of the second camera, and the obtained mapped image corresponding to the second image is aligned with the first image.
  • Because the pixel mapping relationship is determined by the camera response functions of the two cameras, the second image can be mapped into the pixel space of the first image, which resolves the problem of similar image information structures but inconsistent gradients, ensures the accuracy of image alignment, and improves the image alignment effect.
  • A method for determining a pixel mapping relationship of a binocular camera, comprising:
  • acquiring a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images captured by a first camera in the binocular camera under the same scene and different exposure time conditions, and the second calibration image group includes second calibration images captured by a second camera in the binocular camera under the same scene and different exposure time conditions;
  • determining a first camera response function corresponding to the first camera based on each first calibration image;
  • determining a second camera response function corresponding to the second camera based on each second calibration image; and
  • determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • An apparatus for determining a pixel mapping relationship of a binocular camera, comprising:
  • a calibration image group acquisition module configured to acquire a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images captured by the first camera in the binocular camera under the same scene and different exposure time conditions, and the second calibration image group includes second calibration images captured by the second camera in the binocular camera under the same scene and different exposure time conditions;
  • a first camera response function determination module configured to determine a first camera response function corresponding to the first camera based on each first calibration image;
  • a second camera response function determination module configured to determine a second camera response function corresponding to the second camera based on each second calibration image; and
  • a pixel mapping relationship determination module configured to determine the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • An electronic device includes a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
  • acquiring a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images captured by the first camera in the binocular camera under the same scene and different exposure time conditions, and the second calibration image group includes second calibration images captured by the second camera in the binocular camera under the same scene and different exposure time conditions;
  • determining a first camera response function corresponding to the first camera based on each first calibration image, and a second camera response function corresponding to the second camera based on each second calibration image; and
  • determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • A computer-readable storage medium stores a computer program that, when executed by a processor, implements the following steps:
  • acquiring a first calibration image group and a second calibration image group as above; determining the first camera response function and the second camera response function; and
  • determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • In the above method, apparatus, electronic device, and storage medium for determining a pixel mapping relationship of a binocular camera, the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera are determined respectively, and the pixel mapping relationship between the first camera and the second camera is determined based on the first camera response function and the second camera response function.
  • Because the pixel mapping relationship is determined according to the first camera response function of the first camera and the second camera response function of the second camera, the camera response functions can be used to map the second image captured by the second camera of the binocular camera into the pixel space of the first image captured by the first camera, which resolves the problem of similar image information structures but inconsistent gradients, ensures the accuracy of image alignment, and improves the image alignment effect.
  • Figure 1 is a schematic diagram of the analysis of RGB image imaging.
  • Figure 2 is a schematic diagram of the analysis of NIR image imaging.
  • FIG. 3 is an application environment diagram of an image processing method or a method for determining a pixel mapping relationship of a binocular camera in one embodiment.
  • FIG. 4 is a flowchart of an image processing method in one embodiment.
  • FIG. 5 is a flowchart of determining a first camera response function in one embodiment.
  • FIG. 6 is a flowchart of an image processing method in another embodiment.
  • FIG. 7 is a flowchart of camera calibration in one embodiment.
  • Figure 8 is a flow chart of calibration of CRF in one embodiment.
  • FIG. 9 is a schematic diagram of a camera response curve in one embodiment.
  • FIG. 10 is a schematic diagram of a camera response curve in another embodiment.
  • FIG. 11 is a schematic diagram of a camera response curve in yet another embodiment.
  • FIG. 12 is a flowchart of a method for determining a pixel mapping relationship of a binocular camera in one embodiment.
  • FIG. 13 is a structural block diagram of an image processing apparatus in an embodiment.
  • FIG. 14 is a structural block diagram of an apparatus for determining a pixel mapping relationship of a binocular camera in one embodiment.
  • Figure 15 is a diagram of the internal structure of a computer device in one embodiment.
  • Through the fusion of the information of an RGB image and an NIR (near-infrared) image, it is possible not only to enhance image quality, but also to perform object recognition in extremely dark scenes, image denoising, high dynamic range (HDR) imaging, image dehazing, skin depigmentation, and the like.
  • The fusion of RGB images and NIR images includes two major processes: image alignment and image fusion. Alignment is the foundation, and fusion is the key. If the alignment error is large, artifacts such as ghosting will appear; if the fusion effect is poor, problems such as color distortion and white borders will appear.
  • For image alignment, feature point detection and matching are often used. These feature points include Harris corners, the FAST (Features from Accelerated Segment Test) operator, the SURF (Speeded-Up Robust Features) operator, the SIFT (Scale-Invariant Feature Transform) operator, and the like, which have rotation invariance and illumination invariance.
  • Because their information sources differ, RGB images and NIR images of the same scene have similar structures but inconsistent gradient directions across different objects.
  • Figure 1 is an RGB image, and Figure 2 is an NIR image of the same scene.
  • In the green plant areas marked by the two black boxes, the RGB image is darker while the NIR image is brighter; in the extremely dark area marked by the white box, the RGB image is darker than its surrounding area while the NIR image is comparable to its surrounding area; in the sky and the other areas of the building, the brightness of the RGB image and the NIR image is comparable.
  • The essential reason for this problem is that the RGB and NIR bands are different, and the transmittance of different objects differs between the two bands.
  • When traditional feature point detection and alignment techniques, such as SIFT feature point detection and matching, are used to align RGB images and NIR images, the alignment accuracy is poor, the alignment effect is limited, and the result cannot meet the needs of subsequent image fusion.
  • Therefore, the present application proposes an image processing method, apparatus, electronic device, and computer-readable storage medium that can improve the image alignment effect, as well as a method, apparatus, electronic device, and computer-readable storage medium for determining a pixel mapping relationship of a binocular camera, which are described in detail through the following embodiments.
  • The terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of this application.
  • Both the first client and the second client are clients, but they are not the same client.
  • FIG. 3 is a schematic diagram of an application environment of the image processing method in one embodiment.
  • the application environment includes an electronic device 302.
  • the electronic device 302 is equipped with multiple cameras.
  • The electronic device 302 can shoot through the multiple cameras, and align and fuse the images captured by the multiple cameras to enhance the quality of the captured images.
  • Specifically, the electronic device 302 acquires the first image captured by the first camera and the second image captured by the second camera, performs pixel mapping on the second image according to the pixel mapping relationship determined by the first camera response function of the first camera and the second camera response function of the second camera, and aligns the obtained mapped image with the first image.
  • In addition, the above image processing method can also be implemented by a server; that is, the server acquires the first image and the second image to be processed, for example from a database, or the electronic device 302 directly sends the captured first image and second image to the server through the network, so that the server performs the image alignment processing.
  • FIG. 3 is a schematic diagram of an application environment of a method for determining a pixel mapping relationship of a binocular camera in one embodiment.
  • The electronic device 302 obtains a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images captured by the first camera in the binocular camera under the conditions of the same scene and different exposure times, and the second calibration image group includes second calibration images captured by the second camera in the binocular camera under the conditions of the same scene and different exposure times.
  • The electronic device 302 respectively determines the first camera response function corresponding to the first camera in the binocular camera and the second camera response function corresponding to the second camera, and determines the pixel mapping relationship between the first camera and the second camera based on the first camera response function and the second camera response function.
  • In addition, the above method for determining the pixel mapping relationship of a binocular camera can also be implemented by a server; that is, the server obtains the first calibration image group and the second calibration image group, for example from a database, or the electronic device 302 directly sends the captured first calibration image group and second calibration image group to the server through the network, so that the server determines the pixel mapping relationship of the binocular camera.
  • the electronic device 302 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, etc.; the server can be implemented by an independent server or a server cluster composed of multiple servers.
  • FIG. 4 is a flowchart of an image processing method in one embodiment.
  • the image processing method in this embodiment is described by taking the operation on the electronic device in FIG. 3 as an example.
  • The image processing method includes processes 402 to 406.
  • Process 402: A first image and a second image to be processed are acquired; the first image is captured by the first camera, and the second image is captured by the second camera.
  • The first image and the second image are the images that need to be aligned; they can be captured by two cameras for the same scene, the first image by the first camera and the second image by the second camera.
  • the first image may be a color image captured by a visible light camera
  • the second image may be an infrared image captured by an infrared camera.
  • Specifically, the electronic device may be provided with a binocular camera including a first camera and a second camera. For example, two rear cameras may be provided, and the two cameras shoot at the same time to obtain the first image and the second image to be processed.
  • Process 404: Perform pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image; the pixel mapping relationship is determined based on the first camera response function of the first camera and the second camera response function of the second camera.
  • The pixel mapping relationship reflects the mapping between the pixel value of each pixel in the image captured by the first camera and the pixel value of each pixel in the image captured by the second camera when the two cameras shoot the same scene at the same time. That is, through the pixel mapping relationship, the images captured by the first camera and the second camera can be mapped into the same color space, for example mapping the image captured by the first camera into the color space corresponding to the image captured by the second camera.
  • the pixel mapping relationship between the first camera and the second camera is determined according to the first camera response function of the first camera and the second camera response function of the second camera.
  • the camera response function (Camera Response Function, CRF) is used to characterize the corresponding relationship between the brightness of the image captured by the camera and the illuminance (Radiance) in the real world.
  • the brightness or illuminance observed in the real world is constant and will not change with different cameras, and there is a certain correspondence between the brightness of the image captured by the camera and the illuminance in the real world.
  • Different cameras have different CRF curves, but in every case the brightness of the image captured by the camera has a definite relationship with the illumination of the real world.
  • the color gamut of different cameras can be mapped to the same space.
  • the camera response function can be pre-calibrated by the image captured by the camera.
  • The mapped image is obtained by performing pixel mapping on the second image through the pixel mapping relationship between the first camera and the second camera. Specifically, the pixel value of each pixel in the second image may be updated according to the pixel mapping relationship to obtain the mapped image.
  • After obtaining the first image and the second image to be processed, the electronic device obtains the pixel mapping relationship between the first camera and the second camera, and performs pixel mapping on the second image based on the pixel mapping relationship, so that the second image is mapped to the color space of the first image.
  • This removes the inconsistency in gradients between the mapped image and the first image despite their similar image information structures, so that aligning the mapped image with the first image ensures the image alignment effect.
  • Process 406: Align the mapped image corresponding to the second image with the first image.
  • The mapped image corresponding to the second image is obtained by pixel mapping based on the pixel mapping relationship between the first camera and the second camera, so its gradients are more consistent with those of the first image. The mapped image and the first image are then aligned, for example through SIFT feature detection and matching, so that the image captured by the first camera and the image captured by the second camera are accurately aligned, improving the image alignment effect.
  • In the above image processing method, pixel mapping is performed on the second image captured by the second camera according to the pixel mapping relationship determined by the first camera response function of the first camera and the second camera response function of the second camera, and the obtained mapped image corresponding to the second image is aligned with the first image. Because the pixel mapping relationship is determined by the two camera response functions, the second image can be mapped into the pixel space of the first image, which resolves the problem of similar image information structures but inconsistent gradients, ensures the accuracy of image alignment, and improves the image alignment effect.
  • In one embodiment, the image processing method further includes a process of determining the pixel mapping relationship based on the first camera response function of the first camera and the second camera response function of the second camera, specifically including: acquiring a first calibration image group and a second calibration image group, where the first calibration image group includes first calibration images captured by the first camera under the same scene and different exposure time conditions, and the second calibration image group includes second calibration images captured by the second camera under the same scene and different exposure time conditions; determining a first camera response function corresponding to the first camera based on each first calibration image; determining a second camera response function corresponding to the second camera based on each second calibration image; and determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • That is, the images in the first calibration image group and the second calibration image group are all captured by the corresponding cameras for the same scene; the exposure times of the first calibration images in the first calibration image group differ from one another, and the exposure times of the second calibration images in the second calibration image group differ from one another.
  • The shooting scene corresponding to the first calibration image group and the second calibration image group may be a high dynamic range scene including overexposed and very dark areas, so as to ensure that the determined pixel mapping relationship can be applied to high dynamic range scenes and to guarantee its applicable scope.
  • The numbers of first calibration images and second calibration images and the corresponding exposure times can be flexibly set according to actual needs. For example, the number of first calibration images and the number of second calibration images may each be 5, with successively increasing exposure times, and the exposure times used for the first calibration images and the second calibration images may be different.
  • the exposure time can be adjusted by modifying the signal gain (gain value) and shutter speed (shutter value) of the electronic device.
  • Specifically, when the electronic device calibrates the pixel mapping relationship between the first camera and the second camera, self-calibration may be performed for the first camera and the second camera respectively to determine the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera, and mutual calibration is then performed using the first camera response function and the second camera response function to obtain the pixel mapping relationship between the first camera and the second camera.
  • Specifically, when determining the first camera response function based on each first calibration image and the second camera response function based on each second calibration image, the electronic device may first align the calibration images within the first calibration image group and within the second calibration image group, for example by the median threshold bitmap (MTB) alignment method, and then determine the corresponding camera response functions based on the aligned first calibration images and the aligned second calibration images.
  • Specifically, the electronic device can obtain the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera through the Debevec algorithm. After obtaining the two response functions, the electronic device determines the pixel mapping relationship between the first camera and the second camera based on them. For example, the pixel values of matching points between the first calibration image and the second calibration image, together with the relative illuminance values those matching points yield under the first camera response function and the second camera response function, can be used to determine the illuminance mapping relationship between the first camera and the second camera, and the pixel mapping relationship between the first camera and the second camera is then determined based on the illuminance mapping relationship.
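  • As an illustration of this self-calibration step, the following is a minimal sketch using OpenCV's HDR module; the file names, exposure times, and the use of this particular API are assumptions for illustration, not prescribed by the patent.

```python
import cv2
import numpy as np

# Five calibration images of the same scene at increasing exposure times
# (hypothetical values, in seconds).
paths = ["calib_t1.png", "calib_t2.png", "calib_t3.png", "calib_t4.png", "calib_t5.png"]
times = np.array([1 / 160, 1 / 80, 1 / 40, 1 / 20, 1 / 10], dtype=np.float32)
images = [cv2.imread(p) for p in paths]

# Median threshold bitmap (MTB) alignment of the calibration images.
cv2.createAlignMTB().process(images, images)

# Debevec calibration: recovers a 256-entry camera response curve per channel.
crf = cv2.createCalibrateDebevec().process(images, times)
print(crf.shape)  # (256, 1, 3) for a 3-channel RGB camera
```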
  • In this embodiment, self-calibration is performed using the calibration images captured by the first camera and the second camera respectively to determine the first camera response function and the second camera response function, and mutual calibration is performed according to the obtained response functions to obtain the pixel mapping relationship between the first camera and the second camera.
  • Because the pixel mapping relationship is determined based on the response functions of the two cameras, the color spaces of the images captured by the first camera and the second camera can be mapped through the pixel mapping relationship, ensuring gradient consistency when the images are aligned, which can effectively improve the image alignment effect.
  • In one embodiment, the first camera is a visible light camera. As shown in FIG. 5, the process of determining the first camera response function, that is, determining the first camera response function corresponding to the first camera based on each first calibration image, includes processes 502 to 508.
  • Process 502: Acquire the target channel images, corresponding to the target color channel, of each first calibration image.
  • Visible light cameras can capture color images; for example, the sensor of an RGB camera includes red, green, and blue filters that receive the reflected light of objects and generate RGB color images.
  • The target color channel is the color channel for which the corresponding camera response function needs to be constructed.
  • The camera response function is related to the camera itself. The correspondence between the brightness of the image captured by a camera and the illuminance in the real world differs from camera to camera; that is, different cameras correspond to different camera response functions. Moreover, for the same camera, the curves of the camera response function differ between color channels. For example, an RGB image captured by a visible light camera consists of three color channels, and camera response functions can be calibrated for the R, G, and B channels respectively. The camera response functions of the individual channels differ from one another, but each of them reflects the correspondence between the brightness of the image captured by the visible light camera and the illumination in the real world.
  • the target channel image is an image of the first calibration image corresponding to the target color channel. If the first calibration image is an RGB image and the target color channel is an R channel, the target channel image may be an R channel image obtained by channel separation of the RGB image.
  • the target color channel can be set according to actual needs.
  • Process 504: Determine the first feature points corresponding to the same position of the same scene in each target channel image.
  • Each first calibration image is captured for the same scene. For a given position in the scene, the first feature point corresponding to that position is determined in each target channel image; the first feature points of the different target channel images all point to the same location of the scene in the real world, although the exposure times of the target channel images differ. Specifically, the electronic device may determine, from each target channel image, the first feature point corresponding to the same position in the same scene.
  • Process 506: Determine the channel luminance value, corresponding to the target color channel, of each first feature point.
  • After obtaining the mutually corresponding first feature points in the target channel images, the electronic device further determines the channel luminance value of each first feature point for the target color channel. Specifically, the electronic device may determine the channel pixel value of the first feature point in the target color channel and obtain the channel luminance value from that channel pixel value. When the target color channel is a single channel, the channel luminance value is equal to the channel pixel value.
  • Process 508: Determine the first camera response function corresponding to the first camera according to the channel luminance value of each first feature point in the target color channel.
  • Specifically, the electronic device may obtain the first camera response function corresponding to the first camera from the channel luminance values of the first feature points in the target color channel, based on the Debevec algorithm.
  • In this embodiment, camera response function calibration is performed on the target channel images, corresponding to the target color channel, of the first calibration images captured by the first camera, so that the camera response functions of the first camera for various channels can be determined according to actual needs.
  • In one embodiment, acquiring the target channel images, corresponding to the target color channel, of the first calibration images includes: performing channel separation on each first calibration image to obtain separated channel images; and obtaining the target channel image corresponding to the target color channel according to the separated channel images.
  • The separated channel images are the images of the individual color channels obtained after channel separation of the first calibration image, and they correspond to the color space in which the first calibration image lies.
  • For example, channel separation of an RGB image yields an R channel image, a G channel image, and a B channel image;
  • channel separation of an HSV (Hue, Saturation, Value) image yields an H channel image, an S channel image, and a V channel image.
  • the target color channel can be set according to actual requirements.
  • Specifically, the electronic device performs channel separation on the first calibration image to obtain the separated channel images, and determines the target channel image corresponding to the target color channel based on them. The separated channel image corresponding to the target color channel can be selected directly as the target channel image; when the target color channel includes all of the separated channels, all separated channel images can be used as target channel images so as to establish the camera response functions of the first camera for each color channel.
  • In addition, the separated channel images can also be transformed to obtain the target channel image. For example, when the target color channel is the luminance channel, that is, the channel in the color space that represents the brightness of the image, the target channel image is a luminance channel image; if the separated channel images of the first calibration image are the R channel image, the G channel image, and the B channel image, the luminance channel image, i.e. the target channel image, is computed from them.
  • In this embodiment, the required target channel image is determined quickly through channel separation of the first calibration image, which ensures the processing efficiency of camera response function calibration.
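  • As a concrete illustration of deriving a luminance target channel from the separated R, G, and B channel images, the following minimal sketch uses the standard BT.601 luminance weights; the patent does not fix a particular luminance formula, so the weights and file name are assumptions.

```python
import cv2

bgr = cv2.imread("first_calibration.png")    # hypothetical first calibration image
b, g, r = cv2.split(bgr)                     # channel separation
luma = 0.299 * r + 0.587 * g + 0.114 * b     # luminance channel image
target_channel = luma.astype("uint8")        # target channel image
```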
  • In one embodiment, acquiring the target channel images, corresponding to the target color channel, of each first calibration image includes: transforming the first calibration image into a target color space including the target color channel to obtain a target color space image; and obtaining the target channel image corresponding to the target color channel according to the target color space image.
  • The target color channel is preset according to actual requirements, and the color channels of the target color space include the target color channel, so that the target channel image corresponding to the target color channel can be obtained from the target color space image.
  • Specifically, when acquiring the target channel image, the electronic device transforms the color space of the first calibration image: the target color space including the target color channel is determined first, and the first calibration image is transformed into the target color space to obtain the target color space image.
  • the electronic device obtains a target channel image corresponding to the target color channel according to the target color space image.
  • the electronic device may perform channel separation on the target color space image, and obtain the target channel image from the separated channel image obtained by the channel separation.
  • In this embodiment, the first calibration image is transformed into the target color space and the target channel image is obtained from the transformed result, so that the camera response functions of the first camera for various channels can be obtained from the first calibration image through color space transformation.
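  • A minimal sketch of this color-space-transform variant, assuming HSV as the target color space and V as the target color channel (both chosen here only for illustration):

```python
import cv2

bgr = cv2.imread("first_calibration.png")   # hypothetical first calibration image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # transform to the target color space
h, s, v = cv2.split(hsv)                    # channel separation of the target color space image
target_channel = v                          # target channel image for the V channel
```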
  • The working principle of an infrared camera is that an infrared lamp emits infrared rays to illuminate the object; the infrared rays are diffusely reflected and received by the camera to form an infrared image, such as an NIR image.
  • In one embodiment, the second camera is an infrared camera, and the second calibration image captured by the second camera is a single-channel image whose pixel values are equal to its brightness values.
  • The second camera response function corresponding to the second camera may therefore be obtained directly from the pixel values of the second calibration images, based on a camera response function determination algorithm such as the Debevec algorithm.
  • Specifically, when the second camera is an infrared camera, the electronic device determines the second feature points corresponding to the same position of the same scene in each second calibration image. Each second calibration image is captured for the same scene; for a given position, the second feature point corresponding to that position is determined in each second calibration image, and the second feature points of the different second calibration images all point to the same location of the scene in the real world, although the exposure times of the second calibration images differ. After obtaining the second feature points, the electronic device obtains the pixel value corresponding to each second feature point and, based on the Debevec algorithm, obtains the second camera response function corresponding to the second camera from these pixel values.
  • In this embodiment, the camera response function is calibrated directly from the pixel values of the second calibration images captured by the infrared camera, so the camera response function corresponding to the second camera can be determined quickly.
  • In one embodiment, determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: acquiring at least one matching point pair, where a matching point pair is obtained by feature matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; determining the first point pixel value corresponding to the first matching point and the second point pixel value corresponding to the second matching point; determining a first relative illuminance value according to the first point pixel value and the first camera response function; determining a second relative illuminance value according to the second point pixel value and the second camera response function; determining an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and determining the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
  • the matching point pair is obtained by feature matching according to the first matching point and the second matching point, the first matching point is extracted from the first calibration image, and the second matching point is extracted from the second calibration image.
  • Specifically, first matching points and second matching points can be extracted from the first calibration image and the second calibration image respectively, and feature matching is performed between the extracted first matching points and second matching points; matching point pairs are constructed from the feature matching result, each pair including a first matching point from the first calibration image and a second matching point from the second calibration image.
  • Feature point detection algorithms such as FAST, SUSAN (Smallest Univalue Segment Assimilating Nucleus), SIFT, SURF, or LBP (Local Binary Pattern) can be used to process the first calibration image and the second calibration image to obtain the first matching points and the second matching points.
  • Feature matching refers to matching the obtained first matching points and second matching points to determine the corresponding matching points in the first calibration image and the second calibration image, generally pixels in the two calibration images that correspond to the same position in the shooting scene.
  • Specifically, the BRIEF (Binary Robust Independent Elementary Features) descriptor, the Hamming distance, and the like can be used to perform feature matching between the first matching points and the second matching points, and matching point pairs are constructed based on the feature matching result.
  • each matching point pair includes a first matching point and a second matching point that match each other, the first matching point comes from the first calibration image, and the second matching point comes from the second calibration image.
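  • The following is a minimal sketch of extracting matching point pairs between a first and a second calibration image. ORB (FAST keypoints with BRIEF-like binary descriptors, matched by Hamming distance) is used here as one concrete combination of the detectors and matching methods listed above; the file names are assumptions.

```python
import cv2

img1 = cv2.imread("first_calibration.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
img2 = cv2.imread("second_calibration.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check to reject weak pairs.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match yields one matching point pair (first matching point, second matching point).
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```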
  • illuminance refers to the energy of visible light received per unit area.
  • What the camera captures of the real world reflects the relative illuminance perceived by the camera, and this relative illuminance has a certain proportional relationship with the true illuminance in the real world.
  • the camera response function of the camera reflects the relationship between the pixel value of the image captured by the camera and the relative illuminance value, that is, the corresponding relative illuminance value can be obtained through the pixel value of the image captured by the camera and the camera response function.
  • Through the relative illuminance values of the two cameras, the illuminance mapping relationship between the relative illuminance of the first camera and that of the second camera can be obtained, and based on the illuminance mapping relationship, the pixel mapping relationship between the first camera and the second camera can be constructed.
  • Specifically, the electronic device obtains at least one matching point pair and determines, for each pair, the first point pixel value of the first matching point and the second point pixel value of the second matching point. The electronic device then determines the first relative illuminance value based on the first point pixel value and the first camera response function, and the second relative illuminance value based on the second point pixel value and the second camera response function.
  • Further, the electronic device determines the illuminance mapping relationship from the first relative illuminance values and the corresponding second relative illuminance values; for example, the electronic device can perform statistical analysis on them to obtain the illuminance mapping relationship between the first camera and the second camera, which describes the correspondence, in the same scene, between the relative illuminance values of the image captured by the first camera and those of the image captured by the second camera. Based on the determined illuminance mapping relationship, the electronic device obtains the pixel mapping relationship between the first camera and the second camera.
  • The pixel mapping relationship describes the correspondence between the pixel values of the image captured by the first camera and the pixel values of the image captured by the second camera in the same scene; based on this correspondence, pixel mapping between the images captured by the two cameras can be realized.
  • Specifically, the pixel values of the image captured by the first camera can be traversed; the first relative illuminance value corresponding to each pixel value is determined through the first camera response function; the corresponding second relative illuminance value is determined based on each first relative illuminance value and the illuminance mapping relationship; the pixel value of the image captured by the second camera is determined based on each second relative illuminance value and the second camera response function; and the pixel mapping relationship between the first camera and the second camera is constructed from the pixel values of the image captured by the first camera and the pixel values of the image captured by the second camera.
  • In this embodiment, matching point pairs are obtained by feature matching the first matching points extracted from the first calibration image with the second matching points extracted from the second calibration image; the illuminance mapping relationship between the first camera and the second camera is determined through the pixel values corresponding to the matching points, and the pixel mapping relationship between the first camera and the second camera is obtained based on the illuminance mapping relationship, thereby realizing mutual calibration of the first camera and the second camera.
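  • A minimal sketch of this mutual calibration under simplifying assumptions: crf1 and crf2 are 256-entry arrays mapping a pixel value to a relative illuminance value (for example, recovered by the Debevec algorithm), value_pairs holds the (first point pixel value, second point pixel value) of the matching point pairs, and the illuminance mapping relationship is modeled as a single scale factor. All of these representations are assumptions for illustration.

```python
import numpy as np

def pixel_mapping_lut(crf1, crf2, value_pairs):
    # Ratio of first relative illuminance to second relative illuminance per
    # matching point pair; the median is taken as the illuminance mapping.
    v1 = np.array([p[0] for p in value_pairs], dtype=np.int64)
    v2 = np.array([p[1] for p in value_pairs], dtype=np.int64)
    k = np.median(crf1[v1] / crf2[v2])

    # For every possible pixel value of the second camera, map its relative
    # illuminance onto the first camera's axis and invert the first CRF.
    lut = np.empty(256, dtype=np.uint8)
    for v in range(256):
        e1 = k * crf2[v]                       # mapped relative illuminance
        lut[v] = np.argmin(np.abs(crf1 - e1))  # closest first-camera pixel value
    return lut
```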
  • In one embodiment, the first calibration image and the second calibration image include a calibration target with different regions in the same scene, and determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: determining the first-region pixel values corresponding to the regions of the calibration target in the first calibration image; determining the second-region pixel values corresponding to the regions of the calibration target in the second calibration image; and determining the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first-region pixel value and the second-region pixel value of the same region of the calibration target.
  • The calibration target is preset in the scene that the first camera and the second camera both shoot; it is divided into different regions, and each region may be provided with a corresponding color. The calibration target can be chosen according to actual needs, such as a color card or a gray-scale card.
  • When the pixel mapping relationship between the first camera and the second camera is determined after the first camera response function and the second camera response function have been obtained, and the first calibration image and the second calibration image include a calibration target with different regions in the same scene, that is, both cameras capture the calibration target in the scene, the electronic device determines, in the first calibration image, the first-region pixel values corresponding to the regions of the calibration target, and, in the second calibration image, the second-region pixel values corresponding to the regions of the calibration target. After obtaining the first-region pixel values and the second-region pixel values, the electronic device obtains the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first-region pixel value and the second-region pixel value of the same region of the calibration target.
  • Since the calibration target is divided into multiple regions, the electronic device can determine, for each region, the correspondence between the first-region pixel value in the first calibration image and the second-region pixel value in the second calibration image, and thereby the illuminance mapping relationship between the first camera and the second camera; for example, the illuminance mapping relationship is obtained from the ratio of the first-region pixel value to the second-region pixel value, and the pixel mapping relationship between the first camera and the second camera is determined based on the illuminance mapping relationship.
  • each area in the calibration target has a preset corresponding solid color.
  • a solid color refers to a color or hue that is not mixed with other hues.
  • Each area in the calibration target has a preset corresponding solid color, and the colors between each area can be the same or different.
  • Each area in the calibration target has a corresponding solid color, which can ensure that the color in each area is pure and uniform, and can improve the accuracy of determining the pixel value of the area, thereby ensuring the accuracy of determining the pixel mapping relationship, which is conducive to improving the effect of image alignment.
  • the calibration target can be a grayscale card, a color scale card, a color scale diagram, and the like.
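  • A minimal sketch of this calibration-target variant: each solid-color region of the target is averaged in both calibration images, and the region-wise ratios give the illuminance mapping. The region coordinates and the use of a single median ratio are assumptions for illustration.

```python
import numpy as np

def region_illuminance_ratio(img1, img2, regions):
    """regions: list of (x, y, w, h) patches of the calibration target."""
    ratios = []
    for x, y, w, h in regions:
        p1 = img1[y:y + h, x:x + w].mean()  # first-region pixel value
        p2 = img2[y:y + h, x:x + w].mean()  # second-region pixel value
        ratios.append(p1 / p2)
    return np.median(ratios)  # illuminance mapping as a single scale factor
```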
  • In one embodiment, performing pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain the mapped image corresponding to the second image includes: respectively determining the original pixel value of each pixel in the second image; performing pixel value mapping on each original pixel value based on the pixel mapping relationship between the first camera and the second camera to obtain the mapped pixel value corresponding to each pixel in the second image; and updating the second image based on the mapped pixel values to obtain the mapped image corresponding to the second image.
  • The original pixel value is the pixel value of the second image as captured by the second camera, before pixel mapping; the mapped pixel value is the pixel value obtained by mapping the original pixel value through the pixel mapping relationship; and the mapped image is the second image updated with the mapped pixel values, that is, the result obtained after pixel mapping is performed on the second image through the pixel mapping relationship.
  • Specifically, the electronic device determines the original pixel value of each pixel in the second image; for example, the electronic device can traverse the pixels of the second image to obtain the original pixel value of each pixel. The electronic device further acquires the pixel mapping relationship between the first camera and the second camera and performs pixel value mapping on each original pixel value based on it, that is, each original pixel value is mapped into the color space of the first image according to the pixel mapping relationship, yielding the mapped pixel value of each pixel in the second image.
  • The electronic device updates the second image based on the obtained mapped pixel values; specifically, the pixel values of the corresponding pixels in the second image may be updated with the mapped pixel values to generate the mapped image corresponding to the second image. This maps the second image into the color space of the first image and overcomes the poor alignment accuracy caused by image information structures that are similar but have inconsistent gradients due to different information sources, so that the subsequent fusion with the first image is improved.
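  • Since the pixel mapping for an 8-bit second image reduces to a 256-entry lookup table (such as the hypothetical lut sketched earlier), the mapping step itself can be a single call:

```python
import cv2

# second_image: 8-bit single-channel image from the second camera (assumed);
# every original pixel value is replaced by its mapped pixel value.
mapped_image = cv2.LUT(second_image, lut)
```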
  • In one embodiment, aligning the first image with the mapped image corresponding to the second image includes: respectively performing distortion correction on the first image and the mapped image corresponding to the second image to obtain a first distortion-corrected image and a second distortion-corrected image; respectively performing stereo correction on the first distortion-corrected image and the second distortion-corrected image to obtain a first corrected image and a second corrected image; and performing grid alignment on the first corrected image and the second corrected image.
  • the distortion correction is used to correct the image distortion caused by the lens distortion phenomenon, and specifically includes correcting radial distortion, tangential distortion, and the like.
  • Stereo correction is used to ensure that the image planes of the two cameras are parallel, correcting the images to be coplanar and row-aligned.
  • After stereo correction, the optical axes of the cameras are parallel and the image rows are aligned, which helps reduce the search range of the subsequent grid alignment. Since the scene in an image is not a single plane but contains multiple planes, alignment over the entire image cannot guarantee complete alignment; therefore, the grid alignment method divides the image into multiple small grids and performs alignment within each grid to achieve the alignment effect.
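  • A minimal sketch of grid alignment under assumptions: both inputs are float32 single-channel images already roughly row-aligned by the stereo correction, and each grid cell of the mapped image is shifted by the translation found with phase correlation (one simple choice of per-grid alignment; the patent does not prescribe it).

```python
import cv2
import numpy as np

def grid_align(ref, mov, grid=(8, 8)):
    h, w = ref.shape
    gh, gw = h // grid[0], w // grid[1]
    out = np.zeros_like(mov)
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * gh, (i + 1) * gh)
            xs = slice(j * gw, (j + 1) * gw)
            # Translation of the moving tile relative to the reference tile.
            (dx, dy), _ = cv2.phaseCorrelate(ref[ys, xs], mov[ys, xs])
            m = np.float32([[1, 0, -dx], [0, 1, -dy]])
            out[ys, xs] = cv2.warpAffine(mov[ys, xs], m, (gw, gh))
    return out
```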
  • the electronic device obtains calibration parameters corresponding to the first camera and the second camera respectively, and performs distortion correction and stereo correction through the calibration parameters corresponding to the first camera and the second camera respectively.
  • the calibration parameters may specifically be camera parameters obtained by pre-calibration of the two cameras, and specifically include internal parameters, external parameters, and distortion parameters.
  • the electronic device performs distortion correction on the first image and the mapped image corresponding to the second image, respectively, to obtain the first distortion-corrected image and the second distortion-corrected image, so as to overcome radial distortion, tangential distortion, and other distortion problems present in those images and improve image quality.
  • the electronic device performs stereo-correction on the first distortion-corrected image and the second distortion-corrected image respectively.
  • specifically, the first distortion-corrected image and the second distortion-corrected image can be stereo-corrected based on the Bouguet correction principle to obtain the first corrected image and the second corrected image, such that the image planes of the first corrected image and the second corrected image are parallel, the optical axes are perpendicular to the image planes, and the epipoles are at infinity.
  • the electronic device then performs grid alignment on the obtained first corrected image and the second corrected image to achieve alignment of the captured image of the first camera and the captured image of the second camera.
  • the Bouguet correction principle decomposes the rotation and translation matrices solved by OpenCV into rotation and translation matrices that rotate each of the left and right cameras by half.
  • the principle of decomposition is to minimize the distortion caused by the reprojection of the left and right images, and to maximize the common area of the left and right views.
  • the rotation matrix of the right image plane relative to the left image plane is decomposed into two matrices, Rl and Rr, which are used as the composite rotation matrices of the left and right cameras; rotating each camera by half makes the optical axes of the left and right cameras parallel.
  • the imaging planes of the left and right cameras are parallel, but the baseline is not parallel to the imaging plane.
  • a transformation matrix Rrect is then constructed so that the baseline is parallel to the imaging plane; the construction uses the offset matrix T of the right camera relative to the left camera.
  • the overall rotation matrix of the left and right cameras is obtained by multiplying the composite rotation matrix and the transformation matrix.
  • the left and right camera coordinate systems are multiplied by their respective overall rotation matrices to make the main optical axes of the left and right cameras parallel, and the image plane is parallel to the baseline.
  • the pre-calibrated camera parameters are used to sequentially perform distortion correction and stereo correction on the first image and the mapped image corresponding to the second image, so as to overcome the distortion introduced by the cameras and reduce the distortion of the original images; after correction, the planes of the images captured by the two cameras are parallel, the optical axes are perpendicular to the image planes, and the epipoles are at infinity. Grid alignment is then performed on the first corrected image and the second corrected image obtained after correction to ensure the alignment effect.
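  • A minimal OpenCV sketch of the distortion correction and Bouguet-style stereo correction steps described above; K1, D1, K2, D2, R, and T stand for the pre-calibrated intrinsics, distortion coefficients, and relative rotation/translation, and are assumptions of this illustration:

```python
import cv2
import numpy as np

def rectify_pair(img1, img2, K1, D1, K2, D2, R, T):
    """Undistort and stereo-rectify an image pair from pre-calibrated
    parameters (a sketch, not the patent's exact procedure)."""
    h, w = img1.shape[:2]
    # stereoRectify implements the Bouguet procedure: each camera is
    # rotated by "half" of R so the optical axes become parallel and
    # the image rows are aligned.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
    # The maps combine distortion correction and rectification, so one
    # remap per image performs both corrections at once.
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    rect1 = cv2.remap(img1, m1x, m1y, cv2.INTER_LINEAR)
    rect2 = cv2.remap(img2, m2x, m2y, cv2.INTER_LINEAR)
    return rect1, rect2
```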
  • performing grid alignment on the first corrected image and the second corrected image includes: respectively performing grid division on the first corrected image and the second corrected image to obtain each first grid corresponding to the first corrected image and each second grid corresponding to the second corrected image; respectively performing grid feature point detection on each first grid and each second grid to obtain the first grid feature points corresponding to the first grids and the second grid feature points corresponding to the second grids; and performing image transformation on the first corrected image and the second corrected image based on the first grid feature points and the second grid feature points to align the first corrected image and the second corrected image.
  • the grid division is used to divide the image into multiple small grids, and align each of the small grids separately, so as to avoid the problem that the image cannot be aligned as a whole when there are multiple planes.
  • Grid feature point detection is used to detect feature points in the grid to align the grid with the feature points.
  • when grid alignment is performed on the first corrected image and the second corrected image, the electronic device respectively performs grid division on the two images to obtain each first grid corresponding to the first corrected image and each second grid corresponding to the second corrected image.
  • the grid division parameters can be set according to actual needs, for example, the first corrected image and the second corrected image can be divided into N*N grids respectively.
  • after obtaining the grids, the electronic device performs grid feature point detection on each first grid and each second grid respectively. The feature point detection can be performed by algorithms such as FAST, SUSAN, SIFT, SURF, or LBP, to obtain the first grid feature points and the second grid feature points.
  • the electronic device performs image transformation on the first corrected image and the second corrected image based on each of the first grid feature points and the second grid feature points, so as to align the first corrected image and the second corrected image.
  • the electronic device can align each first grid with its corresponding second grid, so that multiple grid pairs can be aligned in parallel; each grid pair includes a first grid and a second grid that match each other. Specifically, after obtaining the first grid feature points corresponding to the first grids and the second grid feature points corresponding to the second grids, feature matching is performed based on the first grid feature points and the second grid feature points, so that the first grids and second grids are matched and the grid pairs are constructed.
  • mismatch removal processing is also performed; for example, mismatched feature point matching pairs within a grid pair can be removed by the RANSAC (Random Sample Consensus) algorithm.
  • the electronic device further calculates the homography matrix of each grid pair and performs perspective transformation on the first grid and the second grid in the grid pair based on the homography matrix, so as to align the first grid and the second grid in the pair; the aligned first image and the aligned second image are then obtained according to the alignment results of all grid pairs.
  • the image is divided into a plurality of small grids, and the small grids are aligned separately, so as to avoid the problem that the image cannot be aligned as a whole when there are multiple planes, and further improve the image alignment effect.
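  • A simplified sketch of the grid alignment just described, under the assumptions that cells are paired by position and that SIFT features with RANSAC filtering are used, as in the embodiment above (function and parameter names are illustrative):

```python
import cv2
import numpy as np

def grid_align(ref, src, n=4):
    """Divide both corrected images into an n x n grid and align each
    src cell to the corresponding ref cell with its own homography."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    out = src.copy()
    h, w = ref.shape[:2]
    ch, cw = h // n, w // n
    for i in range(n):
        for j in range(n):
            ys, xs = slice(i * ch, (i + 1) * ch), slice(j * cw, (j + 1) * cw)
            kp1, des1 = sift.detectAndCompute(ref[ys, xs], None)
            kp2, des2 = sift.detectAndCompute(src[ys, xs], None)
            if des1 is None or des2 is None:
                continue
            matches = matcher.match(des2, des1)  # query=src, train=ref
            if len(matches) < 4:
                continue  # a homography needs at least 4 correspondences
            pts2 = np.float32([kp2[m.queryIdx].pt for m in matches])
            pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])
            # RANSAC removes mismatched pairs before the homography is kept.
            H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 3.0)
            if H is not None:
                out[ys, xs] = cv2.warpPerspective(src[ys, xs], H, (cw, ch))
    return out
```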
  • the method further includes: constructing feature point matching pairs according to the first corrected feature points and the second corrected feature points, where the first corrected feature points are extracted from the first corrected image and the second corrected feature points are extracted from the second corrected image; determining projection parameters between the first corrected image and the second corrected image based on the offset parameters between the corrected feature points in each feature point matching pair; and performing projection alignment on the first corrected image and the second corrected image through the projection parameters to obtain a first projection-aligned image and a second projection-aligned image.
  • the first corrected feature point is extracted from the first corrected image
  • the second corrected feature point is extracted from the second corrected image.
  • a feature point detection algorithm, such as FAST, SUSAN, SIFT, SURF, or LBP, may be used to process the first corrected image and the second corrected image respectively to obtain the first corrected feature points and the second corrected feature points.
  • a feature point matching pair is constructed based on the extracted first correction feature point and the second correction feature point.
  • the feature point matching pair reflects the corresponding relationship between the correction feature points in the first correction image and the second correction image.
  • the feature point matching pairs can specifically be obtained by performing feature matching on the extracted first corrected feature points and second corrected feature points, and constructing pairs from the successfully matched first and second corrected feature points. That is, each feature point matching pair includes a first corrected feature point and a second corrected feature point that match each other, where the first corrected feature point comes from the first corrected image and the second corrected feature point comes from the second corrected image.
  • the offset parameter is used to characterize the degree of alignment between the corrected feature points in a feature point matching pair. If the corrected feature points in each matching pair are well aligned, the alignment effect of the corresponding first image and second image is also better. In a specific application, the offset parameter can be measured by the distance between the corrected feature points in the matching pair, such as the Euclidean distance. The projection parameters are used for image alignment; specifically, projection mapping can be performed on the two images through the projection parameters to achieve image alignment.
  • projection mapping may be performed on the second corrected image or the first corrected image by using the projection parameters, so as to project the second corrected image into the coordinate system of the first corrected image, or to project the first corrected image into the coordinate system of the second corrected image, thereby realizing projection alignment of the first image and the second image and obtaining the first projection-aligned image and the second projection-aligned image.
  • the electronic device constructs feature point matching pairs according to the first corrected feature points extracted from the first corrected image and the second corrected feature points extracted from the second corrected image.
  • the electronic device determines the offset parameters between the corrected feature points in each feature point matching pair; for example, the distances between the corrected feature points in each matching pair can be calculated separately, an image offset function can be constructed from the distances corresponding to the matching pairs, and the projection parameters can be determined by solving the image offset function.
  • the electronic device uses the projection parameters to perform projection alignment on the first corrected image and the second corrected image.
  • the electronic device may perform projection mapping on the first corrected image or the second corrected image by using the projection parameters, so as to align the first corrected image and the second corrected image. Since the projection parameters are determined according to the offset parameters between the corrected feature points in the feature point matching pairs, they can be dynamically calibrated according to the scene captured in the image, which reduces the influence of random errors and improves the effect of image alignment using the projection parameters.
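  • One plausible formalization of solving the image offset function (an assumption of this note, not quoted from the patent) is to choose the projection parameters, here written as a homography H, that minimize the summed Euclidean offsets over all feature point matching pairs:

$$H^{*} = \arg\min_{H} \sum_{i} \bigl\| \mathbf{x}^{(1)}_{i} - \pi\bigl(H\,\tilde{\mathbf{x}}^{(2)}_{i}\bigr) \bigr\|_{2}^{2}$$

where $\mathbf{x}^{(1)}_i$ and $\mathbf{x}^{(2)}_i$ are the first and second corrected feature points of the $i$-th matching pair, $\tilde{\mathbf{x}}$ denotes homogeneous coordinates, and $\pi$ converts back to inhomogeneous coordinates.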
  • in this case, performing grid alignment on the first corrected image and the second corrected image includes performing grid alignment on the first projection-aligned image and the second projection-aligned image, thereby realizing alignment of the first image and the second image.
  • feature point matching pairs are constructed from the first corrected feature points in the first corrected image and the second corrected feature points in the second corrected image, which ensures that the corrected feature points within each matching pair correspond to each other. The projection parameters are determined according to the offset parameters between the corrected feature points in the matching pairs and can be dynamically calibrated according to the captured scene, which reduces the influence of random errors and improves the effect of image alignment using the projection parameters. The first projection-aligned image and the second projection-aligned image are then grid-aligned, so as to avoid the problem that the images cannot be aligned as a whole when multiple planes are present, further improving the alignment effect.
  • an image processing method is provided, and the image processing method is applied in the process of aligning the RGB image captured by the RGB camera of the mobile phone and the NIR image captured by the NIR camera.
  • the first image is an RGB image captured by an RGB camera
  • the second image is an NIR image captured by an NIR camera.
  • CRF correction is performed on the RGB image and the NIR image; that is, pixel mapping is performed on the NIR image according to the pixel mapping relationship determined by the first camera response function of the RGB camera and the second camera response function of the NIR camera, and the mapped image corresponding to the NIR image is aligned with the RGB image.
  • the pre-calibrated camera parameters are used to perform distortion correction on the CRF-corrected RGB image and the CRF-corrected NIR image, respectively, to obtain the distortion-corrected RGB image and the distortion-corrected NIR image.
  • the distortion-corrected RGB image and the distortion-corrected NIR image are then stereo-corrected respectively to obtain a stereo-corrected RGB image and a stereo-corrected NIR image.
  • grids are then constructed in turn, SIFT features are extracted, feature matching is performed and false matches are removed, and the homography matrices and perspective transformations are calculated to obtain the aligned RGB image and the aligned NIR image.
  • camera calibration is used to calibrate the internal and external parameters and distortion parameters of the camera sensor.
  • RGB cameras only need to calibrate internal parameters and distortion parameters
  • NIR cameras need to calibrate external parameters in addition to internal parameters and distortion parameters.
  • as shown in FIG. 7, when calibrating the camera parameters, it is first necessary to capture calibration board image pairs, i.e., an RGB image and an NIR image.
  • the calibration board images are taken indoors, where the light intensity is weak, so fill light is needed throughout the shooting; the calibration board is then detected for calibration.
  • the RGB camera and the NIR camera were calibrated by Zhang Zhengyou's calibration method, respectively, and the calibration parameters of the RGB camera and the NIR camera were obtained.
  • the obtained calibration parameters can be stored for subsequent image correction processing.
  • the camera is used to capture images and generally needs to be calibrated before leaving the factory.
  • the calibration of RGB camera and NIR camera can be achieved by single camera calibration.
  • Single-camera calibration refers to determining the values of the internal and external parameters of a single camera.
  • the internal parameters of a single camera may include f_x, f_y, c_x, and c_y, where f_x represents the focal length expressed in pixel units along the x-axis of the image coordinate system, f_y represents the focal length expressed in pixel units along the y-axis of the image coordinate system, and (c_x, c_y) represents the coordinates of the principal point of the image plane, the principal point being the intersection of the optical axis and the image plane.
  • the image coordinate system is a coordinate system established based on the two-dimensional image captured by the camera, and is used to specify the position of the object in the captured image.
  • the origin of the (x, y) image coordinate system is located at the intersection of the camera optical axis and the imaging plane, i.e., at the principal point (c_x, c_y), and its unit is a unit of length (meters); the origin of the (u, v) pixel coordinate system is at the upper left corner of the image, and its unit is a count of pixels. (x, y) characterizes the perspective projection of the object from the camera coordinate system to the image coordinate system, and (u, v) characterizes pixel coordinates.
  • the conversion relationship between (x, y) and (u, v) is as formula (1):
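  • The body of formula (1) did not survive extraction; the standard conversion between image coordinates and pixel coordinates, assuming a physical pixel size of dx by dy, is:

$$u = \frac{x}{dx} + c_x,\qquad v = \frac{y}{dy} + c_y,\quad\text{i.e.}\quad \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & c_x \\ 0 & 1/dy & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.$$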
  • Perspective projection refers to projecting a body onto the projection surface by the central projection method to obtain a single-plane projection image that is closer to the actual visual effect.
  • the external parameters of a single camera include a rotation matrix and a translation matrix that convert the coordinates in the world coordinate system to the coordinates in the camera coordinate system.
  • the world coordinate system reaches the camera coordinate system through rigid body transformation, and the camera coordinate system reaches the image coordinate system through perspective projection transformation.
  • Rigid body transformation refers to rotating and translating a geometric object in three-dimensional space without deforming it.
  • the rigid body transformation is given by formula (2), in which:
  • X c represents the camera coordinate system
  • X represents the world coordinate system
  • R represents the rotation matrix from the world coordinate system to the camera coordinate system
  • T represents the translation matrix from the world coordinate system to the camera coordinate system.
  • the distance between the origin of the world coordinate system and the origin of the camera coordinate system is jointly controlled by the components in the three axis directions of x, y, and z, and has three degrees of freedom.
  • R is the composition of the rotations about the X, Y, and Z axes respectively.
  • t_x represents the translation amount in the x-axis direction, t_y represents the translation amount in the y-axis direction, and t_z represents the translation amount in the z-axis direction.
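  • The body of formula (2) is likewise missing; from the symbol definitions above, its standard form is:

$$X_c = R\,X + T,\qquad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix},$$

with $R = R_x(\theta_x)\,R_y(\theta_y)\,R_z(\theta_z)$ the composition of the rotations about the three axes.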
  • the world coordinate system is the absolute coordinate system of the objective three-dimensional space, which can be established at any position.
  • the world coordinate system can be established with the upper left corner of the calibration plate as the origin, the calibration plate plane as the XY plane, and the Z axis perpendicular to the calibration plate plane upward.
  • the camera coordinate system takes the optical center of the camera as the origin of the coordinate system, takes the optical axis of the camera as the Z axis, and the X axis and the Y axis are respectively parallel to the X axis and the Y axis of the image coordinate system.
  • the principal point of the image coordinate system is the intersection of the optical axis with the image plane.
  • the image coordinate system takes the principal point as the origin.
  • the pixel coordinate system has its origin at the upper left corner of the image plane.
  • the distortion parameters of the camera are determined according to the intrinsic and extrinsic parameters of the camera.
  • a Brown polynomial can be used as the distortion model; the Brown model includes 5 parameters: 3 radial distortion parameters and 2 tangential distortion parameters.
  • alternatively, surface function fitting can be performed block by block to obtain the distortion parameters.
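  • For reference, the 5-parameter Brown model with radial coefficients k1, k2, k3 and tangential coefficients p1, p2 (the standard form, not quoted from the patent) maps an undistorted normalized point (x, y), with r^2 = x^2 + y^2, to:

$$\begin{aligned} x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2\,(r^2 + 2x^2),\\ y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2 p_2 x y. \end{aligned}$$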
  • CRF calibration includes two processes, CRF self-calibration and mutual calibration: self-calibration is used to calculate the relationship between real-world illuminance and the brightness of the RGB image or NIR image, and mutual calibration finds the pixel relationship between the RGB image and the NIR image according to the brightness-illuminance relationship obtained from self-calibration.
  • image pairs captured by the RGB camera and the NIR camera under different exposure time conditions are obtained, and CRF self-calibration and mutual calibration are performed to determine the pixel mapping relationship between the RGB camera and the NIR camera.
  • in a high dynamic range scene containing both over-exposed and over-dark areas, 5 sets of images with different exposure times are taken with the RGB camera and the NIR camera respectively.
  • the RGB images are RGB_1~RGB_5, and the NIR images are NIR_1~NIR_5.
  • the exposure time is modified through the gain value (signal gain) and shutter value (shutter speed) of the mobile phone, decreasing in multiples of 2.
  • the maximum exposure time of the RGB camera and the NIR camera can be inconsistent.
  • the exposure times are (EV-2, EV-1, EV0, EV+1, EV+2).
  • the alignment method adopts median threshold bitmaps, and the new aligned images are RGB'_1~RGB'_5 and NIR'_1~NIR'_5.
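  • A minimal sketch of the median-threshold-bitmap alignment step using OpenCV's AlignMTB; the function name and in-place usage pattern are assumptions of this illustration:

```python
import cv2

def align_exposure_bracket(images):
    """Align one exposure bracket (e.g. RGB_1~RGB_5) in place by shifting
    each frame so that their median threshold bitmaps agree; MTB is robust
    to the large brightness differences between exposures."""
    align_mtb = cv2.createAlignMTB()
    align_mtb.process(images, images)  # common in-place usage pattern
    return images
```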
  • FIG. 9 is a schematic diagram of a camera response curve in one embodiment.
  • the abscissa is the image pixel value (0-255), and the ordinate is the relative illuminance value.
  • Curve 1 is the camera response curve of the RGB luminance channel
  • curve 2 is the camera response curve of the NIR image.
  • camera response functions for other color spaces can also be constructed, such as the V channel of HSV or the separated R, G, or B channels of RGB, which can be adjusted according to actual needs.
  • curve 3 is the camera response curve of the NIR image
  • curve 4 is the camera response curve of the R channel image in the RGB color space
  • curve 5 is the camera response curve of the B channel image in the RGB color space
  • curves 6 and 7, which overlap, are the camera response curves corresponding to the G1 channel image and the G2 channel image in the RGGB Bayer pattern of the RAW image.
  • the curve 8 is the camera response curve of the NIR image
  • the curve 9 is the camera response curve corresponding to the V channel image in the HSV color space.
  • the camera response curve can only give the relationship between the image pixel value and the relative illuminance value; the relationship between relative illuminance and true illuminance must be obtained through CRF mutual calibration.
  • the true illuminance needs to be measured with an illuminometer.
  • the relationship between the illuminance values in the response curves of the RGB camera and the NIR camera is calculated.
  • the matching points between the RGB image and the NIR image can be extracted to obtain the pixel value of the matching point and its relative illuminance value in the response area, and then the illuminance mapping relationship between the two can be obtained.
  • a gray-scale card can be placed in the scene when the RGB camera and the NIR camera capture images; the gray-scale card regions are detected to obtain the pixel value of each region of the gray-scale card in the RGB image and the NIR image, and the pixel values are then divided to obtain the illuminance mapping relationship.
  • the pixel mapping relationship between the RGB image and the NIR image pixel value is established based on the illuminance mapping relationship determined by the mutual calibration, and the pixel mapping relationship describes the corresponding relationship between the NIR brightness value and the brightness value of a certain RGB channel.
  • the NIR image can be corrected to the luminance domain of the RGB image, which can solve the problem of similar structures but inconsistent gradients when the images acquire image information due to different sensors, and can improve the image alignment effect.
  • the pixel mapping relationship can be stored in a table and saved offline after calibration; it only needs to be calibrated once. In use, only a table lookup is required, which can effectively improve the efficiency of image processing.
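  • A minimal NumPy sketch of combining the self-calibrated response curves with the gray-card mutual calibration to build the stored lookup table; the names crf_rgb, crf_nir, and the patch-value inputs are assumptions of this illustration:

```python
import numpy as np

def build_pixel_lut(crf_rgb, crf_nir, rgb_patch_vals, nir_patch_vals):
    """Build the 256-entry NIR -> RGB-luminance lookup table.

    crf_rgb, crf_nir: arrays of shape (256,) giving the relative
    illuminance for each pixel value (the self-calibrated response
    curves). rgb_patch_vals, nir_patch_vals: integer pixel values of
    the same gray-card regions in the two images (hypothetical
    measurements from the region detection step)."""
    rgb_idx = np.asarray(rgb_patch_vals, dtype=int)
    nir_idx = np.asarray(nir_patch_vals, dtype=int)
    # Mutual calibration: the ratio of relative illuminances over the
    # gray-card regions relates the two response curves.
    k = np.mean(crf_rgb[rgb_idx] / crf_nir[nir_idx])
    lut = np.empty(256, dtype=np.uint8)
    for v in range(256):
        target = k * crf_nir[v]  # illuminance expressed in the RGB domain
        lut[v] = int(np.abs(crf_rgb - target).argmin())  # invert RGB curve
    return lut
```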
  • the internal and external parameters and distortion parameters calibrated for the RGB camera and the NIR camera are used to perform distortion correction and stereo correction on the images, so that the image rows are corrected into coplanar row alignment. With coplanar row alignment, the camera optical axes are parallel and the image rows are aligned, which helps reduce the later grid alignment search range.
  • the grid alignment method is used to divide the image into multiple small grids, and the alignment method based on SIFT features is used in the small grids to achieve the alignment effect.
  • the stereo-corrected RGB image and the stereo-corrected NIR image are divided into N*N grids, and each grid is traversed.
  • FIG. 12 is a flowchart of a method for determining a pixel mapping relationship of a binocular camera in one embodiment.
  • the method for determining the pixel mapping relationship of the binocular camera in this embodiment is described by taking the electronic device in FIG. 3 as an example.
  • the method includes processes 1202 to 1208.
  • Process 1202: Obtain a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images obtained by shooting with the first camera of the binocular camera under the same scene and different exposure time conditions, and the second calibration image group includes second calibration images captured by the second camera of the binocular camera under the same scene and different exposure time conditions.
  • that is, the images in the first calibration image group and the second calibration image group are both captured by the corresponding cameras for the same scene; the exposure times of the first calibration images in the first calibration image group differ from one another, and the exposure times of the second calibration images in the second calibration image group differ from one another.
  • the shooting scenes corresponding to the first calibration image group and the second calibration image group may be high dynamic range scenes including overexposed and overdark areas, so as to ensure that the determined pixel mapping relationship can be applied to high dynamic range scenes, Guarantees the applicable scope of the pixel mapping relationship.
  • the number of first calibration images and second calibration images and the corresponding exposure times can be flexibly set according to actual needs; for example, the number of first calibration images and second calibration images can each be 5, with the corresponding exposure times increased successively, and the respective exposure times of the first calibration images and the second calibration images may differ.
  • the exposure time can be adjusted by modifying the signal gain (gain value) and shutter speed (shutter value) of the electronic device.
  • the electronic device can first align the calibration images in the first calibration image group and the second calibration image group, for example by the median threshold bitmap alignment method, and determine the corresponding camera response functions based on the first calibration images and the second calibration images after median threshold alignment.
  • Process 1204: a first camera response function corresponding to the first camera is determined based on each first calibration image.
  • the camera response function is used to represent the corresponding relationship between the brightness of the image captured by the camera and the illumination in the real world.
  • the brightness or illuminance observed in the real world is constant and will not change with different cameras, and there is a certain correspondence between the brightness of the image captured by the camera and the illuminance in the real world.
  • the electronic device may obtain the first camera response function corresponding to the first camera by using the Debevec algorithm based on the luminance channel image of each first calibration image.
  • Process 1206: a second camera response function corresponding to the second camera is determined based on each second calibration image.
  • the electronic device can obtain the second camera response function corresponding to the second camera through the Debevec algorithm based on the luminance channel image of each second calibration image.
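  • A minimal sketch of this self-calibration step using OpenCV's Debevec implementation; it assumes 8-bit 3-channel calibration frames of the same scene (a single luminance channel could be stacked to three channels with np.dstack first) and their exposure times in seconds:

```python
import cv2
import numpy as np

def calibrate_crf(calibration_images, exposure_times):
    """Recover a camera response curve from one calibration image group
    (same scene, different exposure times) with the Debevec algorithm."""
    times = np.asarray(exposure_times, dtype=np.float32)
    # process() returns the response: relative illuminance per pixel value.
    return cv2.createCalibrateDebevec().process(calibration_images, times)
```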
  • Process 1208: the pixel mapping relationship between the first camera and the second camera is determined according to the first camera response function and the second camera response function.
  • the pixel mapping relationship reflects the mapping between the pixel values of the pixels in the image captured by the first camera and the pixel values of the pixels in the image captured by the second camera when the two cameras shoot the same scene at the same time; that is, through the pixel mapping relationship, the images captured by the first camera and the second camera can be mapped into each other's color space, for example mapping the image captured by the first camera into the color space corresponding to the image captured by the second camera.
  • the electronic device determines the pixel mapping relationship between the first camera and the second camera based on the first camera response function and the second camera response function. For example, the illuminance mapping relationship between the two cameras can be determined from the pixel values of matching points between the first calibration image and the second calibration image, together with the relative illuminance values those matching points yield under the first camera response function and the second camera response function; the pixel mapping relationship between the first camera and the second camera is then determined based on the illuminance mapping relationship.
  • in the above method for determining the pixel mapping relationship of a binocular camera, the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera are determined from the images captured by the binocular camera under the same scene and different exposure time conditions, and the pixel mapping relationship between the first camera and the second camera is determined based on the two camera response functions. Because the pixel mapping relationship is determined from the camera response functions of the two cameras, it can be used to map the second image captured by the second camera into the pixel space of the first image captured by the first camera, which solves the problem of similar image information structure but inconsistent gradients, ensures the accuracy of image alignment, and improves the image alignment effect.
  • acquiring the target channel images corresponding to the target color channel of each first calibration image includes: performing channel separation on the first calibration image to obtain separated channel images; and obtaining the target channel image corresponding to the target color channel according to the separated channel images.
  • acquiring the target channel image corresponding to the target color channel of each first calibration image includes: transforming the first calibration image into a target color space including the target color channel to obtain a target color space image; and obtaining the target channel image corresponding to the target color channel according to the target color space image.
  • determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: acquiring at least one matching point pair, where a matching point pair is obtained by position-matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; respectively determining the first point pixel value of the first matching point and the second point pixel value of the second matching point in the matching point pair; determining a first relative illuminance value according to the first point pixel value and the first camera response function; determining a second relative illuminance value according to the second point pixel value and the second camera response function; determining an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and determining the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
  • the first calibration image and the second calibration image include a calibration target with different regions in the same scene; determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: determining the first region pixel values corresponding to the regions of the calibration target in the first calibration image; determining the second region pixel values corresponding to the regions of the calibration target in the second calibration image; and determining the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel value and the second region pixel value of the same region of the calibration target.
  • each area in the calibration target has a preset corresponding solid color.
  • although the processes in FIGS. 4-8 and 12 are displayed in sequence according to the arrows, these processes are not necessarily executed in the sequence indicated by the arrows. Unless explicitly stated herein, there is no strict order to the execution of these processes, and they may be performed in other orders. Moreover, at least some of the processes in FIGS. 4-8 and 12 may include multiple sub-processes or stages, which are not necessarily executed or completed at the same time but may be executed at different times; their order of execution is not necessarily sequential, and they may be performed alternately with other processes or with sub-processes or stages of other processes.
  • FIG. 13 is a structural block diagram of an image processing apparatus 1300 according to an embodiment.
  • the image processing apparatus 1300 includes a to-be-processed image acquisition module 1302, a pixel mapping processing module 1304 and an image alignment processing module 1306, wherein:
  • the to-be-processed image acquisition module 1302 is used to acquire the to-be-processed first image and the second image; the first image is captured by the first camera, and the second image is captured by the second camera;
  • the pixel mapping processing module 1304 is configured to perform pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image; wherein the pixel mapping relationship is based on the pixel mapping relationship of the first camera.
  • the first camera response function and the second camera response function of the second camera are determined;
  • the image alignment processing module 1306 is configured to align the mapped image corresponding to the second image with the first image.
  • it further includes a calibration image group acquisition module, a first camera response function determination module, a second camera response function determination module, and a pixel mapping relationship determination module; wherein: the calibration image group acquisition module is used to acquire a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by the first camera in the same scene under different exposure time conditions and the second calibration image group including second calibration images captured by the second camera in the same scene under different exposure time conditions; the first camera response function determination module is used to determine the first camera response function corresponding to the first camera based on each first calibration image; the second camera response function determination module is used to determine the second camera response function corresponding to the second camera based on each second calibration image; and the pixel mapping relationship determination module is used to determine the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • the first camera is a visible light camera
  • the first camera response function determination module includes a target channel image acquisition module, a first feature point determination module, a channel brightness value determination module and a first camera response function acquisition module; wherein:
  • the target channel image acquisition module is used to acquire the target channel images of each first calibration image corresponding to the target color channel respectively;
  • the first feature point determination module is used to determine, in each target channel image, the first feature points corresponding to the same position in the same scene; the channel luminance value determination module is used to determine the channel luminance value of each first feature point corresponding to the target color channel; and the first camera response function obtaining module is used to determine the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points corresponding to the target color channel.
  • the target channel image acquisition module includes a channel separation module and a separated channel image processing module; wherein: the channel separation module is used to perform channel separation on the first calibration image to obtain each separated channel image; the separated channel image processing module , which is used to obtain the target channel image corresponding to the target color channel according to the separate channel images.
  • the target channel image acquisition module includes a target color space image acquisition module and a target color space image processing module; wherein: the target color space image acquisition module is used to transform the first calibration image to a target including the target color channel The color space is used to obtain the target color space image; the target color space image processing module is used to obtain the target channel image corresponding to the target color channel according to the target color space image.
  • the second camera is an infrared camera
  • the second camera response function determination module includes a second feature point determination module, a second feature point pixel determination module, and a second feature point pixel processing module; wherein: the second feature point determination module is used to respectively determine the second feature points corresponding to the same position in the same scene in each second calibration image; the second feature point pixel determination module is used to determine the pixel value of each second feature point; and the second feature point pixel processing module is used to determine the second camera response function corresponding to the second camera according to the pixel values of the second feature points.
  • the pixel mapping relationship determination module includes a matching point pair acquisition module, a matching point pair pixel determination module, a relative illuminance value determination module, an illuminance mapping relationship determination module, and an illuminance mapping relationship processing module; wherein: the matching point pair acquisition module is used to obtain at least one matching point pair, each matching point pair being obtained by feature-matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; the matching point pair pixel determination module is used to respectively determine the first point pixel value of the first matching point and the second point pixel value of the second matching point in the matching point pair; the relative illuminance value determination module is used to determine a first relative illuminance value according to the first point pixel value and the first camera response function, and a second relative illuminance value according to the second point pixel value and the second camera response function; the illuminance mapping relationship determination module is used to determine the illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and the illuminance mapping relationship processing module is used to determine the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
  • the first calibration image and the second calibration image include calibration targets with different regions in the same scene;
  • the pixel mapping relationship determination module includes a first region pixel determination module, a second region pixel determination module, and a region pixel analysis module; wherein: the first region pixel determination module is used to determine the first region pixel values corresponding to the regions of the calibration target in the first calibration image; the second region pixel determination module is used to determine the second region pixel values corresponding to the regions of the calibration target in the second calibration image; and the region pixel analysis module is used to determine the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel value and the second region pixel value of the same region of the calibration target.
  • each area in the calibration target has a preset corresponding solid color.
  • the pixel mapping processing module 1304 includes an original pixel determination module, a mapped pixel acquisition module, and an image update module; wherein: the original pixel determination module is used to determine the original pixel value of each pixel in the second image respectively; the mapped pixel acquisition module is used to perform pixel value mapping on each original pixel value based on the pixel mapping relationship between the first camera and the second camera, so as to obtain the mapped pixel value corresponding to each pixel in the second image; and the image update module is used to update the second image based on each mapped pixel value to obtain the mapped image corresponding to the second image.
  • the image alignment processing module 1306 includes a distortion correction module, a stereo correction module, and a grid alignment module; wherein: the distortion correction module is configured to perform distortion correction on the mapped images corresponding to the first image and the second image, respectively, obtaining a first distortion corrected image and a second distortion corrected image; a stereo correction module for performing stereo correction on the first distortion corrected image and the second distortion corrected image respectively to obtain the first corrected image and the second corrected image; grid alignment The module is used for grid-aligning the first corrected image and the second corrected image.
  • the grid alignment module includes a grid division module, a grid feature extraction module, and an image transformation module; wherein: the grid division module is used to perform grid division on the first corrected image and the second corrected image respectively, so as to obtain each first grid corresponding to the first corrected image and each second grid corresponding to the second corrected image; the grid feature extraction module is used to perform grid feature point detection on each first grid and each second grid respectively, so as to obtain the first grid feature points corresponding to the first grids and the second grid feature points corresponding to the second grids; and the image transformation module is used to perform image transformation on the first corrected image and the second corrected image based on the first grid feature points and the second grid feature points, so as to align the first corrected image and the second corrected image.
  • it also includes a matching pair building module, a projection parameter determination module, and a projection alignment module; wherein: the matching pair building module is used to build feature point matching pairs according to the first corrected feature points and the second corrected feature points, the first corrected feature points being extracted from the first corrected image and the second corrected feature points being extracted from the second corrected image; the projection parameter determination module is used to determine the projection parameters between the first corrected image and the second corrected image based on the offset parameters between the corrected feature points in each feature point matching pair; the projection alignment module is used to perform projection alignment on the first corrected image and the second corrected image through the projection parameters to obtain a first projection-aligned image and a second projection-aligned image; and the grid alignment module is also used to perform grid alignment on the first projection-aligned image and the second projection-aligned image.
  • FIG. 14 is a structural block diagram of an apparatus 1400 for determining a pixel mapping relationship of a binocular camera according to an embodiment.
  • the device 1400 for determining the pixel mapping relationship of the binocular camera includes:
  • the calibration image group obtaining module 1402 is used to obtain the first calibration image group and the second calibration image group;
  • the first calibration image group includes the first calibration image obtained by the first camera in the binocular camera under the same scene and different exposure time conditions.
  • the second calibration image group includes second calibration images obtained by shooting the second camera in the binocular camera under the same scene and different exposure time conditions;
  • a first camera response function determination module 1404 configured to determine a first camera response function corresponding to the first camera based on each first calibration image
  • a second camera response function determining module 1406, configured to determine a second camera response function corresponding to the second camera based on each second calibration image
  • the pixel mapping relationship determining module 1408 is configured to determine the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  • the first camera is a visible light camera
  • the first camera response function determination module 1404 includes a target channel image acquisition module, a first feature point determination module, a channel brightness value determination module, and a first camera response function acquisition module; wherein :
  • the target channel image acquisition module is used to acquire the target channel images of the first calibration images corresponding to the target color channel respectively; the first feature point determination module is used to determine, in each target channel image, the first feature points corresponding to the same position in the same scene; the channel luminance value determination module is used to determine the channel luminance value of each first feature point corresponding to the target color channel; and the first camera response function obtaining module is used to determine the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points corresponding to the target color channel.
  • the target channel image acquisition module includes a channel separation module and a separated channel image processing module; wherein: the channel separation module is used to perform channel separation on the first calibration image to obtain each separated channel image; the separated channel image processing module , which is used to obtain the target channel image corresponding to the target color channel according to the separate channel images.
  • the target channel image acquisition module includes a target color space image acquisition module and a target color space image processing module; wherein: the target color space image acquisition module is used to transform the first calibration image to a target including the target color channel The color space is used to obtain the target color space image; the target color space image processing module is used to obtain the target channel image corresponding to the target color channel according to the target color space image.
  • the second camera is an infrared camera
  • the second camera response function determination module 1406 includes a second feature point determination module, a second feature point pixel determination module and a second feature point pixel processing module; wherein: the second feature point The point determination module is used to respectively determine the second feature points corresponding to the same position in each second calibration image in the same scene; the second feature point pixel determination module is used to determine the pixel value of each second feature point; the second The feature point pixel processing module is configured to determine the second camera response function corresponding to the second camera according to the pixel value of each second feature point.
  • the pixel mapping relationship determination module 1408 includes a matching point pair acquisition module, a matching point pair pixel determination module, a relative illuminance value determination module, an illuminance mapping relationship determination module, and an illuminance mapping relationship processing module; wherein: the matching point pair acquisition module is used to obtain at least one matching point pair, each matching point pair being obtained by feature-matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; the matching point pair pixel determination module is used to respectively determine the first point pixel value of the first matching point and the second point pixel value of the second matching point in the matching point pair; the relative illuminance value determination module is used to determine a first relative illuminance value according to the first point pixel value and the first camera response function, and a second relative illuminance value according to the second point pixel value and the second camera response function; the illuminance mapping relationship determination module is used to determine the illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and the illuminance mapping relationship processing module is used to determine the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
  • the first calibration image and the second calibration image include calibration targets with different regions in the same scene;
  • the pixel mapping relationship determination module 1408 includes a first region pixel determination module, a second region pixel determination module, and a region pixel analysis module; wherein: the first region pixel determination module is used to determine the first region pixel values corresponding to the regions of the calibration target in the first calibration image; the second region pixel determination module is used to determine the second region pixel values corresponding to the regions of the calibration target in the second calibration image; and the region pixel analysis module is used to determine the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel value and the second region pixel value of the same region of the calibration target.
  • each area in the calibration target has a preset corresponding solid color.
  • the division of the modules in the image processing apparatus or the apparatus for determining the pixel mapping relationship of the binocular camera is only for illustration; in practice, the apparatus can be divided into different modules as required to complete all or part of its functions.
  • each module in the image processing device or the device for determining the pixel mapping relationship of the binocular camera can be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • FIG. 15 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • the electronic device includes a processor and a memory connected by a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory may include non-volatile storage media and internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by the processor to implement the image processing method or the method for determining the pixel mapping relationship of a binocular camera provided by the following embodiments.
  • the internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium.
  • the electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, a wearable device, and the like.
  • each module in the image processing apparatus or the apparatus for determining a pixel mapping relationship of a binocular camera provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or server.
  • the program modules constituted by the computer program can be stored on the memory of the electronic device.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the method for determining the pixel mapping relationship of a binocular camera.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Abstract

An image processing method comprises: acquiring a first image to be processed and a second image to be processed, the first image being captured by a first camera, and the second image being captured by a second camera; performing pixel mapping on the second image on the basis of a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and aligning the mapped image corresponding to the second image with the first image.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 2020112608189, filed with the Chinese Patent Office on November 12, 2020 and entitled "Image processing method, apparatus, electronic device and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the technical field of image processing, and in particular to an image processing method, apparatus, electronic device, and computer-readable storage medium, as well as a method, apparatus, electronic device, and computer-readable storage medium for determining a pixel mapping relationship of a binocular camera.
BACKGROUND
Various electronic devices such as mobile phones and tablet computers have become indispensable tools in daily life, and photography has become an important function of electronic devices, meeting people's need to record memorable moments. As electronic devices evolve, more and more of them are equipped with multiple cameras to satisfy growing shooting demands.
At present, to enhance the quality of images captured by electronic devices, images captured by multiple cameras are often aligned and then fused, so that the information collected by the cameras is combined, which can effectively enhance image quality. However, because their information sources differ, images captured by different cameras may have similar image information structures but inconsistent gradients, resulting in poor image alignment accuracy and a limited alignment effect.
SUMMARY OF THE INVENTION
Embodiments of the present application provide an image processing method, apparatus, electronic device, and computer-readable storage medium, as well as a method, apparatus, electronic device, and computer-readable storage medium for determining a pixel mapping relationship of a binocular camera, which can improve image alignment accuracy.
An image processing method includes:
acquiring a first image and a second image to be processed, the first image being captured by a first camera and the second image being captured by a second camera;
performing pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
aligning the mapped image corresponding to the second image with the first image.
An image processing apparatus includes:
a to-be-processed image acquisition module, configured to acquire a first image and a second image to be processed, the first image being captured by a first camera and the second image being captured by a second camera;
a pixel mapping processing module, configured to perform pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
an image alignment processing module, configured to align the mapped image corresponding to the second image with the first image.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps:
acquiring a first image and a second image to be processed, the first image being captured by a first camera and the second image being captured by a second camera;
performing pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
aligning the mapped image corresponding to the second image with the first image.
A computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the following steps:
acquiring a first image and a second image to be processed, the first image being captured by a first camera and the second image being captured by a second camera;
performing pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
aligning the mapped image corresponding to the second image with the first image.
According to the above image processing method, apparatus, electronic device, and storage medium, pixel mapping is performed on the second image captured by the second camera according to the pixel mapping relationship determined from the first camera response function of the first camera and the second camera response function of the second camera, and the resulting mapped image corresponding to the second image is aligned with the first image. During image processing, applying this pixel mapping maps the second image into the pixel space of the first image by means of the camera response functions, which resolves the problem of similar image information structures with inconsistent gradients, ensures image alignment accuracy, and thereby improves the image alignment effect.
A method for determining a pixel mapping relationship of a binocular camera includes:
acquiring a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by a first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group including second calibration images captured by a second camera of the binocular camera in the same scene under different exposure times;
determining a first camera response function corresponding to the first camera based on the first calibration images;
determining a second camera response function corresponding to the second camera based on the second calibration images; and
determining a pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
An apparatus for determining a pixel mapping relationship of a binocular camera includes:
a calibration image group acquisition module, configured to acquire a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by a first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group including second calibration images captured by a second camera of the binocular camera in the same scene under different exposure times;
a first camera response function determination module, configured to determine a first camera response function corresponding to the first camera based on the first calibration images;
a second camera response function determination module, configured to determine a second camera response function corresponding to the second camera based on the second calibration images; and
a pixel mapping relationship determination module, configured to determine a pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps:
acquiring a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by a first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group including second calibration images captured by a second camera of the binocular camera in the same scene under different exposure times;
determining a first camera response function corresponding to the first camera based on the first calibration images;
determining a second camera response function corresponding to the second camera based on the second calibration images; and
determining a pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
A computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the following steps:
acquiring a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by a first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group including second calibration images captured by a second camera of the binocular camera in the same scene under different exposure times;
determining a first camera response function corresponding to the first camera based on the first calibration images;
determining a second camera response function corresponding to the second camera based on the second calibration images; and
determining a pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
According to the above method, apparatus, electronic device, and storage medium for determining a pixel mapping relationship of a binocular camera, the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera of the binocular camera are respectively determined from images captured by the binocular camera in the same scene under different exposure times, and the pixel mapping relationship between the first camera and the second camera is determined based on the two camera response functions. Since the pixel mapping relationship is determined from the first camera response function of the first camera and the second camera response function of the second camera, the second image captured by the second camera of the binocular camera can be mapped, by means of the camera response functions, into the pixel space of the first image captured by the first camera. This resolves the problem of similar image information structures with inconsistent gradients, ensures image alignment accuracy, and thereby improves the image alignment effect.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description are merely some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic analysis diagram of RGB image formation.
FIG. 2 is a schematic analysis diagram of NIR image formation.
FIG. 3 is a diagram of an application environment of an image processing method or a method for determining a pixel mapping relationship of a binocular camera in one embodiment.
FIG. 4 is a flowchart of an image processing method in one embodiment.
FIG. 5 is a flowchart of determining a first camera response function in one embodiment.
FIG. 6 is a flowchart of an image processing method in another embodiment.
FIG. 7 is a flowchart of camera calibration in one embodiment.
FIG. 8 is a flowchart of calibrating a CRF in one embodiment.
FIG. 9 is a schematic diagram of a camera response curve in one embodiment.
FIG. 10 is a schematic diagram of a camera response curve in another embodiment.
FIG. 11 is a schematic diagram of a camera response curve in yet another embodiment.
FIG. 12 is a flowchart of a method for determining a pixel mapping relationship of a binocular camera in one embodiment.
FIG. 13 is a structural block diagram of an image processing apparatus in one embodiment.
FIG. 14 is a structural block diagram of an apparatus for determining a pixel mapping relationship of a binocular camera in one embodiment.
FIG. 15 is a diagram of the internal structure of a computer device in one embodiment.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present application, not to limit it.
At present, electronic devices usually capture images with sensors equipped with Red, Green, and Blue filters that receive light reflected from objects and generate color (RGB) images. The resulting color images match human visual perception but are susceptible to insufficient ambient light and adverse weather such as fog. In contrast, near-infrared images (NIR, Near Infra-red Spectrum), which describe the thermal radiation of objects, penetrate better than RGB images under insufficient light, fog, and other adverse conditions and capture finer detail, but NIR images provide no color information and have lower resolution. Therefore, an electronic device can be equipped with both a visible light camera and an infrared camera to capture an RGB image and an NIR image; fusing the information of the two images can be used not only for image quality enhancement, but also for object recognition in extremely dark scenes, image denoising, high dynamic range (HDR, High-Dynamic Range) imaging, image dehazing, skin blemish removal, and the like.
Information fusion between an RGB image and an NIR image involves two major processes: image alignment and image fusion. Alignment is the foundation and fusion is the goal. A large alignment error leads to artifacts such as ghosting during fusion, while a poor fusion effect leads to problems such as color distortion and white edges. For traditional alignment between RGB images, feature point detection and matching are commonly used. Such feature points include Harris corners, the FAST (Features From Accelerated Segment Test) feature operator, the SURF (Speeded Up Robust Features) feature operator, and the SIFT (Scale-Invariant Feature Transform) feature operator, which are rotation-invariant and illumination-invariant. However, these feature-point-based image alignment techniques, especially the SIFT feature operator, rely heavily on the consistency of gradient magnitude and direction in structurally similar regions of the images.
However, because their information sources differ, an RGB image and an NIR image of the same scene exhibit similar structure but inconsistent gradient directions across different objects. As shown in FIGS. 1-2, FIG. 1 is an RGB image and FIG. 2 is an NIR image. In the green plant regions indicated by the two black boxes, the RGB image is darker while the NIR image is brighter; in the extremely dark region indicated by the white box, the RGB image is darker than its surroundings while the NIR image is comparable to its surroundings; in the sky and the remaining building regions, the RGB and NIR images are comparable in brightness. The essential cause is that the RGB and NIR wavebands differ and their transmittances for different objects are inconsistent. If a traditional feature-point detection and alignment technique, such as SIFT feature point detection and matching, is still used to align the RGB image and the NIR image, the alignment accuracy is poor and the alignment effect is limited, failing to meet the needs of subsequent image fusion.
On this basis, the present application proposes an image processing method, apparatus, electronic device, and computer-readable storage medium that can improve the image alignment effect, as well as a method, apparatus, electronic device, and computer-readable storage medium for determining a pixel mapping relationship of a binocular camera, which are described in detail through the following embodiments.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of this application, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. Both the first client and the second client are clients, but they are not the same client.
FIG. 3 is a schematic diagram of an application environment of an image processing method in one embodiment. As shown in FIG. 3, the application environment includes an electronic device 302 equipped with multiple cameras. The electronic device 302 can shoot through the multiple cameras, and align and fuse the captured images to enhance imaging quality. Specifically, the electronic device 302 acquires a first image captured by a first camera and a second image captured by a second camera, performs pixel mapping on the second image according to the pixel mapping relationship determined from the first camera response function of the first camera and the second camera response function of the second camera, and aligns the resulting mapped image corresponding to the second image with the first image. In other applications, the image processing method may also be implemented by a server: the server acquires the first image and the second image to be processed, for example from a database, or the electronic device 302 directly sends the captured first and second images to the server over a network for alignment processing by the server.
On the other hand, FIG. 3 is also a schematic diagram of an application environment of a method for determining a pixel mapping relationship of a binocular camera in one embodiment. Specifically, the electronic device 302 acquires a first calibration image group and a second calibration image group. The first calibration image group includes first calibration images captured by the first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group includes second calibration images captured by the second camera of the binocular camera in the same scene under different exposure times. The electronic device 302 respectively determines the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera, and determines the pixel mapping relationship between the first camera and the second camera based on the two camera response functions. In other applications, this method may also be implemented by a server: the server acquires the first and second calibration image groups, for example from a database, or the electronic device 302 directly sends the captured calibration image groups to the server over a network, and the server performs the processing of determining the pixel mapping relationship of the binocular camera.
The electronic device 302 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device; the server may be implemented as an independent server or a server cluster composed of multiple servers.
FIG. 4 is a flowchart of an image processing method in one embodiment. The image processing method in this embodiment is described using its operation on the electronic device in FIG. 3 as an example. As shown in FIG. 4, the image processing method includes processes 402 to 406.
Process 402: Acquire a first image and a second image to be processed; the first image is captured by a first camera, and the second image is captured by a second camera.
Specifically, the first image and the second image are images to be aligned, and may be captured by two cameras for the same scene, the first image being captured by the first camera and the second image by the second camera. For example, the first image may be a color image captured by a visible light camera, and the second image may be an infrared image captured by an infrared camera.
In a specific application, the electronic device may be provided with a binocular camera including the first camera and the second camera, for example two rear cameras, which can shoot simultaneously to obtain the first image and the second image to be processed.
Process 404: Perform pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on the first camera response function of the first camera and the second camera response function of the second camera.
The pixel mapping relationship reflects the mapping between the pixel values of the image captured by the first camera and the pixel values of the image captured by the second camera when the two cameras shoot the same scene simultaneously. Through the pixel mapping relationship, the images captured by the first camera and the second camera can be mapped into a common color space, for example by mapping the image captured by the first camera into the color space corresponding to the image captured by the second camera, so as to overcome the problem that, because of different information sources, the images captured by the two cameras have similar image information structures but inconsistent gradients, resulting in poor image alignment accuracy.
The pixel mapping relationship between the first camera and the second camera is determined according to the first camera response function of the first camera and the second camera response function of the second camera. The camera response function (Camera Response Function, CRF) characterizes the correspondence between the brightness of the image captured by a camera and the real-world illuminance (radiance). In general, the brightness or illuminance observed in the real world is constant and does not change from camera to camera, while the brightness of the image captured by a camera has a definite correspondence with the real-world illuminance, and this correspondence is described by the camera response function. Different cameras have different CRF curves, but each establishes a definite relationship between the brightness of the image captured by that camera and the real-world illuminance. Using real-world illuminance as a bridge, the color gamuts of different cameras can therefore be mapped into the same space, overcoming the problem that image information acquired by different cameras has similar structure but inconsistent gradients. The camera response function can be calibrated in advance from images captured by the camera. The mapped image is obtained by performing pixel mapping on the second image through the pixel mapping relationship between the first camera and the second camera, specifically by updating the pixel value of each pixel in the second image according to the pixel mapping relationship.
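To make the CRF notion concrete, one widely used formalization is the Debevec-Malik imaging model (an assumption here for illustration; the patent itself does not write out a formula). A pixel value $Z_{ij}$ at sample point $i$ under exposure time $\Delta t_j$ is modeled as

$$Z_{ij} = f(E_i \, \Delta t_j), \qquad g(Z_{ij}) = \ln f^{-1}(Z_{ij}) = \ln E_i + \ln \Delta t_j,$$

where $E_i$ is the scene irradiance (relative illuminance) and $g = \ln f^{-1}$ is the log-inverse CRF. Once $g$ is calibrated, the relative illuminance can be recovered from a single exposure as $\ln E_i = g(Z_{ij}) - \ln \Delta t_j$, which is exactly the bridge used to map pixel values between the two cameras.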
Specifically, after obtaining the first image and the second image to be processed, the electronic device obtains the pixel mapping relationship between the first camera and the second camera, and performs pixel mapping on the second image based on this relationship to obtain a mapped image in which the second image is mapped into the color space of the first image. Compared with the second image, the mapped image no longer exhibits the problem of a similar image information structure but inconsistent gradients with respect to the first image, so aligning the mapped image with the first image ensures the image alignment effect.
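As a minimal sketch of this mapping step, assuming the pixel mapping relationship has been reduced to an 8-bit lookup table (the function name and the illum_map helper are hypothetical, not from the patent), the second image can be remapped as follows:

```python
import cv2
import numpy as np

def build_pixel_lut(g1, g2, illum_map):
    """Build a 256-entry LUT mapping second-camera pixel values into the
    first camera's pixel space via the two log-inverse CRFs g1, g2."""
    lut = np.zeros(256, dtype=np.uint8)
    for z in range(256):
        ln_e1 = illum_map(g2[z])                     # second-camera illuminance -> first-camera illuminance
        lut[z] = int(np.argmin(np.abs(g1 - ln_e1)))  # invert g1 by nearest-value lookup
    return lut

# usage (g1, g2 from CRF calibration; illum_map from the illuminance mapping step):
# lut = build_pixel_lut(g1, g2, illum_map)
# mapped_image = cv2.LUT(second_image, lut)   # second_image: single-channel uint8
```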
Process 406: Align the mapped image corresponding to the second image with the first image.
The mapped image corresponding to the second image is obtained by pixel mapping through the pixel mapping relationship between the first camera and the second camera, so its gradients are more consistent with those of the first image. The mapped image and the first image are then aligned, for example by an alignment method based on SIFT feature detection and matching, so that the image captured by the first camera and the image captured by the second camera are accurately aligned, improving the image alignment effect.
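A minimal alignment sketch using OpenCV's SIFT detector and a RANSAC homography (illustrative only; the patent names SIFT matching but prescribes no specific implementation):

```python
import cv2
import numpy as np

def align_to_first(mapped_image, first_image):
    """Warp the mapped image onto the first image using SIFT matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(mapped_image, None)
    kp2, des2 = sift.detectAndCompute(first_image, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outlier matches
    h, w = first_image.shape[:2]
    return cv2.warpPerspective(mapped_image, H, (w, h))
```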
In the image processing method of this embodiment, pixel mapping is performed on the second image captured by the second camera according to the pixel mapping relationship determined from the first camera response function of the first camera and the second camera response function of the second camera, and the resulting mapped image corresponding to the second image is aligned with the first image. During image processing, applying this pixel mapping maps the second image into the pixel space of the first image by means of the camera response functions, which resolves the problem of similar image information structures with inconsistent gradients, ensures image alignment accuracy, and thereby improves the image alignment effect.
In one embodiment, the image processing method further includes determining the pixel mapping relationship based on the first camera response function of the first camera and the second camera response function of the second camera, specifically including: acquiring a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by the first camera in the same scene under different exposure times, and the second calibration image group including second calibration images captured by the second camera in the same scene under different exposure times; determining the first camera response function corresponding to the first camera based on the first calibration images; determining the second camera response function corresponding to the second camera based on the second calibration images; and determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
The first calibration image group includes first calibration images captured by the first camera in the same scene under different exposure times, and the second calibration image group includes second calibration images captured by the second camera in the same scene under different exposure times. That is, the images in both groups are captured by the corresponding cameras for the same scene, with a different exposure time for each first calibration image and for each second calibration image. In a specific implementation, the shooting scene corresponding to the two calibration image groups may be a high-dynamic-range scene including overexposed and very dark regions, to ensure that the determined pixel mapping relationship is applicable to high-dynamic-range scenes and to guarantee its range of applicability. The number of first and second calibration images and the corresponding exposure times can be set flexibly according to actual needs; for example, each group may contain 5 images captured with increasing exposure times, and the exposure times used for the first and second calibration images may differ. The exposure time can be adjusted by modifying the signal gain (gain value) and shutter speed (shutter value) of the electronic device.
Further, when the electronic device determines, i.e., calibrates, the pixel mapping relationship between the first camera and the second camera, it may first self-calibrate the first camera and the second camera separately to determine the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera, and then perform mutual calibration using the two camera response functions to obtain the pixel mapping relationship between the first camera and the second camera.
Specifically, after obtaining the first and second calibration image groups, the electronic device determines the first camera response function corresponding to the first camera based on the first calibration images, and the second camera response function corresponding to the second camera based on the second calibration images. The electronic device may first align the calibration images within each group, for example performing median threshold alignment on the images in the first and second calibration image groups using a median threshold bitmap (MTB) alignment method, and then determine the corresponding camera response functions based on the aligned first and second calibration images. Specifically, based on the luminance channel images of the first calibration images and of the second calibration images, the electronic device may obtain the first camera response function corresponding to the first camera and the second camera response function corresponding to the second camera through the Debevec algorithm. After the two camera response functions are obtained, the electronic device determines the pixel mapping relationship between the first camera and the second camera based on them. For example, the illuminance mapping relationship between the first camera and the second camera can be determined using the pixel values of matching points between the first and second calibration images together with the relative illuminance values of those matching points determined from the first and second camera response functions, and the pixel mapping relationship between the two cameras can then be determined based on this illuminance mapping relationship.
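OpenCV ships both pieces of this per-camera self-calibration pipeline, so a sketch might look like the following (file names and exposure times are illustrative assumptions, not values from the patent):

```python
import cv2
import numpy as np

# five calibration shots of the same scene at increasing exposure times
times = np.array([1/125, 1/60, 1/30, 1/15, 1/8], dtype=np.float32)
images = [cv2.imread(f"calib_{i}.png") for i in range(5)]  # hypothetical file names

# median threshold bitmap (MTB) alignment of the exposure stack, in place
cv2.createAlignMTB().process(images, images)

# Debevec calibration: returns a 256-entry response curve per color channel
response = cv2.createCalibrateDebevec().process(images, times)
```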
In this embodiment, self-calibration is performed using the calibration images captured by the first camera and the second camera to determine the first and second camera response functions, and mutual calibration is then performed using the two functions to obtain the pixel mapping relationship between the first camera and the second camera. Since the pixel mapping relationship is determined based on the first and second camera response functions, the color spaces of the images captured by the two cameras can be mapped through it, ensuring gradient consistency during image alignment and effectively improving the alignment effect.
In one embodiment, the first camera is a visible light camera. As shown in FIG. 5, the process of determining the first camera response function, i.e., determining the first camera response function corresponding to the first camera based on the first calibration images, includes processes 502 to 508.
Process 502: Acquire target channel images of the first calibration images, each corresponding to a target color channel.
A visible light camera, such as an RGB camera whose sensor includes Red, Green, and Blue filters to receive light reflected from objects, can capture color images and generate RGB color images. The target color channel is the color channel for which the corresponding camera response function is to be constructed. The camera response function is specific to the camera itself: the correspondence between the brightness of the captured image and the real-world illuminance differs from camera to camera, i.e., different cameras correspond to different camera response functions, and for the same camera the function curves of the camera response function differ across color channels. For example, an RGB image captured by a visible light camera consists of three color channels, so corresponding camera response functions can be calibrated for the R, G, and B channels respectively; the functions for the individual channels differ somewhat from each other, but each reflects the correspondence between the brightness of the image captured by the visible light camera and the real-world illuminance. The target channel image is the image of the first calibration image corresponding to the target color channel; for example, if the first calibration image is an RGB image and the target color channel is the R channel, the target channel image may be the R channel image obtained by channel separation of the RGB image. The target color channel can be set according to actual needs.
Process 504: Determine the first feature points corresponding to the same position in the same scene in each target channel image.
The first calibration images are all captured of the same scene. For a given position in the scene, the first feature point corresponding to that position is determined in each target channel image; the first feature points of all target channel images point to the same position of the scene in the real world, but the exposure times of the target channel images differ. Specifically, the electronic device may determine, from the target channel images, the first feature points corresponding to the same position in the same scene.
Process 506: Determine the channel luminance value of each first feature point in the target color channel.
After obtaining the mutually corresponding first feature points in the target channel images, the electronic device further determines the channel luminance value of each first feature point in the target color channel. Specifically, the electronic device may determine the channel pixel value of the first feature point in the target color channel and obtain the channel luminance value from that channel pixel value. When the target color channel is a single channel, the channel luminance value equals the channel pixel value.
Process 508: Determine the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points in the target color channel.
After the channel luminance values of the first feature points are obtained, the first camera response function corresponding to the first camera is determined based on them. In a specific implementation, the electronic device may solve for the first camera response function corresponding to the first camera from the channel luminance values of the first feature points in the target color channel based on the Debevec algorithm.
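For readers who want the Debevec solve itself rather than the OpenCV wrapper, a compact least-squares sketch over the sampled feature-point luminances (following Debevec and Malik's gsolve; variable names are ours, not the patent's) is:

```python
import numpy as np

def gsolve(Z, B, l, w, n=256):
    """Solve for the log-inverse CRF g and log illuminances lnE.
    Z: (N, P) int array, channel luminance of N feature points over P exposures
    B: (P,) log exposure times; l: smoothness weight; w: (n,) pixel weights"""
    N, P = Z.shape
    A = np.zeros((N * P + n - 1, n + N))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(N):                      # data-fitting equations
        for j in range(P):
            wij = w[Z[i, j]]
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * B[j]
            k += 1
    A[k, n // 2] = 1.0                      # fix g(128) = 0 to pin the curve
    k += 1
    for z in range(1, n - 1):               # curvature (smoothness) penalty terms
        A[k, z - 1], A[k, z], A[k, z + 1] = l * w[z], -2 * l * w[z], l * w[z]
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n], x[n:]                      # g over 0..255, lnE per feature point
```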
In this embodiment, for the visible light camera, the camera response function is calibrated from the target channel images of the first calibration images corresponding to the target color channel, so the camera response functions of the first camera for various channels can be determined according to actual needs.
In one embodiment, acquiring the target channel images of the first calibration images corresponding to the target color channel includes: performing channel separation on the first calibration image to obtain separated channel images, and obtaining the target channel image corresponding to the target color channel from the separated channel images.
The separated channel images are the images corresponding to the individual color channels obtained after channel separation of the first calibration image, and they correspond to the color space in which the first calibration image resides. For example, channel separation of an RGB image yields an R channel image, a G channel image, and a B channel image; channel separation of an HSV (Hue-Saturation-Value) image yields an H channel image, an S channel image, and a V channel image.
Specifically, the target color channel can be set according to actual needs. When acquiring the target channel images of the first calibration images, the electronic device performs channel separation on the first calibration image to obtain the separated channel images, and determines the target channel image corresponding to the target color channel from them. For example, the separated channel image corresponding to the target color channel may be selected as the target channel image; when the target color channel includes all separated channels, all separated channel images may also be used directly as target channel images, so as to establish the camera response functions of the first camera for all color channels. The separated channel images may also be transformed to obtain the target channel image. For example, when the target color channel is the luminance channel, i.e., the channel of the color space characterizing the brightness of the image, the target channel image is a luminance channel image; when the separated channel images of the first calibration image include the R, G, and B channel images, the mapping from the R, G, and B channels to the luminance channel, such as Y = 0.299*R + 0.587*G + 0.114*B where Y is luminance, can be used to combine the R, G, and B channel images into the luminance channel image, i.e., the target channel image.
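A short sketch of this embodiment (OpenCV loads color images in BGR channel order; the file and variable names are illustrative):

```python
import cv2
import numpy as np

first_calib = cv2.imread("calib_rgb.png")   # hypothetical first calibration image
b, g, r = cv2.split(first_calib)            # channel separation into B, G, R

# luminance channel from Y = 0.299*R + 0.587*G + 0.114*B
y = (0.299 * r.astype(np.float32)
     + 0.587 * g.astype(np.float32)
     + 0.114 * b.astype(np.float32))
target_channel_image = np.clip(y, 0, 255).astype(np.uint8)
```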
In this embodiment, the required target channel image is determined quickly from the separated channel images obtained by channel separation of the first calibration image, ensuring the processing efficiency of camera response function calibration.
在一个实施例中,获取各第一标定图像分别对应于目标色彩通道的目标通道图像,包括:将第一标定图像变换至包括目标色彩通道的目标色彩空间,得到目标色彩空间图像;根据目标色彩空间图像得到对应于目标色彩通道的目标通道图像。In one embodiment, acquiring a target channel image corresponding to each first calibration image respectively corresponding to a target color channel includes: transforming the first calibration image into a target color space including the target color channel to obtain a target color space image; according to the target color The spatial image results in a target channel image corresponding to the target color channel.
Here, the target color channel is preset according to actual requirements, and the color channels of the target color space include the target color channel. By transforming the first calibration image into the target color space, the target channel image corresponding to the target color channel can be obtained from the image in the target color space.
Specifically, when acquiring the target channel image, the electronic device transforms the color space of the first calibration image. For example, the target color space including the target color channel can be determined first, and a color space transformation can then be applied to the first calibration image to transform it into the target color space, yielding the target color space image. The electronic device obtains the target channel image corresponding to the target color channel from the target color space image; specifically, it can perform channel separation on the target color space image and take the target channel image from the resulting separated channel images.
In this embodiment, the first calibration image undergoes a color space transformation and the target channel image is obtained from the transformed result, so that camera response functions of the first camera for various channels can be obtained from the first calibration image through channel transformation processing.
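For instance, a minimal sketch of this color space transformation with OpenCV, assuming the target channel is the V channel of HSV (the conversion flag is standard OpenCV; everything else is illustrative):

```python
import cv2

def v_channel_from_bgr(calib_bgr):
    """Transform a calibration image into HSV and take the V (value) channel."""
    hsv = cv2.cvtColor(calib_bgr, cv2.COLOR_BGR2HSV)  # target color space image
    h, s, v = cv2.split(hsv)                          # channel separation
    return v                                          # target channel image
```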
In one embodiment, the second camera is an infrared camera, and determining the second camera response function of the second camera based on the second calibration images includes: determining, for a given position in the same scene, the corresponding second feature point in each second calibration image; determining the pixel value of each second feature point; and determining the second camera response function of the second camera according to the pixel values of the second feature points.
An infrared camera works as follows: an infrared illuminator emits infrared light onto objects, and the diffusely reflected infrared light is received by the camera to form an infrared image, such as an NIR image. When the second camera is an infrared camera, the second calibration image it captures is a single-channel image whose pixel values equal its luminance values. On this basis, the second camera response function of the second camera can be obtained directly from the pixel values of the second calibration images by a camera response function estimation algorithm, such as the Debevec algorithm.
Specifically, when the second camera is an infrared camera, calibrating its camera response function is similar to the camera response function calibration of the first camera. The electronic device determines, for the same position in the same scene, the corresponding second feature point in each second calibration image. The second calibration images are all captured of the same scene; for a given position in that scene, the second feature points determined in the respective second calibration images all point to the same real-world location, but the second calibration images have different exposure times. Specifically, the electronic device can determine, from each second calibration image, the second feature point corresponding to the same position in the same scene. After obtaining the second feature points, the electronic device obtains the pixel value of each second feature point and, based on the Debevec algorithm, solves for the second camera response function of the second camera from these pixel values.
In this embodiment, no channel transformation is needed for the infrared camera; the camera response function is calibrated directly from the pixel values of the second calibration images it captured, so the camera response function of the second camera can be determined quickly.
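OpenCV ships a Debevec-based estimator that can serve as a sketch of this step. The replication of the single NIR channel into three channels is an assumption of this sketch to satisfy the estimator's expected input format; the rest is the standard API:

```python
import cv2
import numpy as np

def nir_response_function(nir_images, exposure_times):
    """Sketch: estimate the camera response function of an IR camera from
    single-channel calibration images at different exposure times, using
    OpenCV's Debevec calibration."""
    times = np.asarray(exposure_times, dtype=np.float32)
    # CalibrateDebevec expects 8-bit 3-channel inputs, so replicate the
    # single NIR channel (an assumption of this sketch).
    imgs = [cv2.merge([im, im, im]) for im in nir_images]
    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(imgs, times)  # shape (256, 1, 3)
    return response[:, 0, 0]                   # one channel suffices
```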
In one embodiment, determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: acquiring at least one matching point pair, each matching point pair being obtained by feature matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; determining the first-point pixel value of the first matching point and the second-point pixel value of the second matching point in a matching point pair; determining a first relative illuminance value according to the first-point pixel value and the first camera response function; determining a second relative illuminance value according to the second-point pixel value and the second camera response function; determining an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and determining the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
Here, a matching point pair is obtained by feature matching a first matching point with a second matching point, where the first matching point is extracted from the first calibration image and the second matching point is extracted from the second calibration image. Specifically, first matching points and second matching points can be extracted from the first calibration image and the second calibration image respectively, and feature matching can be performed on the extracted points, with the matching point pairs constructed from the feature matching results. A matching point pair includes a first matching point from the first calibration image and a second matching point from the second calibration image. In a specific implementation, feature point detection algorithms such as FAST, SUSAN (Smallest Univalue Segment Assimilating Nucleus), SIFT, SURF, or LBP (Local Binary Pattern) can be applied to the first calibration image and the second calibration image to obtain the first matching points and the second matching points.
Feature matching refers to matching the obtained first matching points and second matching points to determine the corresponding matching points in the first calibration image and the second calibration image, generally the pixel points in the two calibration images that correspond to the same position in the captured scene. Specifically, feature matching between the first matching points and the second matching points can be performed using the BRIEF (Binary Robust Independent Elementary Features) descriptor, the Hamming distance, and the like, and matching point pairs are constructed from the feature matching results. Each matching point pair includes a mutually matched first matching point and second matching point, the first matching point coming from the first calibration image and the second matching point from the second calibration image.
Illuminance refers to the energy of visible light received per unit area. The image a camera captures of the real world reflects the relative illuminance perceived by the camera, and this relative illuminance has a certain proportional relationship with the true illuminance of the real world. The camera response function of a camera reflects the relationship between the pixel values of the images it captures and relative illuminance values; that is, the corresponding relative illuminance value can be obtained from the pixel value of a captured image and the camera response function. From the relative illuminance values corresponding to the two feature points of a matching point pair, the illuminance mapping relationship between the relative illuminances of the first camera and the second camera can be obtained, and the pixel mapping relationship between the first camera and the second camera can be constructed based on this illuminance mapping relationship.
Specifically, having obtained the first camera response function and the second camera response function, when determining the pixel mapping relationship between the first camera and the second camera, the electronic device acquires at least one matching point pair and determines the first-point pixel value of the first matching point and the second-point pixel value of the second matching point in each pair. After obtaining the first-point and second-point pixel values, the electronic device determines the first relative illuminance value from the first-point pixel value and the first camera response function, and the second relative illuminance value from the second-point pixel value and the second camera response function. The electronic device determines the illuminance mapping relationship from the first relative illuminance values and the corresponding second relative illuminance values; for example, it can perform statistical analysis on them to obtain the illuminance mapping relationship between the first camera and the second camera. The illuminance mapping relationship describes, for the same scene, the correspondence between the relative illuminance values of images captured by the first camera and those of images captured by the second camera. Further, the electronic device obtains the pixel mapping relationship between the first camera and the second camera based on the determined illuminance mapping relationship. The pixel mapping relationship describes, for the same scene, the correspondence between the pixel values of images captured by the first camera and those captured by the second camera; based on this correspondence, pixel mapping between images captured by the two cameras can be realized. In a specific implementation, the pixel values of images captured by the first camera can be traversed, the first relative illuminance value of each pixel value determined through the first camera response function, the corresponding second relative illuminance value determined from each first relative illuminance value and the illuminance mapping relationship, the pixel value of the image captured by the second camera determined from each second relative illuminance value and the second camera response function, and the pixel mapping relationship between the first camera and the second camera constructed from the pixel values of the images captured by the two cameras.
In this embodiment, the illuminance mapping relationship between the first camera and the second camera is determined from the pixel values of the matching points in the matching point pairs, which are obtained by feature matching first matching points extracted from the first calibration image with second matching points extracted from the second calibration image, and the pixel mapping relationship between the two cameras is obtained from the illuminance mapping relationship. Mutual calibration of the first camera and the second camera is thereby achieved; the pixel mapping relationship can resolve the problem of image information having similar structure but inconsistent gradients, ensuring image alignment precision and thus improving the image alignment effect.
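A minimal sketch of this matched-point-based mutual calibration, assuming 256-entry response curves (pixel value to relative illuminance) and a simple scale-factor model for the illuminance mapping relationship (the function name and the median-ratio estimator are illustrative assumptions):

```python
import numpy as np

def illuminance_scale(crf1, crf2, pts1_vals, pts2_vals):
    """Estimate the illuminance mapping (here modeled as a single scale factor)
    from the pixel values of matched points of the two cameras.
    crf1, crf2: arrays of shape (256,), pixel value -> relative illuminance."""
    e1 = crf1[np.asarray(pts1_vals)]  # first relative illuminance values
    e2 = crf2[np.asarray(pts2_vals)]  # second relative illuminance values
    return float(np.median(e2 / np.maximum(e1, 1e-12)))  # robust ratio estimate
```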
In one embodiment, the first calibration image and the second calibration image include a calibration target with different regions located in the same scene; determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: determining, in the first calibration image, the first-region pixel value corresponding to each region of the calibration target; determining, in the second calibration image, the second-region pixel value corresponding to each region of the calibration target; and determining the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first-region pixel value and the second-region pixel value of the same region of the calibration target.
Here, the calibration target is placed in advance in the same scene captured by the first camera and the second camera; it is divided into different regions, and each region can be given a corresponding color. The calibration target can be chosen according to actual requirements, for example a color card or a gray-scale card. When the first camera and the second camera shoot the same scene, they both capture the calibration target in the scene, and the pixel mapping relationship between the two cameras can be calibrated based on the pixel values of the regions of the calibration target.
Specifically, when the first camera response function and the second camera response function have been obtained, the pixel mapping relationship between the first camera and the second camera is to be determined, and the first calibration image and the second calibration image include a calibration target with different regions in the same scene (i.e., both cameras have captured the calibration target), the electronic device determines, in the first calibration image, the first-region pixel value of each region of the calibration target, and, in the second calibration image, the second-region pixel value of each region. Having obtained the first-region and second-region pixel values, the electronic device obtains the pixel mapping relationship between the two cameras from the correspondence between the first-region pixel value and the second-region pixel value of the same region of the calibration target. Specifically, since the calibration target is divided into multiple regions, the electronic device can determine, from the correspondence between each region's first-region pixel value in the first calibration image and its second-region pixel value in the second calibration image, the illuminance mapping relationship between the first camera and the second camera, for example obtaining the illuminance mapping relationship from the ratio of the first-region pixel value to the second-region pixel value, and determine the pixel mapping relationship between the first camera and the second camera based on the illuminance mapping relationship.
In this embodiment, the pixel mapping relationship between the first camera and the second camera is determined based on the correspondence between the first-region pixel value in the first calibration image and the second-region pixel value in the second calibration image for the same region of the calibration target captured in the same scene, thereby achieving mutual calibration of the first camera and the second camera. The pixel mapping relationship can resolve the problem of image information having similar structure but inconsistent gradients, ensuring image alignment precision and thus improving the image alignment effect.
In one embodiment, each region of the calibration target has a preset corresponding solid color.
A solid color is a color or hue not mixed with other hues. Each region of the calibration target has a preset corresponding solid color; the colors of the regions may be the same or different. Because each region has a corresponding solid color, the color within each region is pure and uniform, which improves the accuracy of determining the region pixel values, thereby ensuring the precision of the determined pixel mapping relationship and benefiting the image alignment effect. In a specific implementation, the calibration target can be a gray-scale card, a color-scale card, a color-scale chart, or the like.
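As an illustrative sketch of the gray-scale-card variant (the region coordinates are assumed to come from a card detector, and the simple per-region ratio model is an assumption for illustration):

```python
import numpy as np

def region_illuminance_map(img1, img2, regions):
    """For each solid-color region (given as (y0, y1, x0, x1) bounds),
    take the mean pixel value in both images and estimate the illuminance
    mapping as the median of the per-region ratios."""
    ratios = []
    for y0, y1, x0, x1 in regions:
        v1 = img1[y0:y1, x0:x1].mean()  # first-region pixel value
        v2 = img2[y0:y1, x0:x1].mean()  # second-region pixel value
        ratios.append(v2 / max(v1, 1e-12))
    return float(np.median(ratios))
```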
In one embodiment, performing pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain the mapped image corresponding to the second image includes: determining the original pixel value of each pixel in the second image; performing pixel value mapping on each original pixel value based on the pixel mapping relationship between the first camera and the second camera to obtain the mapped pixel value of each pixel in the second image; and updating the second image based on the mapped pixel values to obtain the mapped image corresponding to the second image.
Here, an original pixel value is the pixel value of the second image captured by the second camera before pixel mapping; a mapped pixel value is the pixel value obtained after pixel mapping the original pixel value through the pixel mapping relationship; and the mapped image is the result of updating the second image with the mapped pixel values, i.e., the mapping result obtained after the second image undergoes pixel mapping through the pixel mapping relationship.
Specifically, after obtaining the first image and the second image to be processed, the electronic device determines the original pixel value of each pixel in the second image; for example, it can traverse the pixels of the second image to obtain their original pixel values. The electronic device then obtains the pixel mapping relationship between the first camera and the second camera and performs pixel value mapping on each original pixel value based on it, i.e., maps each original pixel value into the color space of the first image according to the pixel mapping relationship, obtaining the mapped pixel value of each pixel in the second image. The electronic device updates the second image based on the obtained mapped pixel values; specifically, it can update the pixel values of the corresponding pixels in the second image with the mapped pixel values to generate the mapped image corresponding to the second image. The second image is thereby mapped into the color space of the first image, overcoming the poor alignment precision caused by image information that is similar in structure but inconsistent in gradient due to differing information sources; fusing the first image with the mapped image corresponding to the second image improves the image fusion effect.
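Since the pixel mapping relationship can be stored as a 256-entry table (as described later in this application), applying it reduces to a table lookup. A sketch with OpenCV (cv2.LUT is the standard OpenCV lookup call; the table itself is assumed to be precomputed):

```python
import cv2
import numpy as np

def map_second_image(second_img: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each original pixel value of the single-channel second image to
    its mapped pixel value via a precomputed 256-entry mapping table."""
    assert lut.shape == (256,) and lut.dtype == np.uint8
    return cv2.LUT(second_img, lut)  # per-pixel table lookup
```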
In one embodiment, aligning the first image and the mapped image corresponding to the second image includes: performing distortion correction on the first image and on the mapped image corresponding to the second image, respectively, to obtain a first distortion-corrected image and a second distortion-corrected image; performing stereo rectification on the first and second distortion-corrected images to obtain a first corrected image and a second corrected image; and performing grid alignment on the first corrected image and the second corrected image.
Here, distortion correction is used to correct image distortion caused by lens distortion, specifically including correction of radial distortion, tangential distortion, and the like. Stereo rectification is used to ensure that the image planes of the two cameras are parallel and corrected into coplanar row alignment, with the camera optical axes made parallel and the image rows aligned, which helps reduce the search range of the subsequent grid alignment. Since the content of a scene image is not all coplanar but contains multiple planes, complete alignment cannot be guaranteed when aligning over the whole image; therefore, the grid alignment method divides the image into multiple small grids and performs alignment within each grid, achieving the alignment effect.
Specifically, the electronic device obtains the calibration parameters of the first camera and the second camera and performs distortion correction and stereo rectification using them. The calibration parameters can be camera parameters obtained by pre-calibrating the two cameras, including intrinsic parameters, extrinsic parameters, distortion parameters, and so on. The electronic device performs distortion correction on the first image and on the mapped image corresponding to the second image to obtain the first and second distortion-corrected images, overcoming radial distortion, tangential distortion, and other distortion present in them and improving image quality. Further, the electronic device performs stereo rectification on the first and second distortion-corrected images; specifically, stereo rectification can be performed based on the Bouguet rectification principle to obtain the first corrected image and the second corrected image, so that the planes of the two corrected images are parallel, the optical axes are perpendicular to the image planes, and the epipoles are at infinity. The electronic device then performs grid alignment on the first and second corrected images, achieving alignment of the images captured by the first camera and the second camera.
The Bouguet rectification principle decomposes the rotation and translation matrices solved by OpenCV into rotation matrices that rotate the left and right cameras each by half, the decomposition principle being to minimize the distortion caused by reprojecting the left and right images while maximizing the common viewing area of the left and right views. Specifically, in stereo rectification based on the Bouguet principle, the rotation matrix of the right image plane relative to the left image plane is split into two matrices Rl and Rr, the composite rotation matrices of the left and right cameras. Rotating each camera by half makes the optical axes of the left and right cameras parallel; at this point the imaging planes of the two cameras are parallel, but the baseline is not yet parallel to the imaging plane. A transformation matrix Rrect is then constructed to make the baseline parallel to the imaging plane, built from the offset matrix T of the right camera relative to the left camera. Multiplying the composite rotation matrices by the transformation matrix yields the overall rotation matrices of the left and right cameras. Multiplying the left and right camera coordinate systems by their respective overall rotation matrices makes the principal optical axes of the two cameras parallel and the image planes parallel to the baseline; through these two overall rotation matrices, ideally parallel-configured stereo-rectified images are obtained.
In this embodiment, the pre-calibrated camera parameters are used to perform distortion correction and then stereo rectification on the first image and on the mapped image corresponding to the second image, overcoming distortion in camera imaging and reducing distortion of the original images; at the same time, the planes of the images captured by the two cameras are made parallel, the optical axes perpendicular to the image planes, and the epipoles placed at infinity, and grid alignment is performed on the resulting first and second corrected images, ensuring the image alignment effect.
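A sketch of the distortion correction plus rectification step using OpenCV's standard calibration API (cv2.stereoRectify implements Bouguet's method; the parameter names follow the usual OpenCV convention, and the calibration inputs are assumed to come from the pre-calibration described above):

```python
import cv2

def rectify_pair(img1, img2, K1, D1, K2, D2, R, T):
    """Distortion-correct and stereo-rectify an image pair given the
    pre-calibrated intrinsics (K), distortion (D), and extrinsics (R, T)."""
    size = (img1.shape[1], img1.shape[0])
    # Bouguet rectification: split the relative rotation between the cameras.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    # Combined undistortion + rectification maps, then remap both images.
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect1 = cv2.remap(img1, m1x, m1y, cv2.INTER_LINEAR)
    rect2 = cv2.remap(img2, m2x, m2y, cv2.INTER_LINEAR)
    return rect1, rect2
```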
In one embodiment, performing grid alignment on the first corrected image and the second corrected image includes: dividing the first corrected image and the second corrected image into grids, obtaining the first grids of the first corrected image and the second grids of the second corrected image; performing grid feature point detection in each first grid and each second grid to obtain the first grid feature points of the first grids and the second grid feature points of the second grids; and performing image transformation on the first corrected image and the second corrected image based on the first and second grid feature points, so as to align the first corrected image and the second corrected image.
Here, grid division is used to divide an image into multiple small grids which are aligned individually, avoiding the problem that an image containing multiple planes cannot be aligned as a whole. Grid feature point detection is used to detect feature points within a grid, through which the grids are aligned.
Specifically, when performing grid alignment on the first and second corrected images, the electronic device divides each into grids, obtaining the first grids of the first corrected image and the second grids of the second corrected image. The division parameters can be set according to actual needs; for example, each corrected image can be divided into N*N grids. After obtaining the grids, the electronic device performs grid feature point detection in each first grid and each second grid; feature point detection can be performed with algorithms such as FAST, SUSAN, SIFT, SURF, or LBP, obtaining the first grid feature points of the first grids and the second grid feature points of the second grids. The electronic device performs image transformation on the first and second corrected images based on these feature points to align them. In a specific implementation, the electronic device can align each first grid with its corresponding second grid, so that multiple grid pairs can be aligned in parallel, each grid pair including a mutually matched first grid and second grid. Specifically, after obtaining the first grid feature points and the second grid feature points, feature matching is performed based on them to match the first grids with the second grids and construct the grid pairs. Mismatch removal is performed for each grid pair; for example, mismatched feature point pairs can be removed with the RANSAC (Random Sample Consensus) algorithm. The electronic device further computes the homography matrix of each grid pair and applies a perspective transformation to the first grid and second grid of the pair based on the homography matrix, aligning the two grids; the aligned first image and aligned second image are obtained from the alignment results of the grid pairs.
In this embodiment, the image is divided into multiple small grids which are aligned individually, avoiding the problem that an image containing multiple planes cannot be aligned as a whole and further improving the image alignment effect.
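A condensed sketch of the per-grid alignment loop (SIFT detection, brute-force matching, RANSAC homography estimation, and perspective warping are all standard OpenCV calls; the matching details and the RANSAC threshold are illustrative assumptions):

```python
import cv2
import numpy as np

def align_grid_pair(g1, g2):
    """Align one matched grid pair: detect feature points, match them,
    remove mismatches with RANSAC, then warp grid 2 onto grid 1."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)
    matches = cv2.BFMatcher().match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # drop mismatches
    h, w = g1.shape[:2]
    return cv2.warpPerspective(g2, H, (w, h))             # perspective transform
```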
In one embodiment, after performing stereo rectification on the first and second distortion-corrected images to obtain the first corrected image and the second corrected image, the method further includes: constructing feature point matching pairs from first corrected feature points and second corrected feature points, the first corrected feature points being extracted from the first corrected image and the second corrected feature points from the second corrected image; determining projection parameters between the first corrected image and the second corrected image based on the offset parameters between the corrected feature points in each feature point matching pair; and performing projection alignment on the first and second corrected images through the projection parameters to obtain a first projection-aligned image and a second projection-aligned image.
The first corrected feature points are extracted from the first corrected image, and the second corrected feature points from the second corrected image. Specifically, feature point detection algorithms such as FAST, SUSAN, SIFT, SURF, or LBP can be applied to the first corrected image and the second corrected image respectively to obtain the first and second corrected feature points. Feature point matching pairs are constructed from the extracted first and second corrected feature points; a feature point matching pair reflects the correspondence between corrected feature points in the two corrected images, and can be constructed by feature matching the obtained first and second corrected feature points and pairing those that match successfully. That is, each feature point matching pair includes a mutually matched first corrected feature point from the first corrected image and second corrected feature point from the second corrected image.
The offset parameter characterizes the degree of alignment between the corrected feature points in a feature point matching pair; if the corrected feature points in the matching pairs are well aligned, the corresponding first and second images are also well aligned. In a specific application, the offset parameter can be measured by the distance between the corrected feature points in a matching pair, such as the Euclidean distance. The projection parameters are used for image alignment; specifically, projection mapping can be applied to the two images through the projection parameters to achieve image alignment.
Aligning the first corrected image and the second corrected image can specifically be done by projection-mapping the second corrected image or the first corrected image through the projection parameters, projecting the second corrected image into the coordinate system of the first corrected image, or the first corrected image into the coordinate system of the second corrected image, thereby achieving projection alignment of the first and second images and obtaining the first projection-aligned image and the second projection-aligned image.
Specifically, after obtaining the first and second corrected images, the electronic device constructs feature point matching pairs from the first corrected feature points extracted from the first corrected image and the second corrected feature points extracted from the second corrected image. Having constructed the matching pairs, the electronic device determines the offset parameters between the corrected feature points of each pair; for example, it can compute the distance between the corrected feature points of each pair, construct an image offset function from the distances of all matching pairs, and determine the projection parameters by solving this image offset function. With the projection parameters, the electronic device performs projection alignment on the first and second corrected images; specifically, the electronic device can projection-map the first corrected image or the second corrected image through the projection parameters to align the two corrected images. Since the projection parameters are determined from the offset parameters between the corrected feature points of the matching pairs, they can be dynamically calibrated according to the captured scene, reducing the influence of random errors and improving the image alignment achieved with them.
Further, performing grid alignment on the first corrected image and the second corrected image includes: performing grid alignment on the first projection-aligned image and the second projection-aligned image.
After the first projection-aligned image and the second projection-aligned image are obtained, grid alignment is performed on them, so that the first projection-aligned image and the second projection-aligned image are aligned in a grid-wise manner, thereby achieving alignment of the first image and the second image.
In this embodiment, during image alignment, feature point matching pairs are constructed from the first corrected feature points of the first corrected image and the second corrected feature points of the second corrected image, ensuring the matching precision of the corrected feature points in the pairs; at the same time, the projection parameters are determined from the offset parameters between the corrected feature points of the matching pairs, so they can be dynamically calibrated according to the captured scene, reducing the influence of random errors and improving the alignment achieved with the projection parameters. The first and second projection-aligned images are then grid-aligned, avoiding the problem that an image containing multiple planes cannot be aligned as a whole and further improving the image alignment effect.
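One way to realize the offset-minimizing projection parameters is a global homography fitted to the matched corrected feature points; a sketch under that assumption (a least-squares homography is only one possible instantiation of the "image offset function" described above):

```python
import cv2
import numpy as np

def projection_align(corr1, corr2, pts1, pts2):
    """Fit projection parameters (here a 3x3 homography) minimizing the
    offsets between matched corrected feature points, then project the
    second corrected image into the first corrected image's coordinates."""
    src = np.float32(pts2).reshape(-1, 1, 2)
    dst = np.float32(pts1).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, 0)  # least squares over all pairs
    h, w = corr1.shape[:2]
    return corr1, cv2.warpPerspective(corr2, H, (w, h))
```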
In one embodiment, an image processing method is provided and applied to aligning an RGB image captured by the RGB camera of a mobile phone with an NIR image captured by its NIR camera. Specifically, as shown in Fig. 6, the first image is the RGB image captured by the RGB camera and the second image is the NIR image captured by the NIR camera. After the RGB image and the NIR image are obtained, CRF correction is applied to them: specifically, pixel mapping is performed on the NIR image according to the pixel mapping relationship determined from the first camera response function of the RGB camera and the second camera response function of the NIR camera, and the resulting mapped image corresponding to the NIR image is aligned with the RGB image. The pre-calibrated camera parameters are then used to apply distortion correction to the CRF-corrected RGB image and the CRF-corrected NIR image, yielding the distortion-corrected RGB image and the distortion-corrected NIR image, and stereo rectification is applied to these to obtain the stereo-rectified RGB image and the stereo-rectified NIR image. For the stereo-rectified RGB and NIR images, grids are constructed, SIFT features extracted, features matched and mismatches removed, homography matrices computed, and perspective transformations applied, in sequence, to obtain the aligned RGB image and the aligned NIR image.
Camera calibration is used to calibrate the intrinsic and extrinsic parameters and distortion parameters of the camera sensors. The RGB camera only needs its intrinsic and distortion parameters calibrated, while the NIR camera needs its extrinsic parameters calibrated in addition to the intrinsic and distortion parameters. As shown in Fig. 7, to calibrate the camera parameters, pairs of calibration board images (an RGB image and an NIR image) are first acquired; the calibration board images are shot indoors where the illumination is weak, so supplementary lighting is required throughout the shooting. The corner points of the board are then detected, and the RGB camera and NIR camera are each calibrated with Zhang Zhengyou's calibration method to obtain their calibration parameters. The obtained calibration parameters can be stored for subsequent image correction processing.
Further, the cameras used to capture images generally need to be calibrated before leaving the factory. Calibration of both the RGB camera and the NIR camera can be accomplished by single-camera calibration, which determines the values of the camera's intrinsic and extrinsic parameters. The intrinsic parameters of a single camera can include $f_x$, $f_y$, $c_x$, and $c_y$, where $f_x$ is the focal length expressed in pixel units along the x-axis of the image coordinate system, $f_y$ is the focal length in pixel units along the y-axis, and $(c_x, c_y)$ are the coordinates of the principal point of the image plane, the intersection of the optical axis with the image plane. Here $f_x = f/d_x$ and $f_y = f/d_y$, where $f$ is the focal length of the camera, $d_x$ is the width of one pixel along the x-axis of the image coordinate system, and $d_y$ is the width of one pixel along the y-axis. The image coordinate system is established with reference to the two-dimensional image captured by the camera and specifies the position of objects in the captured image. The origin of the $(x, y)$ coordinate system lies at the intersection $(c_x, c_y)$ of the camera optical axis with the imaging plane, with length units (meters); the origin of the $(u, v)$ pixel coordinate system is at the top-left corner of the image, with count units (pixels). $(x, y)$ characterizes the perspective projection of objects from the camera coordinate system to the image coordinate system, and $(u, v)$ characterizes pixel coordinates. The conversion between $(x, y)$ and $(u, v)$ is given by formula (1):
$$
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/d_x & 0 & c_x \\ 0 & 1/d_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\tag{1}
$$
Perspective projection refers to projecting a body onto the projection plane by the central projection method, obtaining a single-plane projection image that is relatively close to the visual effect.
The extrinsic parameters of a single camera include the rotation matrix and translation matrix that convert coordinates in the world coordinate system to coordinates in the camera coordinate system. The world coordinate system reaches the camera coordinate system through a rigid body transformation, and the camera coordinate system reaches the image coordinate system through a perspective projection transformation. A rigid body transformation is the rotation and translation of a geometric object in three-dimensional space without deformation of the object. The rigid body transformation is given by formula (2):
$$
X_c = R\,X + T, \qquad
T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}
\tag{2}
$$
where $X_c$ denotes the camera coordinate system, $X$ the world coordinate system, $R$ the rotation matrix from the world coordinate system to the camera coordinate system, and $T$ the translation matrix from the world coordinate system to the camera coordinate system. The distance between the origins of the world and camera coordinate systems is jointly controlled by the components along the x, y, and z axes and has three degrees of freedom; $R$ is the combined effect of rotations about the X, Y, and Z axes respectively. $t_x$ denotes the translation along the x-axis, $t_y$ the translation along the y-axis, and $t_z$ the translation along the z-axis.
The world coordinate system is the absolute coordinate system of objective three-dimensional space and can be established at any position. For example, for each calibration image the world coordinate system can be established with the top-left corner point of the calibration board as the origin, the plane of the board as the XY plane, and the Z-axis pointing up perpendicular to the board. The camera coordinate system takes the optical center of the camera as its origin and the optical axis of the camera as the Z-axis, with its X- and Y-axes parallel to the X- and Y-axes of the image coordinate system. The principal point of the image coordinate system is the intersection of the optical axis with the image plane, and the image coordinate system takes the principal point as its origin. The pixel coordinate system has its origin at the top-left corner of the image plane.
The distortion parameters of a camera are determined from its intrinsic and extrinsic parameters. In one embodiment, a Brown polynomial can be used as the distortion model; the Brown model includes five parameters, of which three are radial distortion parameters and two are tangential distortion parameters. In other embodiments, piecewise surface function fitting can also be performed to obtain the distortion parameters.
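For reference, one common form of the five-parameter Brown model, with radial coefficients $k_1, k_2, k_3$ and tangential coefficients $p_1, p_2$; this particular parameterization is the widely used one and is given for illustration rather than quoted from this application:

$$
\begin{aligned}
x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2) \\
y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y
\end{aligned}
\qquad r^2 = x^2 + y^2
$$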
Further, before CRF correction is applied to the RGB image and the NIR image, CRF calibration is performed. Specifically, the purpose of CRF calibration is to compute the color mapping relationship between the RGB image and the NIR image. CRF calibration includes two processes, CRF self-calibration and mutual calibration: self-calibration computes the relationship between real-world illuminance and the luminance of the RGB image or the NIR image, while mutual calibration finds the pixel relationship between the RGB image and the NIR image based on the luminance-illuminance relationships obtained by self-calibration. As shown in Fig. 8, to calibrate the CRF and determine the pixel mapping relationship between the RGB camera and the NIR camera, image pairs captured by the two cameras under different exposure times are acquired, and CRF self-calibration and mutual calibration are performed in sequence based on the image pairs to determine the pixel mapping relationship between the RGB camera and the NIR camera. Specifically, a high-dynamic-range scene (containing over-exposed and over-dark regions) is selected, and five groups of images with different exposure times are captured with the RGB camera and the NIR camera respectively, the RGB images being RGB_1 to RGB_5 and the NIR images NIR_1 to NIR_5. The exposure time is modified through the gain value (signal gain) and shutter value (shutter speed) of the phone, decreasing by factors of 2; the maximum exposure times of the RGB and NIR cameras may differ, the exposure settings being (EV-2, EV-1, EV0, EV+1, EV+2). RGB_1 to RGB_5 and NIR_1 to NIR_5 are each aligned using median threshold bitmaps, giving new aligned images RGB'_1 to RGB'_5 and NIR'_1 to NIR'_5.
All RGB images are separated into R, G and B channels and their luminance channel is computed; the NIR image is single-channel and needs no separation. The Debevec method is then applied to the RGB luminance channel and to the NIR image respectively to obtain the camera response curves corresponding to the camera response functions. FIG. 9 is a schematic diagram of camera response curves in one embodiment, in which the abscissa is the image pixel value (0 to 255) and the ordinate is the relative illuminance value. Curve 1 is the camera response curve of the RGB luminance channel, and curve 2 is the camera response curve of the NIR image. The relative illuminance value bears a fixed proportional relationship to the true illuminance, and the camera response curve represents the relationship between the pixel values of the image and the relative illuminance values.
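A hedged sketch of the Debevec step using OpenCV; the exposure times below are assumed values standing in for the EV-2 to EV+2 bracket, and the file names are placeholders:

```python
import cv2
import numpy as np

# Assumed exposure times (seconds) for the five-image bracket.
times = np.array([1/32, 1/16, 1/8, 1/4, 1/2], dtype=np.float32)
stack = [cv2.imread(f"RGB_aligned_{i}.png") for i in range(1, 6)]

# Debevec calibration recovers the camera response function: a 256-entry
# curve per channel mapping pixel value -> relative illuminance.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(stack, times)   # shape (256, 1, 3)
```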
In addition to the camera response function of the RGB color space, camera response functions of other color spaces can also be constructed, such as the V channel of HSV or the separated R, G or B channels of RGB, and the choice can be adjusted according to actual needs. As shown in FIG. 10, curve 3 is the camera response curve of the NIR image, curve 4 is the camera response curve of the R-channel image in the RGB color space, curve 5 is that of the B-channel image in the RGB color space, and the overlapping curves 6 and 7 are the camera response curves corresponding respectively to the G1-channel image and the G2-channel image in the RGGB color space of the RAW-image Bayer pattern. As shown in FIG. 11, curve 8 is the camera response curve of the NIR image, and curve 9 is the camera response curve corresponding to the V-channel image in the HSV color space.
The camera response curve only relates image pixel values to relative illuminance values; the relationship between relative illuminance and true illuminance must be obtained through CRF mutual calibration. However, true illuminance would have to be measured with an illuminance meter, so to simplify the problem the relationship between the illuminance values in the response curves of the RGB camera and the NIR camera is computed instead. In a specific implementation, on the one hand, matching points between the RGB image and the NIR image can be extracted to obtain the pixel values of the matching points and their relative illuminance values in the response region, from which the illuminance mapping relationship between the two can be derived. On the other hand, a gray-scale card can be placed in the scene while the RGB camera and the NIR camera capture images; the regions of the gray-scale card are detected, the pixel values of each region of the card in the RGB image and the NIR image are obtained, and the pixel values are divided to obtain the illuminance mapping relationship. Based on the illuminance mapping relationship determined by mutual calibration, the pixel mapping relationship between RGB and NIR pixel values is established; the pixel mapping relationship describes the correspondence between an NIR brightness value and the brightness value of a given RGB channel. Using CRF calibration and correction, the NIR image can be mapped onto the brightness domain of the RGB image, which resolves the problem of images acquired by different sensors having similar structures but inconsistent gradients, and thereby improves the image alignment effect. The pixel mapping relationship can be stored as a table, saved offline after calibration, and need only be calibrated once; in use, a simple table lookup suffices, which effectively improves image processing efficiency.
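One possible construction of that lookup table from the gray-scale-card embodiment, sketched under the assumptions stated in the docstring (the function and parameter names are illustrative, and the response curves are assumed monotonic, as Debevec-recovered curves normally are):

```python
import numpy as np

def build_pixel_lut(rgb_patch_vals, nir_patch_vals, crf_rgb, crf_nir):
    """Hedged sketch: derive a 256-entry NIR -> RGB-brightness table.

    rgb_patch_vals / nir_patch_vals: mean pixel values of the same
    gray-card regions in the RGB luminance image and the NIR image.
    crf_rgb / crf_nir: 256-entry monotonic response curves mapping
    pixel value -> relative illuminance (from the Debevec step).
    """
    # Illuminance mapping: a robust single scale between the two
    # relative-illuminance domains, taken over the gray-card patches.
    e_rgb = crf_rgb[np.asarray(rgb_patch_vals, dtype=int)]
    e_nir = crf_nir[np.asarray(nir_patch_vals, dtype=int)]
    scale = np.median(e_rgb / e_nir)

    # Map every possible NIR pixel value into the RGB illuminance domain,
    # then invert the monotonic RGB response curve by interpolation.
    e_mapped = crf_nir * scale
    lut = np.interp(e_mapped, crf_rgb, np.arange(256))
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)
```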
Further, when CRF correction is performed on the RGB image and the NIR image, the CRF calibration result, that is, the pixel mapping relationship between the RGB camera and the NIR camera, is used to perform brightness mapping on the NIR image: each pixel of the NIR image is traversed quickly, and the new pixel value of each pixel is obtained by looking up the mapping relationship between RGB and NIR pixel values.
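The "traverse and look up" step above amounts to a vectorized table lookup; a hedged sketch, where `nir_image` is an assumed 8-bit single-channel image and `lut` is the 256-entry table from the calibration sketch:

```python
import cv2

# Map every NIR pixel into the RGB brightness domain in one pass.
nir_corrected = cv2.LUT(nir_image, lut)
```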
Further, distortion correction and stereo rectification are performed on the CRF correction result. Specifically, the intrinsic and extrinsic camera parameters and distortion parameters calibrated for the RGB camera and the NIR camera are used to undistort and stereo-rectify the images, bringing rows that were not coplanar into coplanar row alignment. After this step the optical axes of the two cameras are consistent and the image rows are aligned, which helps reduce the search range of the subsequent grid alignment.
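A hedged sketch of this step with OpenCV; K1, D1, K2, D2 (intrinsics and distortion), R, T (relative pose), `size` (image width and height), and the input images `rgb_img` and `nir_img` are all assumed to come from the calibration described earlier:

```python
import cv2

# Compute rectification transforms from the calibrated parameters.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)

m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

# After remapping, the images are undistorted and row-aligned, which
# narrows the search range of the subsequent grid alignment.
rgb_rect = cv2.remap(rgb_img, m1x, m1y, cv2.INTER_LINEAR)
nir_rect = cv2.remap(nir_img, m2x, m2y, cv2.INTER_LINEAR)
```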
After the stereo rectification result is obtained, it must be considered that the scene in the image is not a single plane but contains multiple planes, so an alignment method based on SIFT features applied to the whole image cannot align it completely. A grid alignment method is therefore adopted: the image is divided into many small grids, and SIFT-based alignment is applied within each small grid to achieve the alignment effect. Specifically, the stereo-rectified RGB image and the stereo-rectified NIR image are each divided into N*N grids; each grid is traversed, and SIFT feature point extraction, SIFT feature matching, RANSAC removal of mismatched points, homography computation and perspective transformation are performed to obtain the aligned RGB image and the aligned NIR image.
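A hedged per-grid sketch of that pipeline (SIFT extraction and matching, RANSAC outlier removal, homography, perspective warp); the function name, ratio-test threshold and fallbacks are illustrative choices, not taken from the disclosure:

```python
import cv2
import numpy as np

def align_cell(rgb_cell, nir_cell):
    """Align one NIR grid cell onto the corresponding RGB grid cell."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(rgb_cell, None)
    kp2, des2 = sift.detectAndCompute(nir_cell, None)
    if des1 is None or des2 is None:
        return nir_cell                      # no features in this cell

    # SIFT feature matching with Lowe's ratio test.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:
        return nir_cell                      # too few matches for a homography

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects mismatches
    if H is None:
        return nir_cell
    h, w = rgb_cell.shape[:2]
    return cv2.warpPerspective(nir_cell, H, (w, h))       # perspective transform
```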
FIG. 12 is a flowchart of a method for determining the pixel mapping relationship of a binocular camera in one embodiment. The method for determining the pixel mapping relationship of the binocular camera in this embodiment is described taking its execution on the electronic device in FIG. 3 as an example. As shown in FIG. 12, the method includes process 1202 to process 1208.
Process 1202: acquire a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images captured by the first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group includes second calibration images captured by the second camera of the binocular camera in the same scene under different exposure times.
The images in the first calibration image group and the second calibration image group are all captured by their corresponding cameras for the same scene; the first calibration images in the first group are captured with different exposure times, and the second calibration images in the second group likewise differ in exposure time. In a specific implementation, the scene corresponding to the two calibration image groups may be a high-dynamic-range scene containing overexposed and overly dark regions, so as to ensure that the determined pixel mapping relationship applies to high-dynamic-range scenes and thus has a broad range of applicability. The number of first and second calibration images and the corresponding exposure times can be set flexibly according to actual needs; for example, each group may contain five images captured with increasing exposure times, and the exposure times used for the first calibration images may differ from those used for the second calibration images. The exposure time can be adjusted by modifying the signal gain (gain value) and shutter speed (shutter value) of the electronic device. Further, the electronic device may first align the calibration images within the first and second calibration image groups, for example by median-threshold-bitmap alignment, and determine the corresponding camera response functions from the median-threshold-aligned first calibration images and the median-threshold-aligned second calibration images.
Process 1204: determine a first camera response function corresponding to the first camera based on each first calibration image.
The camera response function characterizes the correspondence between the brightness of the image captured by a camera and real-world illuminance. In general, the brightness or illuminance observed in the real world is constant and does not change from camera to camera, whereas the brightness of the captured image bears a certain correspondence to the real-world illuminance; this correspondence is described by the camera response function. Specifically, the electronic device can obtain the first camera response function corresponding to the first camera by applying the Debevec algorithm to the luminance channel images of the first calibration images.
Process 1206: determine a second camera response function corresponding to the second camera based on each second calibration image.
In the same way as for the first camera response function, the electronic device can obtain the second camera response function corresponding to the second camera by applying the Debevec algorithm to the luminance channel images of the second calibration images.
Process 1208: determine the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
The pixel mapping relationship reflects, when the first camera and the second camera simultaneously capture the same scene, the mapping between the pixel values of the pixels in the image captured by the first camera and those in the image captured by the second camera. Through the pixel mapping relationship, the images captured by the two cameras can be mapped into a common color space, for example by mapping the image captured by the first camera into the color space corresponding to the image captured by the second camera, thereby overcoming the problem that, due to differing information sources, the images captured by the first and second cameras have similar image information structures but inconsistent gradients, which leads to poor image alignment accuracy.
Specifically, after obtaining the first camera response function and the second camera response function, the electronic device determines the pixel mapping relationship between the first camera and the second camera based on them. For example, the pixel values of matching points between the first calibration image and the second calibration image, together with the relative illuminance values of those matching points determined from the first and second camera response functions, can be used to determine the illuminance mapping relationship between the first camera and the second camera, and the pixel mapping relationship between the two cameras is then determined based on that illuminance mapping relationship.
In the above method for determining the pixel mapping relationship of a binocular camera, the first camera response function of the first camera and the second camera response function of the second camera are determined from images captured by the binocular camera in the same scene under different exposure times, and the pixel mapping relationship between the first camera and the second camera is determined based on the two response functions. Because the pixel mapping relationship is determined from the cameras' response functions, it can be used to map the second image captured by the second camera of the binocular camera into the pixel space of the first image captured by the first camera, which resolves the problem of similar image information structure but inconsistent gradients, ensures the precision of image alignment, and thereby improves the alignment effect.
In one embodiment, the first camera is a visible light camera, and determining the first camera response function corresponding to the first camera based on each first calibration image includes: acquiring, for each first calibration image, a target channel image corresponding to a target color channel; determining the first feature point corresponding to the same position in the same scene in each target channel image; determining the channel luminance value of each first feature point in the target color channel; and determining the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points in the target color channel.
In one embodiment, acquiring the target channel image corresponding to the target color channel for each first calibration image includes: performing channel separation on the first calibration image to obtain separated channel images; and obtaining the target channel image corresponding to the target color channel from the separated channel images.
In one embodiment, acquiring the target channel image corresponding to the target color channel for each first calibration image includes: transforming the first calibration image into a target color space that includes the target color channel to obtain a target color space image; and obtaining the target channel image corresponding to the target color channel from the target color space image.
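A hedged sketch of these two embodiments with OpenCV; `calib_img` is an assumed BGR calibration image loaded elsewhere, and picking the G and V channels is an illustrative choice:

```python
import cv2

# First embodiment: channel separation, then pick the target color channel.
b, g, r = cv2.split(calib_img)        # separated channel images
target_channel = g                    # e.g. the G channel as target

# Second embodiment: transform to a target color space first (here HSV).
hsv = cv2.cvtColor(calib_img, cv2.COLOR_BGR2HSV)
target_channel = hsv[:, :, 2]         # V channel as the target channel image
```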
In one embodiment, the second camera is an infrared camera, and determining the second camera response function corresponding to the second camera based on each second calibration image includes: respectively determining the second feature point corresponding to the same position in the same scene in each second calibration image; determining the pixel value of each second feature point; and determining the second camera response function corresponding to the second camera according to the pixel values of the second feature points.
In one embodiment, determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: acquiring at least one matching point pair, the matching point pair being obtained by position matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; respectively determining the first point pixel value of the first matching point and the second point pixel value of the second matching point in the matching point pair; determining a first relative illuminance value according to the first point pixel value and the first camera response function; determining a second relative illuminance value according to the second point pixel value and the second camera response function; determining an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and determining the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
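A hedged sketch of the matching-point variant of the illuminance mapping step; the function and parameter names are illustrative, and the curves are assumed to be the 256-entry response curves recovered earlier:

```python
import numpy as np

def illuminance_mapping(p1_vals, p2_vals, crf1, crf2):
    """p1_vals / p2_vals: pixel values of matched points in the first and
    second calibration images; crf1 / crf2: response curves mapping
    pixel value -> relative illuminance."""
    e1 = crf1[np.asarray(p1_vals, dtype=int)]   # first relative illuminance values
    e2 = crf2[np.asarray(p2_vals, dtype=int)]   # second relative illuminance values
    return np.median(e1 / e2)                   # illuminance mapping as a robust ratio
```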
In one embodiment, the first calibration image and the second calibration image include a calibration target having different regions in the same scene, and determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function includes: determining the first region pixel values corresponding to the regions of the calibration target in the first calibration image; determining the second region pixel values corresponding to the regions of the calibration target in the second calibration image; and determining the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel values and the second region pixel values of the same region of the calibration target.
In one embodiment, each region of the calibration target has a preset corresponding solid color.
It should be understood that although the processes in the flowcharts of FIGS. 4-8 and 12 are displayed in sequence as indicated by the arrows, they are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict order restriction on the execution of these processes, and they may be executed in other orders. Moreover, at least some of the processes in FIGS. 4-8 and 12 may include multiple sub-processes or stages, which are not necessarily completed at the same moment but may be executed at different moments; nor is the execution order of these sub-processes or stages necessarily sequential, as they may be executed in turn or alternately with other processes or with at least part of the sub-processes or stages of other processes.
FIG. 13 is a structural block diagram of an image processing apparatus 1300 according to an embodiment. As shown in FIG. 13, the image processing apparatus 1300 includes a to-be-processed image acquisition module 1302, a pixel mapping processing module 1304 and an image alignment processing module 1306, wherein:
the to-be-processed image acquisition module 1302 is used to acquire a first image and a second image to be processed, the first image being captured by the first camera and the second image by the second camera;
the pixel mapping processing module 1304 is used to perform pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on the first camera response function of the first camera and the second camera response function of the second camera; and
the image alignment processing module 1306 is used to align the mapped image corresponding to the second image with the first image.
In one embodiment, the apparatus further includes a calibration image group acquisition module, a first camera response function determination module, a second camera response function determination module and a pixel mapping relationship determination module, wherein: the calibration image group acquisition module is used to acquire a first calibration image group and a second calibration image group, the first calibration image group including first calibration images captured by the first camera in the same scene under different exposure times and the second calibration image group including second calibration images captured by the second camera in the same scene under different exposure times; the first camera response function determination module is used to determine the first camera response function corresponding to the first camera based on each first calibration image; the second camera response function determination module is used to determine the second camera response function corresponding to the second camera based on each second calibration image; and the pixel mapping relationship determination module is used to determine the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
In one embodiment, the first camera is a visible light camera; the first camera response function determination module includes a target channel image acquisition module, a first feature point determination module, a channel luminance value determination module and a first camera response function obtaining module, wherein: the target channel image acquisition module is used to acquire, for each first calibration image, a target channel image corresponding to a target color channel; the first feature point determination module is used to determine the first feature point corresponding to the same position in the same scene in each target channel image; the channel luminance value determination module is used to determine the channel luminance value of each first feature point in the target color channel; and the first camera response function obtaining module is used to determine the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points in the target color channel.
In one embodiment, the target channel image acquisition module includes a channel separation module and a separated channel image processing module, wherein: the channel separation module is used to perform channel separation on the first calibration image to obtain separated channel images; and the separated channel image processing module is used to obtain the target channel image corresponding to the target color channel from the separated channel images.
In one embodiment, the target channel image acquisition module includes a target color space image acquisition module and a target color space image processing module, wherein: the target color space image acquisition module is used to transform the first calibration image into a target color space that includes the target color channel to obtain a target color space image; and the target color space image processing module is used to obtain the target channel image corresponding to the target color channel from the target color space image.
In one embodiment, the second camera is an infrared camera; the second camera response function determination module includes a second feature point determination module, a second feature point pixel determination module and a second feature point pixel processing module, wherein: the second feature point determination module is used to respectively determine the second feature point corresponding to the same position in the same scene in each second calibration image; the second feature point pixel determination module is used to determine the pixel value of each second feature point; and the second feature point pixel processing module is used to determine the second camera response function corresponding to the second camera according to the pixel values of the second feature points.
In one embodiment, the pixel mapping relationship determination module includes a matching point pair acquisition module, a matching point pair pixel determination module, a relative illuminance value determination module, an illuminance mapping relationship determination module and an illuminance mapping relationship processing module, wherein: the matching point pair acquisition module is used to acquire at least one matching point pair, the matching point pair being obtained by feature matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; the matching point pair pixel determination module is used to respectively determine the first point pixel value of the first matching point and the second point pixel value of the second matching point in the matching point pair; the relative illuminance value determination module is used to determine a first relative illuminance value according to the first point pixel value and the first camera response function, and a second relative illuminance value according to the second point pixel value and the second camera response function; the illuminance mapping relationship determination module is used to determine an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and the illuminance mapping relationship processing module is used to determine the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
In one embodiment, the first calibration image and the second calibration image include a calibration target having different regions in the same scene; the pixel mapping relationship determination module includes a first region pixel determination module, a second region pixel determination module and a region pixel analysis module, wherein: the first region pixel determination module is used to determine the first region pixel values corresponding to the regions of the calibration target in the first calibration image; the second region pixel determination module is used to determine the second region pixel values corresponding to the regions of the calibration target in the second calibration image; and the region pixel analysis module is used to determine the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel values and the second region pixel values of the same region of the calibration target.
In one embodiment, each region of the calibration target has a preset corresponding solid color.
In one embodiment, the pixel mapping processing module 1304 includes an original pixel determination module, a mapped pixel obtaining module and an image update module, wherein: the original pixel determination module is used to respectively determine the original pixel value of each pixel in the second image; the mapped pixel obtaining module is used to perform pixel value mapping on each original pixel value based on the pixel mapping relationship between the first camera and the second camera to obtain the mapped pixel value corresponding to each pixel in the second image; and the image update module is used to update the second image based on the mapped pixel values to obtain the mapped image corresponding to the second image.
In one embodiment, the image alignment processing module 1306 includes a distortion correction module, a stereo correction module and a grid alignment module, wherein: the distortion correction module is used to respectively perform distortion correction on the first image and on the mapped image corresponding to the second image to obtain a first distortion-corrected image and a second distortion-corrected image; the stereo correction module is used to respectively perform stereo correction on the first distortion-corrected image and the second distortion-corrected image to obtain a first corrected image and a second corrected image; and the grid alignment module is used to grid-align the first corrected image and the second corrected image.
In one embodiment, the grid alignment module includes a grid division module, a grid feature extraction module and an image transformation module, wherein: the grid division module is used to respectively divide the first corrected image and the second corrected image into grids, obtaining first grids corresponding to the first corrected image and second grids corresponding to the second corrected image; the grid feature extraction module is used to respectively perform grid feature point detection in each first grid and each second grid, obtaining first grid feature points corresponding to the first grids and second grid feature points corresponding to the second grids; and the image transformation module is used to perform image transformation on the first corrected image and the second corrected image based on the first grid feature points and the second grid feature points, so as to align the first corrected image and the second corrected image.
In one embodiment, the apparatus further includes a matching pair construction module, a projection parameter determination module and a projection alignment module, wherein: the matching pair construction module is used to construct feature point matching pairs from first corrected feature points and second corrected feature points, the first corrected feature points being extracted from the first corrected image and the second corrected feature points from the second corrected image; the projection parameter determination module is used to determine projection parameters between the first corrected image and the second corrected image based on the offset parameters between the corrected feature points in each feature point matching pair; and the projection alignment module is used to projectively align the first corrected image and the second corrected image using the projection parameters to obtain a first projection-aligned image and a second projection-aligned image; the grid alignment module is further used to grid-align the first projection-aligned image and the second projection-aligned image.
FIG. 14 is a structural block diagram of an apparatus 1400 for determining the pixel mapping relationship of a binocular camera according to an embodiment. As shown in FIG. 14, the apparatus 1400 for determining the pixel mapping relationship of a binocular camera includes the following modules, wherein:
the calibration image group acquisition module 1402 is used to acquire a first calibration image group and a second calibration image group; the first calibration image group includes first calibration images captured by the first camera of the binocular camera in the same scene under different exposure times, and the second calibration image group includes second calibration images captured by the second camera of the binocular camera in the same scene under different exposure times;
the first camera response function determination module 1404 is used to determine the first camera response function corresponding to the first camera based on each first calibration image;
the second camera response function determination module 1406 is used to determine the second camera response function corresponding to the second camera based on each second calibration image; and
the pixel mapping relationship determination module 1408 is used to determine the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
In one embodiment, the first camera is a visible light camera; the first camera response function determination module 1404 includes a target channel image acquisition module, a first feature point determination module, a channel luminance value determination module and a first camera response function obtaining module, wherein: the target channel image acquisition module is used to acquire, for each first calibration image, a target channel image corresponding to a target color channel; the first feature point determination module is used to determine the first feature point corresponding to the same position in the same scene in each target channel image; the channel luminance value determination module is used to determine the channel luminance value of each first feature point in the target color channel; and the first camera response function obtaining module is used to determine the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points in the target color channel.
In one embodiment, the target channel image acquisition module includes a channel separation module and a separated channel image processing module, wherein: the channel separation module is used to perform channel separation on the first calibration image to obtain separated channel images; and the separated channel image processing module is used to obtain the target channel image corresponding to the target color channel from the separated channel images.
In one embodiment, the target channel image acquisition module includes a target color space image acquisition module and a target color space image processing module, wherein: the target color space image acquisition module is used to transform the first calibration image into a target color space that includes the target color channel to obtain a target color space image; and the target color space image processing module is used to obtain the target channel image corresponding to the target color channel from the target color space image.
In one embodiment, the second camera is an infrared camera; the second camera response function determination module 1406 includes a second feature point determination module, a second feature point pixel determination module and a second feature point pixel processing module, wherein: the second feature point determination module is used to respectively determine the second feature point corresponding to the same position in the same scene in each second calibration image; the second feature point pixel determination module is used to determine the pixel value of each second feature point; and the second feature point pixel processing module is used to determine the second camera response function corresponding to the second camera according to the pixel values of the second feature points.
In one embodiment, the pixel mapping relationship determination module 1408 includes a matching point pair acquisition module, a matching point pair pixel determination module, a relative illuminance value determination module, an illuminance mapping relationship determination module and an illuminance mapping relationship processing module, wherein: the matching point pair acquisition module is used to acquire at least one matching point pair, the matching point pair being obtained by feature matching a first matching point extracted from the first calibration image with a second matching point extracted from the second calibration image; the matching point pair pixel determination module is used to respectively determine the first point pixel value of the first matching point and the second point pixel value of the second matching point in the matching point pair; the relative illuminance value determination module is used to determine a first relative illuminance value according to the first point pixel value and the first camera response function, and a second relative illuminance value according to the second point pixel value and the second camera response function; the illuminance mapping relationship determination module is used to determine an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and the illuminance mapping relationship processing module is used to determine the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
In one embodiment, the first calibration image and the second calibration image include a calibration target having different regions in the same scene; the pixel mapping relationship determination module 1408 includes a first region pixel determination module, a second region pixel determination module and a region pixel analysis module, wherein: the first region pixel determination module is used to determine the first region pixel values corresponding to the regions of the calibration target in the first calibration image; the second region pixel determination module is used to determine the second region pixel values corresponding to the regions of the calibration target in the second calibration image; and the region pixel analysis module is used to determine the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel values and the second region pixel values of the same region of the calibration target.
In one embodiment, each region of the calibration target has a preset corresponding solid color.
The division of the modules in the above image processing apparatus or apparatus for determining the pixel mapping relationship of a binocular camera is only for illustration; in other embodiments, either apparatus may be divided into different modules as required to complete all or part of its functions.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. For specific limitations of the apparatus for determining the pixel mapping relationship of a binocular camera, reference may be made to the limitations of the method for determining the pixel mapping relationship of a binocular camera above, which are likewise not repeated. Each module in the above apparatuses may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
FIG. 15 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in FIG. 15, the electronic device includes a processor and a memory connected through a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method or the method for determining the pixel mapping relationship of a binocular camera provided in the foregoing embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, or a wearable device.
The modules in the image processing apparatus or the apparatus for determining the pixel mapping relationship of a binocular camera provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the electronic device. When the computer program is executed by the processor, the processes of the methods described in the embodiments of the present application are implemented.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the processes of the image processing method.
A computer program product containing instructions, when run on a computer, causes the computer to perform the image processing method.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the processes of the method for determining the pixel mapping relationship of a binocular camera.
A computer program product containing instructions, when run on a computer, causes the computer to perform the method for determining the pixel mapping relationship of a binocular camera.
Any reference to a memory, storage, database or other medium used in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The above-described embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent of the present application. It should be pointed out that, for those of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the patent of the present application shall be subject to the appended claims.

Claims (25)

  1. An image processing method, comprising:
    acquiring a first image and a second image to be processed, the first image being captured by a first camera and the second image being captured by a second camera;
    performing pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
    aligning the mapped image corresponding to the second image with the first image.
  2. The method according to claim 1, further comprising:
    acquiring a first calibration image group and a second calibration image group, the first calibration image group comprising first calibration images captured by the first camera in a same scene under different exposure times, and the second calibration image group comprising second calibration images captured by the second camera in the same scene under different exposure times;
    determining the first camera response function corresponding to the first camera based on each of the first calibration images;
    determining the second camera response function corresponding to the second camera based on each of the second calibration images; and
    determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  3. The method according to claim 2, wherein the first camera is a visible light camera, and the determining the first camera response function corresponding to the first camera based on each of the first calibration images comprises:
    acquiring, for each of the first calibration images, a target channel image corresponding to a target color channel;
    determining a first feature point corresponding to a same position in the same scene in each of the target channel images;
    determining a channel luminance value of each of the first feature points in the target color channel; and
    determining the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points in the target color channel.
  4. The method according to claim 3, wherein the acquiring, for each of the first calibration images, the target channel image corresponding to the target color channel comprises:
    performing channel separation on the first calibration image to obtain separated channel images; and
    obtaining the target channel image corresponding to the target color channel from the separated channel images.
  5. The method according to claim 3, wherein acquiring the target channel images corresponding to the target color channel comprises:
    transforming the first calibration image into a target color space that includes the target color channel to obtain a target color space image; and
    obtaining the target channel image corresponding to the target color channel from the target color space image.
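[Editorial illustration, not part of the claims] The two routes to a target channel image in claims 4 and 5, sketched with OpenCV. The choice of the green channel and of HSV's V channel is an assumption, not something the claims fix.

```python
import cv2

img = cv2.imread("calib_cam1_0.png")  # hypothetical calibration frame

# Claim 4 route: channel separation, then select the target color channel.
b, g, r = cv2.split(img)
target_from_split = g  # e.g. green as the target channel

# Claim 5 route: transform to a color space that contains the target channel,
# e.g. HSV, and take its V (luminance-like) channel.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
target_from_space = hsv[:, :, 2]
```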
  6. The method according to claim 2, wherein the second camera is an infrared camera, and determining the second camera response function corresponding to the second camera based on the second calibration images comprises:
    determining second feature points corresponding to a same position in the scene in each of the second calibration images;
    determining a pixel value of each second feature point; and
    determining the second camera response function corresponding to the second camera according to the pixel values of the second feature points.
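[Editorial illustration, not part of the claims] For the infrared camera of claim 6, the same multi-exposure recovery can run on the single IR channel. Here `ir_frames` and `exposure_times` are hypothetical, and replicating the channel is only a workaround under the assumption that the OpenCV calibrator expects 3-channel 8-bit input.

```python
import cv2

# ir_frames: hypothetical list of single-channel 8-bit IR calibration frames,
# shot at the exposure_times used above.
ir_images = [cv2.merge([f, f, f]) for f in ir_frames]
crf_ir = cv2.createCalibrateDebevec().process(ir_images, exposure_times)
crf_ir = crf_ir[:, 0, 0]  # collapse the replicated channels to one 256-entry curve
```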
  7. The method according to claim 2, wherein determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function comprises:
    acquiring at least one matching point pair, each matching point pair being obtained by feature matching between a first matching point extracted from the first calibration image and a second matching point extracted from the second calibration image;
    determining a first pixel value of the first matching point and a second pixel value of the second matching point in the matching point pair;
    determining a first relative illuminance value according to the first pixel value and the first camera response function, and determining a second relative illuminance value according to the second pixel value and the second camera response function;
    determining an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and
    determining the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
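[Editorial illustration, not part of the claims] A minimal sketch of claim 7's illuminance-mediated mapping, assuming `crf1` and `crf2` are 256-entry arrays mapping pixel value to relative illuminance (e.g. from the recovery above) and that one matched point pair is given. The helper name is hypothetical.

```python
import numpy as np

def build_pixel_lut(crf1, crf2, z1, z2):
    """Hypothetical helper: derive a camera-2 -> camera-1 pixel lookup table
    from two response curves and one matched point pair (z1 in the first
    calibration image, z2 in the second)."""
    # Illuminance mapping: scale between the cameras' relative illuminances
    # at the matched points.
    ratio = float(crf1[z1]) / max(float(crf2[z2]), 1e-8)
    lut = np.empty(256, dtype=np.uint8)
    for z in range(256):
        e1 = ratio * crf2[z]                        # estimated camera-1 illuminance
        lut[z] = int(np.argmin(np.abs(crf1 - e1)))  # invert crf1 by nearest lookup
    return lut
```

With several matching pairs, one could average (or robustly fit) the per-pair ratios before building the table.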
  8. The method according to claim 2, wherein the first calibration image and the second calibration image include a calibration target having different regions in the same scene, and determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function comprises:
    determining, in the first calibration image, a first region pixel value corresponding to each region of the calibration target;
    determining, in the second calibration image, a second region pixel value corresponding to each region of the calibration target; and
    determining the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel value and the second region pixel value of a same region of the calibration target.
  9. The method according to claim 8, wherein each region of the calibration target has a preset corresponding solid color.
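[Editorial illustration, not part of the claims] For the solid-color calibration target of claims 8 and 9, the sparse per-region correspondences can be interpolated into a full table. Here `img1`, `img2`, and `region_masks` are hypothetical: two single-channel calibration shots and one boolean mask per patch.

```python
import numpy as np

# One (second-region value, first-region value) pair per solid-color patch.
pairs = sorted((float(img2[m].mean()), float(img1[m].mean())) for m in region_masks)
xs = [p[0] for p in pairs]  # second-camera region pixel values (claim 8)
ys = [p[1] for p in pairs]  # first-camera region pixel values

# Interpolate the per-region correspondences into a 256-entry lookup table.
lut = np.interp(np.arange(256), xs, ys).astype(np.uint8)
```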
  10. The method according to claim 1, wherein performing pixel mapping on the second image based on the pixel mapping relationship between the first camera and the second camera to obtain the mapped image corresponding to the second image comprises:
    determining an original pixel value of each pixel in the second image;
    performing pixel value mapping on each original pixel value based on the pixel mapping relationship between the first camera and the second camera to obtain a mapped pixel value corresponding to each pixel in the second image; and
    updating the second image based on the mapped pixel values to obtain the mapped image corresponding to the second image.
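[Editorial illustration, not part of the claims] With the mapping expressed as a 256-entry table, claim 10's per-pixel mapping and update collapses to a single lookup call; `second_image` and `lut` are assumed 8-bit inputs from the sketches above.

```python
import cv2

# cv2.LUT rewrites every original pixel value with its mapped value at once,
# producing the mapped image corresponding to the second image.
mapped_image = cv2.LUT(second_image, lut)
```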
  11. The method according to any one of claims 1 to 10, wherein aligning the first image with the mapped image corresponding to the second image comprises:
    performing distortion correction on the first image and on the mapped image corresponding to the second image, respectively, to obtain a first distortion-corrected image and a second distortion-corrected image;
    performing stereo rectification on the first distortion-corrected image and the second distortion-corrected image, respectively, to obtain a first corrected image and a second corrected image; and
    performing grid alignment on the first corrected image and the second corrected image.
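[Editorial illustration, not part of the claims] Claim 11's distortion correction and stereo rectification, sketched with OpenCV's standard stereo pipeline. `K1, D1, K2, D2` (intrinsics and distortion), `R, T` (extrinsics), and `size = (w, h)` are hypothetical offline-calibration results.

```python
import cv2

# Rectifying transforms and projection matrices for the two cameras.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

# Each remap applies distortion correction and rectification in one pass.
first_corrected = cv2.remap(first_image, map1x, map1y, cv2.INTER_LINEAR)
second_corrected = cv2.remap(mapped_image, map2x, map2y, cv2.INTER_LINEAR)
```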
  12. The method according to claim 11, wherein performing grid alignment on the first corrected image and the second corrected image comprises:
    dividing the first corrected image and the second corrected image into grids, respectively, to obtain first grids corresponding to the first corrected image and second grids corresponding to the second corrected image;
    performing grid feature point detection in each of the first grids and each of the second grids, respectively, to obtain first grid feature points corresponding to the first grids and second grid feature points corresponding to the second grids; and
    performing image transformation on the first corrected image and the second corrected image based on the first grid feature points and the second grid feature points, so as to align the first corrected image with the second corrected image.
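[Editorial illustration, not part of the claims] One reading of claim 12's per-grid feature detection: run a detector cell by cell so that every grid region contributes points. ORB, the 4x4 grid, and the per-cell budget are assumptions; the input is assumed to be a single-channel 8-bit image.

```python
import cv2

def grid_keypoints(gray, rows=4, cols=4, per_cell=50):
    """Detect feature points cell by cell so each grid contributes keypoints."""
    orb = cv2.ORB_create(nfeatures=per_cell)
    h, w = gray.shape
    points = []
    for i in range(rows):
        for j in range(cols):
            y0, x0 = i * h // rows, j * w // cols
            cell = gray[y0:(i + 1) * h // rows, x0:(j + 1) * w // cols]
            for kp in orb.detect(cell, None):
                kp.pt = (kp.pt[0] + x0, kp.pt[1] + y0)  # back to image coordinates
                points.append(kp)
    return points
```

Matching the per-grid points between the two corrected images then drives the image transformation (for example, a local mesh warp) that aligns them.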
  13. The method according to claim 11, further comprising, after performing stereo rectification on the first distortion-corrected image and the second distortion-corrected image to obtain the first corrected image and the second corrected image:
    constructing feature point matching pairs from first corrected feature points and second corrected feature points, the first corrected feature points being extracted from the first corrected image and the second corrected feature points being extracted from the second corrected image;
    determining projection parameters between the first corrected image and the second corrected image based on offset parameters between the corrected feature points in each feature point matching pair; and
    performing projection alignment on the first corrected image and the second corrected image using the projection parameters to obtain a first projection-aligned image and a second projection-aligned image;
    wherein performing grid alignment on the first corrected image and the second corrected image comprises:
    performing grid alignment on the first projection-aligned image and the second projection-aligned image.
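[Editorial illustration, not part of the claims] Claim 13's projection alignment sketched as robust homography estimation from matched corrected feature points. ORB matching and the RANSAC threshold are assumptions, and the corrected images are assumed to be 8-bit grayscale, as in the rectification sketch above.

```python
import cv2
import numpy as np

# Match corrected feature points between the two rectified images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(first_corrected, None)
kp2, des2 = orb.detectAndCompute(second_corrected, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des2, des1)

src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Projection parameters estimated robustly from the matched-pair offsets.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = first_corrected.shape[:2]
second_projected = cv2.warpPerspective(second_corrected, H, (w, h))
# The projection-aligned pair then feeds the grid alignment of claim 12.
```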
  14. A method for determining a pixel mapping relationship of a binocular camera, comprising:
    acquiring a first calibration image group and a second calibration image group, the first calibration image group comprising first calibration images captured by a first camera of the binocular camera of the same scene under different exposure times, and the second calibration image group comprising second calibration images captured by a second camera of the binocular camera of the same scene under the different exposure times;
    determining a first camera response function corresponding to the first camera based on the first calibration images;
    determining a second camera response function corresponding to the second camera based on the second calibration images; and
    determining a pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  15. The method according to claim 14, wherein the first camera is a visible-light camera, and determining the first camera response function corresponding to the first camera based on the first calibration images comprises:
    acquiring, for each first calibration image, a target channel image corresponding to a target color channel;
    determining first feature points corresponding to a same position in the scene in each of the target channel images;
    determining a channel luminance value of each first feature point in the target color channel; and
    determining the first camera response function corresponding to the first camera according to the channel luminance values of the first feature points in the target color channel.
  16. The method according to claim 15, wherein acquiring the target channel images corresponding to the target color channel comprises:
    performing channel separation on the first calibration image to obtain separated channel images; and
    obtaining the target channel image corresponding to the target color channel from the separated channel images.
  17. The method according to claim 15, wherein acquiring the target channel images corresponding to the target color channel comprises:
    transforming the first calibration image into a target color space that includes the target color channel to obtain a target color space image; and
    obtaining the target channel image corresponding to the target color channel from the target color space image.
  18. The method according to claim 14, wherein the second camera is an infrared camera, and determining the second camera response function corresponding to the second camera based on the second calibration images comprises:
    determining second feature points corresponding to a same position in the scene in each of the second calibration images;
    determining a pixel value of each second feature point; and
    determining the second camera response function corresponding to the second camera according to the pixel values of the second feature points.
  19. The method according to any one of claims 14 to 18, wherein determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function comprises:
    acquiring at least one matching point pair, each matching point pair being obtained by position matching between a first matching point extracted from the first calibration image and a second matching point extracted from the second calibration image;
    determining a first pixel value of the first matching point and a second pixel value of the second matching point in the matching point pair;
    determining a first relative illuminance value according to the first pixel value and the first camera response function, and determining a second relative illuminance value according to the second pixel value and the second camera response function;
    determining an illuminance mapping relationship based on the first relative illuminance value and the second relative illuminance value; and
    determining the pixel mapping relationship between the first camera and the second camera according to the illuminance mapping relationship.
  20. The method according to any one of claims 14 to 18, wherein the first calibration image and the second calibration image include a calibration target having different regions in the same scene, and determining the pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function comprises:
    determining, in the first calibration image, a first region pixel value corresponding to each region of the calibration target;
    determining, in the second calibration image, a second region pixel value corresponding to each region of the calibration target; and
    determining the pixel mapping relationship between the first camera and the second camera according to the correspondence between the first region pixel value and the second region pixel value of a same region of the calibration target.
  21. The method according to claim 20, wherein each region of the calibration target has a preset corresponding solid color.
  22. An image processing apparatus, comprising:
    a to-be-processed image acquisition module configured to acquire a first image and a second image to be processed, the first image being captured by a first camera and the second image being captured by a second camera;
    a pixel mapping processing module configured to perform pixel mapping on the second image based on a pixel mapping relationship between the first camera and the second camera to obtain a mapped image corresponding to the second image, wherein the pixel mapping relationship is determined based on a first camera response function of the first camera and a second camera response function of the second camera; and
    an image alignment processing module configured to align the mapped image corresponding to the second image with the first image.
  23. An apparatus for determining a pixel mapping relationship of a binocular camera, comprising:
    a calibration image group acquisition module configured to acquire a first calibration image group and a second calibration image group, the first calibration image group comprising first calibration images captured by a first camera of the binocular camera of the same scene under different exposure times, and the second calibration image group comprising second calibration images captured by a second camera of the binocular camera of the same scene under the different exposure times;
    a first camera response function determination module configured to determine a first camera response function corresponding to the first camera based on the first calibration images;
    a second camera response function determination module configured to determine a second camera response function corresponding to the second camera based on the second calibration images; and
    a pixel mapping relationship determination module configured to determine a pixel mapping relationship between the first camera and the second camera according to the first camera response function and the second camera response function.
  24. An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 21.
  25. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 21.
PCT/CN2021/116809 2020-11-12 2021-09-07 Image processing method and apparatus, electronic device, and computer-readable storage medium WO2022100242A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011260818.9 2020-11-12
CN202011260818.9A CN112258579B (en) 2020-11-12 2020-11-12 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2022100242A1 2022-05-19

Family

ID=74265659

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116809 WO2022100242A1 (en) 2020-11-12 2021-09-07 Image processing method and apparatus, electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112258579B (en)
WO (1) WO2022100242A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258579B (en) * 2020-11-12 2023-03-24 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113298187B (en) * 2021-06-23 2023-05-12 展讯通信(上海)有限公司 Image processing method and device and computer readable storage medium
CN113538538B (en) * 2021-07-29 2022-09-30 合肥的卢深视科技有限公司 Binocular image alignment method, electronic device, and computer-readable storage medium
CN113837133A (en) * 2021-09-29 2021-12-24 维沃移动通信有限公司 Camera data migration method and device
CN114240866B (en) * 2021-12-09 2022-07-08 广东省农业科学院环境园艺研究所 Tissue culture seedling grading method and device based on two-dimensional image and three-dimensional growth information
CN117350919A (en) * 2022-06-28 2024-01-05 中兴通讯股份有限公司 Image fusion method, device and storage medium
CN115797426B (en) * 2023-02-13 2023-05-12 合肥的卢深视科技有限公司 Image alignment method, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430298A (en) * 2015-12-08 2016-03-23 天津大学 Method for simultaneously exposing and synthesizing HDR image via stereo camera system
CN105933617A (en) * 2016-05-19 2016-09-07 中国人民解放军装备学院 High dynamic range image fusion method used for overcoming influence of dynamic problem
WO2020097130A1 (en) * 2018-11-06 2020-05-14 Flir Commercial Systems, Inc. Response normalization for overlapped multi-image applications
CN111741281A (en) * 2020-06-30 2020-10-02 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN112258579A (en) * 2020-11-12 2021-01-22 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319709B (en) * 2018-02-06 2021-03-30 Oppo广东移动通信有限公司 Position information processing method and device, electronic equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114792327A (en) * 2022-06-23 2022-07-26 中国科学院空天信息创新研究院 Image processing method and system
CN114792327B (en) * 2022-06-23 2022-11-04 中国科学院空天信息创新研究院 Image processing method and system
CN116309760A (en) * 2023-05-26 2023-06-23 安徽高哲信息技术有限公司 Cereal image alignment method and cereal detection equipment
CN116309760B (en) * 2023-05-26 2023-09-19 安徽高哲信息技术有限公司 Cereal image alignment method and cereal detection equipment
CN116993643A (en) * 2023-09-27 2023-11-03 山东建筑大学 Unmanned aerial vehicle photogrammetry image correction method based on artificial intelligence
CN116993643B (en) * 2023-09-27 2023-12-12 山东建筑大学 Unmanned aerial vehicle photogrammetry image correction method based on artificial intelligence
CN117455767A (en) * 2023-12-26 2024-01-26 深圳金三立视频科技股份有限公司 Panoramic image stitching method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112258579A (en) 2021-01-22
CN112258579B (en) 2023-03-24

Legal Events

Code Description
121  EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21890770; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  EP: PCT application non-entry in European phase (Ref document number: 21890770; Country of ref document: EP; Kind code of ref document: A1)