CN112396562A - Disparity map enhancement method based on RGB and DVS image fusion in high-dynamic-range scene

Publication number: CN112396562A (granted as CN112396562B)
Application number: CN202011283187.2A
Authority: CN (China)
Prior art keywords: image, DVS, camera, images, fusion
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 黄凯 (Huang Kai), 孟浩 (Meng Hao), 李博洋 (Li Boyang)
Assignee (original and current): Sun Yat-sen University
Application filed by Sun Yat-sen University; priority to CN202011283187.2A
Publication of CN112396562A: 2021-02-23; grant and publication of CN112396562B: 2023-09-05

Classifications

    • G06T 5/90: Image enhancement or restoration; dynamic range modification of images or parts thereof
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/13: Image analysis; edge detection
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06T 2207/20221: Image fusion; image merging
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of robot perception, and particularly relates to a disparity map enhancement method based on RGB and DVS image fusion in a high-dynamic-range scene. The method comprises the following steps: S1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera; S2, acquiring RGB (red, green and blue) images of the scene from the binocular camera and a DVS (dynamic vision sensor) image, and performing multi-scale weighted fusion after registration; S3, generating from the fused images an HDR image aimed at computer vision; and S4, generating a disparity map by using the improved binocular stereo matching algorithm SGM based on the HDR image generated in step S3. In scenes with a large imaging dynamic range, such as tunnels, the method alleviates camera underexposure and overexposure and improves the quality of the generated image; at the same time, to address the discontinuity and instability of image edge regions, it enriches edge detail information as much as possible by introducing an additional information source, thereby improving the accuracy of the finally generated disparity map at image edges.

Description

Disparity map enhancement method based on RGB and DVS image fusion in high-dynamic-range scene
Technical Field
The invention belongs to the field of robot perception, and particularly relates to a disparity map enhancing method based on RGB and DVS image fusion in a high-dynamic-range scene.
Background
HDR (High Dynamic Range) images provide a wider dynamic range and more image detail than ordinary images. An HDR image is synthesized from LDR (Low Dynamic Range) images captured at different exposure times, using the best-detailed content of the LDR image corresponding to each exposure time.
Camera calibration is a method for determining the parameters of a sensor's imaging geometric model, which relates the three-dimensional position of a point on the surface of an object in space to the corresponding pixel in the image. Camera calibration is divided into internal reference (intrinsic) calibration and external reference (extrinsic) calibration. Internal reference calibration obtains the projection relation between the camera coordinate system and the image coordinate system; external reference calibration obtains the coordinate transformation between the world coordinate system and the camera coordinate system, generally described by a rotation matrix (R) and a translation vector (T).
Image fusion synthesizes two or more images into a new image with a specific algorithm, so that the fused image contains more information than any single input. Frequently used image fusion algorithms include mathematical morphology, the IHS (intensity-hue-saturation) transform, Laplacian pyramid fusion, and the wavelet transform.
A binocular stereo matching algorithm obtains a disparity map from the left and right viewpoint images of the same scene, from which a depth map can in turn be derived. The most commonly used algorithm at present is semi-global matching (SGM).
Chinese patent CN111833393A, published on 2020.10.27, discloses a binocular stereo matching method based on edge information, which partitions pixels into regions using image edge information and a superpixel segmentation algorithm, and can ultimately obtain a more accurate disparity map in occluded regions and regions where edge information is discontinuous. However, that method is only suitable for high-quality images with a low dynamic range; in scenes with a large dynamic range, such as the entrance and exit of a tunnel, the images produced by the camera suffer from underexposure and overexposure, and the accuracy of the disparity map estimated by the method drops sharply.
Disclosure of Invention
In order to overcome at least one defect of the prior art, the invention provides a disparity map enhancement method based on RGB and DVS image fusion in a high-dynamic-range scene, which realizes disparity map enhancement more accurately, reliably, and effectively in scenes with a large imaging dynamic range.
In order to solve the technical problems, the invention adopts the following technical scheme: a disparity map enhancement method based on RGB and DVS image fusion in a high-dynamic-range scene, comprising the following steps:
S1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera;
S2, acquiring RGB (red, green and blue) images of the scene from the binocular camera and a DVS (dynamic vision sensor) image, and performing multi-scale weighted fusion after registration;
S3, generating from the fused images an HDR image aimed at computer vision;
and S4, generating a disparity map by using the improved binocular stereo matching algorithm SGM based on the HDR image generated in step S3.
Furthermore, after the binocular RGB camera and the DVS camera are deployed, the relative positions of the sensors remain unchanged across multiple data acquisitions, so only one calibration is needed in the whole process; the calibration mainly comprises internal reference calibration of the binocular RGB camera and the DVS camera and external reference calibration between the RGB camera and the DVS camera.
Furthermore, the DVS camera is a sensor triggered by changes in illumination intensity: it outputs a pulse signal when the illumination intensity changes. For a single pixel, the response depends on the change in illumination intensity rather than on its absolute value, which gives the sensor a high dynamic range. In addition, because an object edge exhibits a difference in illumination intensity between the object and the background, it is captured especially well by a DVS camera. The invention therefore uses the DVS camera to enhance the edge information of the picture.
Furthermore, the data acquired by the DVS camera is an asynchronous event stream, with no notion of the frame rate of an ordinary camera. A non-standard image frame output by the DVS is therefore obtained by setting a fixed time-slice length Δt, continuously accumulating the trigger events within the time slice, superimposing the event streams accumulated over that period, and finally passing them through an event screen plane of thickness d.
Further, when the internal reference of the RGB camera is calibrated with a checkerboard calibration plate, the camera position is fixed, the checkerboard calibration plate is then fixed to capture one group of images, and several groups of images are acquired by moving the checkerboard calibration plate. The DVS camera is sensitive only to changes in illumination intensity, so when both the DVS camera and the checkerboard calibration plate are static, the DVS cannot acquire or output image information; a continuously refreshed display screen is therefore used as the event trigger source when calibrating the internal reference of the DVS, and the internal reference calibration of the DVS camera and the RGB camera is carried out simultaneously by displaying the checkerboard calibration pattern on the screen.
Further, any positional deviation between the images of different sensors would be further amplified throughout the fused image; therefore, in step S2, the matching process is as follows: feature points are first detected with the SIFT, ORB, or SURF feature extraction algorithms and then matched; after the matching corresponding points between the images are obtained, the homography matrix between the two images is computed from the correspondence information, and the alignment of the images is computed through the homography matrix. The homography matrix can be solved from four pairs of coordinate points; once it is obtained, the source image is left-multiplied by the homography matrix to register and align it.
Further, as described in step S2, a feature-based image registration method is used before fusion to reduce the geometric differences between the sensors: a mapping transformation model is established between the images of different sensors, and the pixels of one image are mapped onto the pixels of the other image through this model.
Further, the image fusion part adopts a pyramid transformation method: the image is filtered or downsampled to obtain a pyramid-like hierarchical structure, and weighted fusion is performed on each layer of the pyramid to obtain a pyramid of fused layers. Since the resolution of the sampled layers gradually decreases, the lower-resolution but high-frequency layers are fused by the rule

C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, otherwise C_F(i,j) = C_B(i,j)

where C_A(i, j) and C_B(i, j) denote the pixel values of the two images at (i, j) and C_F(i, j) denotes the pixel value of the fused image at (i, j); the coefficient with the larger absolute value is taken as the fusion result. For the higher-resolution but lower-frequency layer, the formula

C_F(i,j) = (C_A(i,j) + C_B(i,j))/2

is adopted, taking the average value as the fusion result. After the fusion result of each layer is obtained, the layers are inverse-transformed and superimposed to obtain the overall fused image.
Further, in step S3, drastic changes in lighting conditions cause overexposure and underexposure problems that seriously affect the quality of the fused image; two groups of images with different exposure levels are therefore obtained through an automatic multiple-exposure control method, and the HDR image is then obtained by applying, to the two groups of images, the Mertens algorithm with an improved pixel-weight calculation formula.
Further, generating the HDR image specifically includes the steps of:
a measure of the response of each pixel is calculated, as follows:
E_{i,j,k} = exp(−(I_{i,j,k} − 0.5)² / (2δ²))
where k = 0 and k = 1 denote the low-exposure and the high-exposure image, respectively; I_{i,j,k} denotes the gray value of exposure image k at (i, j), and δ is a constant;
an initial weight for each pixel of the low and high exposure images is then calculated, as follows:
W_{i,j,k} = min{w_C·C_{i,j,k} + w_E·E_{i,j,k}, 1}
where C_{i,j,k} denotes the contrast weight of the low- or high-exposure image at (i, j), E_{i,j,k} the exposure measure defined above, and w_C and w_E the respective weight coefficients;
the numerical stability of the results is enhanced by optimizing a weight calculation formula, which is as follows:
W̄_{i,j,k} = W_{i,j,k} / Σ_{k′=0..N−1} W_{i,j,k′}
where N denotes the number of differently exposed images and W_{i,j,k′} denotes the initial weight of the k′-th exposure image at index (i, j);
finally, the HDR image is obtained by the weighted sum

R_{i,j} = Σ_{k′=0..N−1} W̄_{i,j,k′}·I_{i,j,k′}

where I_{i,j,k′} denotes the gray value of the k′-th exposure image at (i, j).
Further, in the improved SGM algorithm, the Census transform, which is invariant to illumination, is used in place of the mutual information (MI) of the original SGM algorithm to compute the disparity matching cost; cost aggregation is then performed over a cross-based neighborhood to improve the performance of the algorithm where gray-level and depth information are discontinuous; and disparity computation and disparity optimization follow the steps of the SGM algorithm.
The method first calibrates the binocular RGB camera and the DVS camera; after calibration, the relative positions of the sensors are kept unchanged during data acquisition so as not to invalidate the transformation relations between the sensor coordinate systems. Before the RGB images and the DVS image are fused, feature-based registration is performed to reduce the geometric differences between the sensor images. A hierarchical structure of the registered images is obtained with a pyramid transformation method, each layer is fused by weighted fusion, and the overall fused image is obtained by inverse transformation and superposition. On this basis, an HDR image is obtained from two differently exposed images using the improved Mertens algorithm. Finally, a high-quality disparity map is generated from the HDR image using an improved binocular stereo matching algorithm, namely semi-global matching (SGM).
The method effectively alleviates camera underexposure and overexposure in scenes with a large imaging dynamic range, such as tunnels, and improves the quality of the generated image; at the same time, to address the discontinuity and instability of image edge regions, it enriches edge detail information as much as possible by introducing an additional information source, thereby improving the accuracy of the finally generated disparity map at image edges.
Compared with the prior art, the beneficial effects are:
1. the edge information of the image is enhanced in a mode of fusing the binocular RGB image and the DVS image, and the accuracy of the generated parallax image at the edge of the image is improved;
2. the HDR image is obtained by using the improved Mertens algorithm through two images with different exposures, so that the high-quality parallax image can be generated in a scene with a large imaging dynamic range;
3. in the binocular matching algorithm, Census transformation is used for replacing the original mutual information calculation in the SGM algorithm, so that the illumination invariance is ensured, and the execution speed of the algorithm is increased.
Drawings
FIG. 1 is a schematic view of the overall process of the present invention.
Fig. 2 is an illustration of the DVS imaging principle of the present invention.
FIG. 3 is a schematic flow chart of the multi-scale weighted fusion method based on pyramid transformation according to the present invention.
FIG. 4 is a schematic diagram of the pyramid structure model of the present invention.
Fig. 5 is a schematic diagram of the flow of generating an HDR image according to the present invention.
Fig. 6 is a schematic flow chart of the improved SGM algorithm of the present invention.
Detailed Description
The drawings are for illustration purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
As shown in fig. 1, a disparity map enhancement method based on RGB and DVS image fusion in a high-dynamic-range scene comprises the following steps:
S1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera;
S2, acquiring RGB (red, green and blue) images of the scene from the binocular camera and a DVS (dynamic vision sensor) image, and performing multi-scale weighted fusion after registration;
S3, generating from the fused images an HDR image aimed at computer vision;
and S4, generating a disparity map by using the improved binocular stereo matching algorithm SGM based on the HDR image generated in step S3.
In one embodiment, after the binocular RGB camera and the DVS camera are deployed, the relative positions of the sensors are kept unchanged across multiple data acquisitions, so only one calibration is needed in the whole process; the calibration mainly comprises internal reference calibration of the binocular RGB camera and the DVS camera and external reference calibration between the RGB camera and the DVS camera.
Further, as shown in fig. 2, the DVS camera is a sensor triggered by changes in illumination intensity: it outputs a pulse signal when the illumination intensity changes. For a single pixel, the response depends on the change in illumination intensity rather than on its absolute value, which gives the sensor a high dynamic range. In addition, because an object edge exhibits a difference in illumination intensity between the object and the background, it is captured especially well by a DVS camera. The invention therefore uses the DVS camera to enhance the edge information of the picture.
The data acquired by the DVS camera is an asynchronous event stream, with no notion of the frame rate of an ordinary camera. A non-standard image frame output by the DVS is therefore obtained by setting a fixed time-slice length Δt, continuously accumulating the trigger events within the time slice, superimposing the event streams accumulated over that period, and finally passing them through an event screen plane of thickness d.
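To make this accumulation concrete, the following minimal sketch (not taken from the patent; the (x, y, t, polarity) event layout, the per-slice normalization, and all function names are illustrative assumptions) converts an asynchronous event stream into one accumulated frame per time slice Δt:

```python
import numpy as np

def events_to_frames(events, width, height, dt):
    # events: iterable of (x, y, t, polarity), t in seconds, polarity in {-1, +1}.
    # Sort by timestamp so that time slices are contiguous.
    events = sorted(events, key=lambda e: e[2])
    if not events:
        return []
    frames, t0 = [], events[0][2]
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, pol in events:
        # Close the current slice whenever the timestamp passes t0 + dt.
        while t >= t0 + dt:
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.int32)
            t0 += dt
        frame[y, x] += pol  # accumulate signed event counts per pixel
    frames.append(frame)
    # Normalize each accumulated slice to an 8-bit image frame.
    return [np.uint8(255 * (f - f.min()) / max(f.max() - f.min(), 1))
            for f in frames]
```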
When the internal reference of the RGB camera is calibrated with a checkerboard calibration plate, the camera position is fixed, the checkerboard calibration plate is then fixed to capture one group of images, and several groups of images are acquired by moving the checkerboard calibration plate. The DVS camera is sensitive only to changes in illumination intensity, so when both the DVS camera and the checkerboard calibration plate are static, the DVS cannot acquire or output image information; a continuously refreshed display screen is therefore used as the event trigger source when calibrating the internal reference of the DVS, and the internal reference calibration of the DVS camera and the RGB camera is carried out simultaneously by displaying the checkerboard calibration pattern on the screen.
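For reference, intrinsic calibration from several checkerboard views is commonly implemented with OpenCV along the following lines; this is a minimal sketch under assumed pattern size and square spacing rather than the patent's own code, and for the DVS the same routine would be fed checkerboard frames reconstructed from the on-screen refresh events:

```python
import cv2
import numpy as np

def calibrate_intrinsics(gray_images, pattern=(9, 6), square=0.025):
    # 3-D coordinates of the inner checkerboard corners in the board's frame.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for gray in gray_images:
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)
    # K is the intrinsic matrix, dist the distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray_images[0].shape[::-1], None, None)
    return K, dist
```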
In addition, any positional deviation between the images of different sensors would be further amplified throughout the fused image; therefore, in step S2, the matching process is as follows: feature points are first detected with the SIFT, ORB, or SURF feature extraction algorithms and then matched; after the matching corresponding points between the images are obtained, the homography matrix between the two images is computed from the correspondence information, and the alignment of the images is computed through the homography matrix. The homography matrix can be solved from four pairs of coordinate points; once it is obtained, the source image is left-multiplied by the homography matrix to register and align it.
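A minimal sketch of this registration step, assuming ORB features (one of the three detectors named above), brute-force Hamming matching, and RANSAC homography estimation in OpenCV:

```python
import cv2
import numpy as np

def register_to_reference(src_gray, ref_gray):
    # Detect and describe feature points (ORB here; SIFT or SURF
    # would slot in the same way with a float descriptor norm).
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(src_gray, None)
    kp2, des2 = orb.detectAndCompute(ref_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # A homography needs at least 4 correspondences; RANSAC rejects outliers.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warping by H realizes the "left-multiply the source image" step.
    h, w = ref_gray.shape
    return cv2.warpPerspective(src_gray, H, (w, h))
```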
In step S2, a feature-based image registration method is used before fusion to reduce the geometric differences between the sensors: a mapping transformation model is established between the images of different sensors, and the pixels of one image are mapped onto the pixels of the other image through this model.
In some embodiments, as shown in fig. 3 and 4, the image fusion part uses a pyramid transformation method: the image is filtered or downsampled to obtain a pyramid-like hierarchical structure, and weighted fusion is performed on each layer of the pyramid to obtain a pyramid of fused layers. Since the resolution of the sampled layers gradually decreases, the lower-resolution but high-frequency layers are fused by the rule

C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, otherwise C_F(i,j) = C_B(i,j)

where C_A(i, j) and C_B(i, j) denote the pixel values of the two images at (i, j) and C_F(i, j) denotes the pixel value of the fused image at (i, j); the coefficient with the larger absolute value is taken as the fusion result. For the higher-resolution but lower-frequency layer, the formula

C_F(i,j) = (C_A(i,j) + C_B(i,j))/2

is adopted, taking the average value as the fusion result. After the fusion result of each layer is obtained, the layers are inverse-transformed and superimposed to obtain the overall fused image.
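The layer-wise rules above can be sketched with a Laplacian pyramid; the specific pyramid variant is an assumption, since the embodiment only specifies a pyramid transformation. High-frequency layers keep the coefficient of larger magnitude, the low-frequency residual is averaged, and the inverse transform superimposes the layers:

```python
import cv2
import numpy as np

def pyramid_fuse(img_a, img_b, levels=4):
    # Assumes two aligned 8-bit grayscale images of the same size.
    def laplacian_pyramid(img):
        g = img.astype(np.float32)
        pyr = []
        for _ in range(levels):
            down = cv2.pyrDown(g)
            up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
            pyr.append(g - up)   # band-pass (high-frequency) layer
            g = down
        pyr.append(g)            # low-frequency residual
        return pyr

    pa, pb = laplacian_pyramid(img_a), laplacian_pyramid(img_b)
    fused = []
    for la, lb in zip(pa[:-1], pb[:-1]):
        # High-frequency layers: keep the coefficient of larger magnitude.
        fused.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    # Low-frequency residual: take the average.
    fused.append((pa[-1] + pb[-1]) / 2)

    # Inverse transform: upsample and superimpose layer by layer.
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```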
In another embodiment, in step S3, drastic changes in lighting conditions cause overexposure and underexposure problems that seriously affect the quality of the fused image; two groups of images with different exposure levels are therefore obtained through an automatic multiple-exposure control method, and the HDR image is then obtained by applying, to the two groups of images, the Mertens algorithm with an improved pixel-weight calculation formula. As shown in fig. 5, generating an HDR image specifically comprises the following steps:
a measure of the response of each pixel is calculated, as follows:
E_{i,j,k} = exp(−(I_{i,j,k} − 0.5)² / (2δ²))
where k = 0 and k = 1 denote the low-exposure and the high-exposure image, respectively; I_{i,j,k} denotes the gray value of exposure image k at (i, j), and δ is a constant;
an initial weight for each pixel of the low and high exposure images is then calculated, as follows:
W_{i,j,k} = min{w_C·C_{i,j,k} + w_E·E_{i,j,k}, 1}
where C_{i,j,k} denotes the contrast weight of the low- or high-exposure image at (i, j), E_{i,j,k} the exposure measure defined above, and w_C and w_E the respective weight coefficients;
the numerical stability of the results is enhanced by optimizing a weight calculation formula, which is as follows:
W̄_{i,j,k} = W_{i,j,k} / Σ_{k′=0..N−1} W_{i,j,k′}
where N denotes the number of differently exposed images and W_{i,j,k′} denotes the initial weight of the k′-th exposure image at index (i, j);
finally, the HDR image is obtained by the weighted sum

R_{i,j} = Σ_{k′=0..N−1} W̄_{i,j,k′}·I_{i,j,k′}

where I_{i,j,k′} denotes the gray value of the k′-th exposure image at (i, j).
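A compact sketch of this weighting pipeline under the formulas reconstructed above; the Gaussian well-exposedness form of E and the Laplacian-magnitude contrast measure C are assumptions consistent with the standard Mertens algorithm, not the patent's exact improved formulas:

```python
import cv2
import numpy as np

def fuse_exposures(low, high, w_c=1.0, w_e=1.0, delta=0.2):
    # Assumes two aligned 8-bit grayscale exposures (k = 0 low, k = 1 high).
    imgs = np.stack([low, high]).astype(np.float32) / 255.0
    weights = []
    for img in imgs:
        # Exposure measure E: Gaussian well-exposedness around mid-gray
        # (assumed form of the response measure, delta a constant).
        E = np.exp(-((img - 0.5) ** 2) / (2 * delta ** 2))
        # Contrast measure C: Laplacian magnitude (standard Mertens choice,
        # an assumption here).
        C = np.abs(cv2.Laplacian(img, cv2.CV_32F))
        # Initial weight, clipped at 1 as in W = min{w_C*C + w_E*E, 1}.
        weights.append(np.minimum(w_c * C + w_e * E, 1.0))
    W = np.stack(weights)
    W = W / (W.sum(axis=0, keepdims=True) + 1e-12)  # normalized weights
    hdr = (W * imgs).sum(axis=0)                    # weighted sum of exposures
    return np.uint8(255 * np.clip(hdr, 0, 1))
```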
In some embodiments, as shown in fig. 6, in the improved SGM algorithm, the Census transform, which is invariant to illumination, is used in place of the mutual information (MI) of the original SGM algorithm to compute the disparity matching cost; cost aggregation is then performed over a cross-based neighborhood to improve the performance of the algorithm where gray-level and depth information are discontinuous; finally, disparity computation and disparity optimization follow the steps of the SGM algorithm.
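The Census transform and Hamming-distance matching cost that replace mutual information can be sketched as follows; this is a naive NumPy illustration, and the cross-based cost aggregation and SGM path optimization steps are omitted:

```python
import numpy as np

def census_transform(img, window=5):
    # Each pixel becomes a bit string recording whether each neighbor in the
    # window is darker than the center; the comparison is invariant to
    # monotonic illumination changes. Wrap-around borders are ignored here.
    r = window // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def census_cost_volume(left, right, max_disp=64):
    # Matching cost = Hamming distance between left and right Census codes.
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    cost = np.full((h, w, max_disp), 64, dtype=np.uint8)
    for d in range(max_disp):
        diff = cl[:, d:] ^ cr[:, :w - d]
        # Popcount of the XOR'd codes gives the Hamming distance.
        ham = np.zeros_like(diff, dtype=np.uint8)
        v = diff.copy()
        while v.any():
            ham += (v & np.uint64(1)).astype(np.uint8)
            v >>= np.uint64(1)
        cost[:, d:, d] = ham
    return cost
```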
The method first calibrates the binocular RGB camera and the DVS camera; after calibration, the relative positions of the sensors are kept unchanged during data acquisition so as not to invalidate the transformation relations between the sensor coordinate systems. Before the RGB images and the DVS image are fused, feature-based registration is performed to reduce the geometric differences between the sensor images. A hierarchical structure of the registered images is obtained with a pyramid transformation method, each layer is fused by weighted fusion, and the overall fused image is obtained by inverse transformation and superposition. On this basis, an HDR image is obtained from two differently exposed images using the improved Mertens algorithm. Finally, a high-quality disparity map is generated from the HDR image using an improved binocular stereo matching algorithm, namely semi-global matching (SGM).
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. A disparity map enhancement method based on RGB and DVS image fusion in a high-dynamic-range scene, characterized by comprising the following steps:
S1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera;
S2, acquiring RGB (red, green and blue) images of the scene from the binocular camera and a DVS (dynamic vision sensor) image, and performing multi-scale weighted fusion after registration;
S3, generating from the fused images an HDR image aimed at computer vision;
and S4, generating a disparity map by using the improved binocular stereo matching algorithm SGM based on the HDR image generated in step S3.
2. The disparity map enhancement method based on RGB and DVS image fusion in the high-dynamic-range scene according to claim 1, wherein after the binocular RGB camera and the DVS camera are deployed, the relative positions of the sensors remain unchanged across multiple data acquisitions, so that only one calibration is needed in the whole process; the calibration mainly comprises internal reference calibration of the binocular RGB camera and the DVS camera and external reference calibration between the RGB camera and the DVS camera.
3. The method according to claim 2, wherein the DVS camera is used to enhance the edge information of the picture; the data acquired by the DVS camera is an asynchronous event stream, and the DVS outputs non-standard image frames: a fixed time-slice length Δt is set, trigger events are continuously accumulated within each time slice, the event streams accumulated over that period are then superimposed, and the image frame is finally obtained after passing through an event screen plane of thickness d.
4. The disparity map enhancement method based on the fusion of the RGB and DVS images in the high-dynamic-range scene as claimed in claim 2, wherein when the internal reference of the RGB camera is calibrated with a checkerboard calibration plate, the camera position is fixed, the checkerboard calibration plate is then fixed to capture one group of images, and several groups of images are acquired by moving the checkerboard calibration plate; when the internal reference of the DVS is calibrated, a continuously refreshed display screen is used as the trigger source of events, and the internal reference calibration of the DVS camera and the RGB camera is carried out simultaneously by displaying the checkerboard calibration pattern on the screen.
5. The method as claimed in claim 1, wherein in step S2, the matching process comprises: first detecting feature points with the SIFT, ORB, or SURF feature extraction algorithms and then matching them; after the matching corresponding points between the images are obtained, computing the homography matrix between the two images from the correspondence information, and computing the alignment of the images through the homography matrix; the homography matrix can be solved from four pairs of coordinate points, and once it is obtained, the source image is left-multiplied by the homography matrix to register and align it.
6. The method for enhancing a disparity map based on the fusion of RGB and DVS images in a high-dynamic-range scene as claimed in claim 5, wherein in step S2, before the fusion, a feature-based image registration method is used to reduce the geometric differences between the sensors: a mapping transformation model is established between the images of different sensors, and the pixels of one image are mapped onto the pixels of the other image through this model.
7. The disparity map enhancement method based on RGB and DVS image fusion in the high-dynamic-range scene as claimed in claim 6, wherein the image fusion part adopts a pyramid transformation method: a pyramid-like hierarchical structure is obtained by filtering or sampling the image, and weighted fusion is performed on each layer of the pyramid to obtain a pyramid of fused layers; since the resolution of the sampled layers gradually decreases, the lower-resolution but high-frequency layers are fused by the rule

C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, otherwise C_F(i,j) = C_B(i,j)

where C_A(i, j) and C_B(i, j) denote the pixel values of the two images at (i, j) and C_F(i, j) denotes the pixel value of the fused image at (i, j); for the higher-resolution but lower-frequency layer the formula

C_F(i,j) = (C_A(i,j) + C_B(i,j))/2

is adopted, taking the average value as the fusion result; after the fusion result of each layer is obtained, the layers are inverse-transformed and superimposed to obtain the overall fused image.
8. The method for enhancing disparity maps based on the fusion of RGB and DVS images in high-dynamic-range scenes according to any one of claims 1 to 7, wherein in step S3, drastic changes in lighting conditions cause overexposure and underexposure problems that seriously affect the quality of the fused image; two groups of images with different exposure levels are obtained through an automatic multiple-exposure control method, and the HDR image is then obtained by applying, to the two groups of images, the Mertens algorithm with an improved pixel-weight calculation formula.
9. The method as claimed in claim 8, wherein generating the HDR image specifically includes the following steps:
a measure of the response of each pixel is calculated, as follows:
E_{i,j,k} = exp(−(I_{i,j,k} − 0.5)² / (2δ²))
where k = 0 and k = 1 denote the low-exposure and the high-exposure image, respectively; I_{i,j,k} denotes the gray value of exposure image k at (i, j), and δ is a constant;
an initial weight for each pixel of the low and high exposure images is then calculated, as follows:
W_{i,j,k} = min{w_C·C_{i,j,k} + w_E·E_{i,j,k}, 1}
where C_{i,j,k} denotes the contrast weight of the low- or high-exposure image at (i, j), E_{i,j,k} the exposure measure defined above, and w_C and w_E the respective weight coefficients;
the numerical stability of the results is enhanced by optimizing a weight calculation formula, which is as follows:
W̄_{i,j,k} = W_{i,j,k} / Σ_{k′=0..N−1} W_{i,j,k′}
where N denotes the number of differently exposed images and W_{i,j,k′} denotes the initial weight of the k′-th exposure image at index (i, j);
finally, the HDR image is obtained by the weighted sum

R_{i,j} = Σ_{k′=0..N−1} W̄_{i,j,k′}·I_{i,j,k′}

where I_{i,j,k′} denotes the gray value of the k′-th exposure image at (i, j).
10. The method as claimed in claim 9, wherein in the improved SGM algorithm, Census transformation with illumination invariance is used to replace mutual information MI in the original SGM algorithm to calculate disparity matching cost, then cost aggregation is performed based on a cross neighborhood to improve the performance of the algorithm under the condition of discontinuous gray scale information and depth information, and finally disparity calculation and disparity optimization are performed according to the steps in the SGM algorithm.
CN202011283187.2A 2020-11-17 2020-11-17 Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene Active CN112396562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011283187.2A CN112396562B (en) 2020-11-17 2020-11-17 Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene


Publications (2)

Publication Number Publication Date
CN112396562A 2021-02-23
CN112396562B (granted) 2023-09-05

Family

ID=74599972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011283187.2A Active CN112396562B (en) 2020-11-17 2020-11-17 Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene

Country Status (1)

Country Link
CN (1) CN112396562B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933617A (en) * 2016-05-19 2016-09-07 中国人民解放军装备学院 High dynamic range image fusion method used for overcoming influence of dynamic problem
CN111062873A (en) * 2019-12-17 2020-04-24 大连理工大学 Parallax image splicing and visualization method based on multiple pairs of binocular cameras
CN111260597A (en) * 2020-01-10 2020-06-09 大连理工大学 Parallax image fusion method of multiband stereo camera
CN111833393A (en) * 2020-07-05 2020-10-27 桂林电子科技大学 Binocular stereo matching method based on edge information

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11841926B2 (en) * 2021-02-10 2023-12-12 Apple Inc. Image fusion processor circuit for dual-mode image fusion architecture
US20220253651A1 (en) * 2021-02-10 2022-08-11 Apple Inc. Image fusion processor circuit for dual-mode image fusion architecture
WO2022179412A1 (en) * 2021-02-26 2022-09-01 华为技术有限公司 Recognition method and electronic device
CN113033382A (en) * 2021-03-23 2021-06-25 哈尔滨市科佳通用机电股份有限公司 Method, system and device for identifying large-area damage fault of wagon floor
WO2022198631A1 (en) * 2021-03-26 2022-09-29 Harman International Industries, Incorporated Method, apparatus and system for auto-labeling
CN113781470A (en) * 2021-09-24 2021-12-10 商汤集团有限公司 Parallax information acquisition method, device and equipment and binocular camera system
CN113781470B (en) * 2021-09-24 2024-06-11 商汤集团有限公司 Parallax information acquisition method, device, equipment and binocular shooting system
CN113947549B (en) * 2021-10-22 2022-10-25 深圳国邦信息技术有限公司 Self-shooting video decoration prop edge processing method and related product
CN113947549A (en) * 2021-10-22 2022-01-18 深圳国邦信息技术有限公司 Self-photographing video decoration prop edge processing method and related product
WO2023143708A1 (en) * 2022-01-26 2023-08-03 Huawei Technologies Co., Ltd. Hdr reconstruction from bracketed exposures and events
CN114782492A (en) * 2022-04-22 2022-07-22 广东博华超高清创新中心有限公司 Super-resolution improving method for motion image fusing pulse data and frame data
CN115150561A (en) * 2022-05-23 2022-10-04 中国人民解放军国防科技大学 High-dynamic imaging system and method
CN115150561B (en) * 2022-05-23 2023-10-31 中国人民解放军国防科技大学 High dynamic imaging system and method
CN116416172A (en) * 2022-07-21 2023-07-11 上海砹芯科技有限公司 Image synthesis method, device, electronic equipment and storage medium
WO2024104436A1 (en) * 2022-11-17 2024-05-23 歌尔科技有限公司 Image processing method and apparatus, device, and computer-readable storage medium
CN115984327A (en) * 2023-01-03 2023-04-18 上海人工智能创新中心 Self-adaptive visual tracking method, system, equipment and storage medium
CN115984327B (en) * 2023-01-03 2024-05-07 上海人工智能创新中心 Self-adaptive vision tracking method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN112396562B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN112396562B (en) Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene
WO2021120406A1 (en) Infrared and visible light fusion method based on saliency map enhancement
CN111080724B (en) Fusion method of infrared light and visible light
CN110799991B (en) Method and system for performing simultaneous localization and mapping using convolution image transformations
CN108012080B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2019042216A1 (en) Image blurring processing method and device, and photographing terminal
CN113313661B (en) Image fusion method, device, electronic equipment and computer readable storage medium
CN108055452A (en) Image processing method, device and equipment
WO2019105297A1 (en) Image blurring method and apparatus, mobile device, and storage medium
CN108024054A (en) Image processing method, device and equipment
JP2010011223A (en) Signal processing apparatus, signal processing method, program, and recording medium
CN105979238A (en) Method for controlling global imaging consistency of multiple cameras
CN110956661A (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN112348775B (en) Vehicle-mounted looking-around-based pavement pit detection system and method
CN108694741A (en) A kind of three-dimensional rebuilding method and device
CN110428477B (en) Method for forming image of event camera without influence of speed
CN110660090A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN111932587A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110910456B (en) Three-dimensional camera dynamic calibration method based on Harris angular point mutual information matching
TW201947536A (en) Image processing method and image processing device
CN108053438A (en) Depth of field acquisition methods, device and equipment
CN108156369A (en) Image processing method and device
CN112648935A (en) Image processing method and device and three-dimensional scanning system
CN114331835A (en) Panoramic image splicing method and device based on optimal mapping matrix

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant