CN112396562B - Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene


Info

Publication number
CN112396562B
Authority
CN
China
Prior art keywords
image
images
dvs
camera
fusion
Prior art date
Legal status
Active
Application number
CN202011283187.2A
Other languages
Chinese (zh)
Other versions
CN112396562A (en)
Inventor
黄凯 (Huang Kai)
孟浩 (Meng Hao)
李博洋 (Li Boyang)
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202011283187.2A priority Critical patent/CN112396562B/en
Publication of CN112396562A publication Critical patent/CN112396562A/en
Application granted granted Critical
Publication of CN112396562B publication Critical patent/CN112396562B/en


Classifications

    • G06T5/90
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/13 Edge detection
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20221 Image fusion; Image merging
    • Y02T10/40 Engine management systems

Abstract

The invention belongs to the field of robot perception, and particularly relates to a disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene. The method comprises the following steps: S1, deploying a binocular RGB camera and a DVS camera, and calibrating them; S2, acquiring RGB images and DVS images of the scene, and performing multi-scale weighted fusion after registration; S3, generating an HDR image suited to computer vision from the fused images; S4, generating a disparity map from the HDR image of step S3 using an improved binocular stereo matching algorithm (SGM). In scenes with a large imaging dynamic range, such as tunnels, the method alleviates camera underexposure and overexposure and improves the quality of the generated image. At the same time, to address the discontinuity and instability of image edge regions, it enriches edge detail as much as possible by introducing an additional information source, improving the accuracy of the final disparity map at image edges.

Description

Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene
Technical Field
The invention belongs to the field of robot perception, and particularly relates to a disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene.
Background
HDR (High Dynamic Range) images provide a larger dynamic range and more image detail than ordinary images. An HDR image is synthesized from LDR (Low Dynamic Range) images taken at different exposure times, using the LDR image with the best detail at each exposure.
Camera calibration is the process of determining the parameters of a sensor's imaging geometric model, which relates the three-dimensional position of a point on the surface of a spatial object to its corresponding pixel in the image. Calibration is divided into intrinsic (internal parameter) calibration and extrinsic (external parameter) calibration. Intrinsic calibration obtains the projection relation between the camera coordinate system and the image coordinate system; extrinsic calibration obtains the coordinate transformation between the world coordinate system and the camera coordinate system, generally described by a rotation matrix (R) and a translation vector (T).
Image fusion integrates two or more images into a new image using a specific algorithm, so that the fused image contains more information than any single source. Commonly used image fusion algorithms include mathematical morphology, IHS transformation, Laplacian pyramid fusion, and wavelet transformation.
A binocular stereo matching algorithm computes a disparity map from the left and right viewpoint images of the same scene, from which a depth map can be derived. The most commonly used algorithm at present is the semi-global matching (SGM) algorithm.
Chinese patent CN111833393A, published 2020.10.27, discloses a binocular stereo matching method based on edge information, which partitions pixels with a superpixel segmentation algorithm guided by image edge information and thereby obtains a more accurate disparity map in occluded regions and regions where edge information is discontinuous. However, that method is only suitable for high-quality, low dynamic range images; in scenes with a large dynamic range, such as a tunnel entrance, the camera output suffers from underexposure and overexposure, and the accuracy of the disparity map estimated by that method drops sharply.
Disclosure of Invention
The invention aims to overcome at least one defect of the prior art by providing a disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene, which enhances disparity maps more accurately, reliably and effectively in scenes with a large imaging dynamic range.
In order to solve the technical problems, the invention adopts the following technical scheme: a disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene comprises the following steps:
s1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera;
s2, acquiring RGB images and DVS images of a binocular camera in a scene, and performing multi-scale weighted fusion after registration;
s3, generating an HDR image aiming at computer vision for the fused image;
s4, generating a parallax image based on the HDR image generated in the step S3 by using an improved binocular stereo matching algorithm SGM.
Further, after the binocular RGB camera and the DVS camera are deployed, the relative positions of the sensors must remain unchanged across repeated data acquisitions, so calibration is needed only once; the calibration mainly comprises intrinsic calibration of the binocular RGB camera and the DVS camera, and extrinsic calibration of the RGB camera and DVS camera combination.
Further, the DVS camera is a sensor triggered by changes in illumination intensity: a pixel outputs a pulse signal when the illumination intensity at that pixel changes. Because each pixel responds to the change in illumination intensity rather than its absolute value, the sensor has a high dynamic range. In addition, object edges are well captured by the DVS camera because of the illumination intensity difference between an object and its background. The invention therefore uses the DVS camera to enhance the edge information of the picture.
Further, the data acquired by the DVS camera is an asynchronous event stream without the frame-rate concept of an ordinary camera, so the DVS does not output standard image frames. Frames are constructed by setting a fixed time-slice length Δt and continuously accumulating trigger events within the slice, stacking the event streams accumulated over the period, and finally passing them through an event screen plane of thickness d to obtain an image frame.
Further, when the RGB camera is calibrated for intrinsic parameters with a checkerboard calibration plate, the camera position is fixed, a group of images is captured with the checkerboard plate held still, and multiple groups are obtained by moving the plate. The DVS camera is sensitive only to changes in illumination intensity, so when both the DVS camera and the checkerboard plate are static, the DVS produces no image output; a continuously refreshing display screen is therefore used as the event-triggering source during DVS intrinsic calibration, and by displaying the checkerboard calibration plate on the screen, the intrinsic calibration of the DVS camera and the RGB camera can be carried out simultaneously.
Further, if a positional deviation exists between the images of different sensors, fusing the whole images would further amplify the deviation. Therefore, in step S2 the matching process is as follows: feature points are first detected with a feature extraction algorithm such as SIFT, ORB, or SURF, and then matched; from the matched corresponding points, the homography matrix between the two images is computed, and the images are aligned using this matrix. A homography matrix can be computed from four pairs of corresponding points, and once it is obtained, the source images can be registered and aligned.
Further, as described in step S2, before fusion a feature-based image registration method is used to reduce the geometric spatial differences between different sensor images: a mapping transformation model between the sensor images is established, and the pixels of one image are mapped onto the pixels of the other through this model.
Further, the image fusion part adopts a pyramid transformation method: the images are filtered and downsampled to obtain a pyramid-like layered structure, and data fusion is performed on each layer of the pyramid with a weighted fusion method to obtain a pyramid of fused image layers. As the resolution of the sampled layers gradually decreases, the following rule is used to fuse the lower-resolution but higher-frequency layers:

C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, otherwise C_F(i,j) = C_B(i,j)

wherein C_A(i,j) and C_B(i,j) represent the pixel values of the two sets of images at (i,j) and C_F(i,j) represents the pixel value of the fused image at (i,j); the coefficient with the larger absolute value is taken as the fusion result. For the higher-resolution but lower-frequency layers the formula

C_F(i,j) = (C_A(i,j) + C_B(i,j)) / 2

is used, taking the average as the fusion result. After the fusion result of each layer is obtained, the layers are inverse-transformed and superposed to obtain the overall fused image.
Further, in step S3, overexposure and underexposure caused by drastic changes in lighting conditions seriously degrade the quality of the fused image; two sets of images with different exposure levels are therefore obtained by an automatic multi-exposure control method, and the Mertens algorithm with an improved pixel-weight calculation formula is applied to the two sets to obtain the HDR image.
Further, generating the HDR image specifically comprises the following steps:
the response measure E_{i,j,k} of each pixel is calculated first (the formula appears as an image in the original disclosure and is not reproduced here), where k=0 denotes the low-exposure image, k=1 denotes the high-exposure image, I_{i,j} denotes the gray value of the image at (i,j), and δ is a constant;
the initial weight of each pixel of the low-exposure and high-exposure images is then calculated as:

W_{i,j,k} = min{ w_C·C_{i,j,k} + w_E·E_{i,j,k}, 1 }

wherein C_{i,j,k} denotes the contrast weight of the low- or high-exposure image at (i,j), and w_C and w_E denote the weighting coefficients of the contrast and response terms respectively;
the numerical stability of the result is enhanced by normalizing the weights:

Ŵ_{i,j,k} = W_{i,j,k} / Σ_{k'=1..N} W_{i,j,k'}

wherein N denotes the number of differently exposed images and W_{i,j,k'} denotes the initial weight of the k'-th exposure image at (i,j);
finally the HDR image is obtained by the weighted sum

HDR_{i,j} = Σ_{k'=1..N} Ŵ_{i,j,k'}·I_{i,j,k'}

where I_{i,j,k'} denotes the gray value of the k'-th exposure image at (i,j).
Further, in the improved SGM algorithm, a Census transform with illumination invariance replaces the mutual information (MI) of the original SGM algorithm for computing the disparity matching cost; cost aggregation is then performed over cross-based neighborhoods to improve the performance of the algorithm where gray-level and depth information are discontinuous; finally, disparity computation and disparity optimization follow the steps of the SGM algorithm.
According to the invention, the binocular RGB camera and the DVS camera are calibrated, and the relative positions among the sensors are kept unchanged during data acquisition after calibration is completed, so that the conversion relation among the sensor coordinate systems is prevented from being damaged. Before fusing the RGB images and DVS images, feature-based registration of the images is performed to reduce the geometrical spatial differences between the different sensor images. The registered images are subjected to pyramid transformation to obtain a layered structure, each layer is fused by using a weighted fusion method, and the whole fused image is obtained by an inverse transformation superposition mode. On the basis of the fusion, the HDR image is obtained from two differently exposed images using the Mertens algorithm after modification. Finally, a modified binocular stereo matching algorithm, namely a semi-global matching algorithm (SGM), is used for generating a high-quality disparity map from the HDR image.
The method effectively alleviates camera underexposure and overexposure in scenes with a large imaging dynamic range, such as tunnels, and improves the quality of the generated image. At the same time, to address the discontinuity and instability of image edge regions, it enriches edge detail as much as possible by introducing an additional information source, improving the accuracy of the final disparity map at image edges.
Compared with the prior art, the invention has the following beneficial effects:
1. The edge information of the image is enhanced by fusing the binocular RGB images with the DVS images, improving the accuracy of the generated disparity map at image edges;
2. The HDR image is obtained from two differently exposed images using the improved Mertens algorithm, so the invention can generate a high-quality disparity map in scenes with a large imaging dynamic range;
3. In the binocular matching algorithm, the Census transform replaces the mutual information computation of the original SGM algorithm, ensuring illumination invariance while improving the execution speed of the algorithm.
Drawings
Fig. 1 is a schematic overall flow chart of the method of the present invention.
Fig. 2 is an illustration of the DVS imaging principle of the present invention.
Fig. 3 is a flow chart of the multi-scale weighted fusion method based on pyramid transformation.
Fig. 4 is a schematic diagram of the image pyramid structure model of the present invention.
Fig. 5 is a flowchart illustration of the present invention generating an HDR image.
Fig. 6 is a schematic flow chart of the SGM algorithm modified in the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the invention. For the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent actual product dimensions. It will be appreciated by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted. The positional relationships described in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
As shown in fig. 1, a disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene includes the following steps:
s1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera;
s2, acquiring RGB images and DVS images of a binocular camera in a scene, and performing multi-scale weighted fusion after registration;
s3, generating an HDR image aiming at computer vision for the fused image;
s4, generating a parallax image based on the HDR image generated in the step S3 by using an improved binocular stereo matching algorithm SGM.
In one embodiment, after the binocular RGB camera and the DVS camera are deployed, the relative positions of the sensors must remain unchanged across repeated data acquisitions, so calibration is needed only once; the calibration mainly comprises intrinsic calibration of the binocular RGB camera and the DVS camera, and extrinsic calibration of the RGB camera and DVS camera combination.
Further, as shown in fig. 2, the DVS camera is a sensor triggered by changes in illumination intensity: a pixel outputs a pulse signal when the illumination intensity at that pixel changes. Because each pixel responds to the change in illumination intensity rather than its absolute value, the sensor has a high dynamic range. In addition, object edges are well captured by the DVS camera because of the illumination intensity difference between an object and its background. The invention therefore uses the DVS camera to enhance the edge information of the picture.
The data acquired by the DVS camera is an asynchronous event stream without the frame-rate concept of an ordinary camera, so the DVS does not output standard image frames. Frames are constructed by setting a fixed time-slice length Δt and continuously accumulating trigger events within the slice, stacking the event streams accumulated over the period, and finally passing them through an event screen plane of thickness d to obtain an image frame.
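By way of illustration only, a minimal sketch of this event-accumulation step is given below (not the patent's implementation); the event tuple layout (x, y, t, polarity), the frame size, and the use of a clamp range standing in for the screen-plane thickness d are assumptions:

```python
import numpy as np

def events_to_frame(events, width, height, t_start, dt, d=5):
    """Accumulate asynchronous DVS events falling in the time slice
    [t_start, t_start + dt) into a single frame. Polarity is +1/-1;
    accumulation is clamped to +/- d as a stand-in for the event
    screen plane of thickness d, then mapped to an 8-bit gray image."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_start + dt:
            frame[y, x] += polarity          # stack trigger events in the slice
    frame = np.clip(frame, -d, d)            # clamp accumulated activity
    return ((frame + d) * (255.0 / (2 * d))).astype(np.uint8)
```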
When the RGB camera is calibrated for intrinsic parameters with a checkerboard calibration plate, the camera position is fixed, a group of images is captured with the checkerboard plate held still, and multiple groups are obtained by moving the plate. The DVS camera is sensitive only to changes in illumination intensity, so when both the DVS camera and the checkerboard plate are static, the DVS produces no image output; a continuously refreshing display screen is therefore used as the event-triggering source during DVS intrinsic calibration, and by displaying the checkerboard calibration plate on the screen, the intrinsic calibration of the DVS camera and the RGB camera can be carried out simultaneously.
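For reference, a minimal OpenCV sketch of the checkerboard intrinsic-calibration step described above; the board size, square size, and input image list are illustrative assumptions rather than parameters fixed by the patent:

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board_size=(9, 6), square=0.025):
    """Intrinsic calibration from several grayscale checkerboard views.

    images: list of grayscale views of a checkerboard with 9x6 inner corners.
    Returns the camera matrix K and the distortion coefficients."""
    # 3-D corner coordinates of the board in its own plane (z = 0)
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
    return K, dist
```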
In addition, if a positional deviation exists between the different sensor images, fusing the whole images would further amplify the deviation. Therefore, in step S2 the matching process is as follows: feature points are first detected with a feature extraction algorithm such as SIFT, ORB, or SURF, and then matched; from the matched corresponding points, the homography matrix between the two images is computed, and the images are aligned using this matrix. A homography matrix can be computed from four pairs of corresponding points, and once it is obtained, the source images can be registered and aligned.
In step S2, before fusion, a feature-based image registration method is used to reduce the geometric spatial differences between different sensor images: a mapping transformation model between the sensor images is established, and the pixels of one image are mapped onto the pixels of the other through this model.
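A minimal sketch of this feature-based registration using ORB features and a RANSAC homography follows; the choice of detector, the number of retained matches, and the RANSAC reprojection threshold are assumptions for illustration:

```python
import cv2
import numpy as np

def register_to_reference(src, ref):
    """Warp the grayscale image src into the pixel grid of ref.

    Detects ORB keypoints, matches descriptors with cross-checked
    Hamming distance, estimates a homography by RANSAC (at least four
    good correspondences are required), and warps src onto ref."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(src, None)
    kp2, des2 = orb.detectAndCompute(ref, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

    pts_src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_ref = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(pts_src, pts_ref, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(src, H, (ref.shape[1], ref.shape[0]))
```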
In some embodiments, as shown in fig. 3 and fig. 4, the image fusion part adopts a pyramid transformation method: the images are filtered and downsampled to obtain a pyramid-like layered structure, and data fusion is performed on each layer of the pyramid with a weighted fusion method to obtain a pyramid of fused image layers. As the resolution of the sampled layers gradually decreases, the following rule is used to fuse the lower-resolution but higher-frequency layers:

C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, otherwise C_F(i,j) = C_B(i,j)

wherein C_A(i,j) and C_B(i,j) represent the pixel values of the two sets of images at (i,j) and C_F(i,j) represents the pixel value of the fused image at (i,j); the coefficient with the larger absolute value is taken as the fusion result. For the higher-resolution but lower-frequency layers the formula

C_F(i,j) = (C_A(i,j) + C_B(i,j)) / 2

is used, taking the average as the fusion result. After the fusion result of each layer is obtained, the layers are inverse-transformed and superposed to obtain the overall fused image.
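The following is a minimal Laplacian-pyramid sketch of this fusion scheme; the number of pyramid levels is an assumption. Detail layers apply the larger-absolute-value rule and the coarsest residual applies the averaging rule, matching the two formulas above:

```python
import cv2
import numpy as np

def pyramid_fuse(a, b, levels=4):
    """Multi-scale weighted fusion of two 8-bit grayscale images.

    Detail (band-pass) layers: C_F = C_A if |C_A| >= |C_B| else C_B.
    Coarsest (low-frequency) residual: C_F = (C_A + C_B) / 2."""
    def laplacian_pyramid(img):
        g = img.astype(np.float32)
        pyr = []
        for _ in range(levels):
            down = cv2.pyrDown(g)
            up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
            pyr.append(g - up)   # band-pass detail layer
            g = down
        pyr.append(g)            # low-frequency residual
        return pyr

    pa, pb = laplacian_pyramid(a), laplacian_pyramid(b)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # max-abs rule
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)                   # averaging rule

    # inverse transform: collapse the pyramid from coarse to fine
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```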
In another embodiment, in step S3, overexposure and underexposure caused by drastic changes in lighting conditions seriously degrade the quality of the fused image; two sets of images with different exposure levels are therefore obtained by an automatic multi-exposure control method, and the Mertens algorithm with an improved pixel-weight calculation formula is applied to the two sets to obtain the HDR image. As shown in fig. 5, generating the HDR image specifically comprises the following steps:
the response measure E_{i,j,k} of each pixel is calculated first (the formula appears as an image in the original disclosure and is not reproduced here), where k=0 denotes the low-exposure image, k=1 denotes the high-exposure image, I_{i,j} denotes the gray value of the image at (i,j), and δ is a constant;
the initial weight of each pixel of the low-exposure and high-exposure images is then calculated as:

W_{i,j,k} = min{ w_C·C_{i,j,k} + w_E·E_{i,j,k}, 1 }

wherein C_{i,j,k} denotes the contrast weight of the low- or high-exposure image at (i,j), and w_C and w_E denote the weighting coefficients of the contrast and response terms respectively;
the numerical stability of the result is enhanced by normalizing the weights:

Ŵ_{i,j,k} = W_{i,j,k} / Σ_{k'=1..N} W_{i,j,k'}

wherein N denotes the number of differently exposed images and W_{i,j,k'} denotes the initial weight of the k'-th exposure image at (i,j);
finally the HDR image is obtained by the weighted sum

HDR_{i,j} = Σ_{k'=1..N} Ŵ_{i,j,k'}·I_{i,j,k'}

where I_{i,j,k'} denotes the gray value of the k'-th exposure image at (i,j).
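A two-exposure sketch of this weighted merge is given below. Because the patent's exact response measure is not reproduced in this text, the code substitutes a common Gaussian well-exposedness term; the Laplacian contrast measure and the coefficients w_C and w_E are likewise illustrative assumptions:

```python
import cv2
import numpy as np

def merge_exposures(low, high, w_c=1.0, w_e=1.0, eps=1e-6):
    """Weighted merge of a low- and a high-exposure grayscale image.

    Contrast C: absolute Laplacian response. Response E: Gaussian
    well-exposedness around mid-gray (a stand-in for the patent's
    formula). Weights are clamped to 1, normalized over the N = 2
    exposures for numerical stability, then used in a weighted sum."""
    stack = [low.astype(np.float32) / 255.0, high.astype(np.float32) / 255.0]
    weights = []
    for img in stack:
        contrast = np.abs(cv2.Laplacian(img, cv2.CV_32F))
        exposedness = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))
        w = np.minimum(w_c * contrast + w_e * exposedness, 1.0)
        weights.append(w)          # W = min{ w_C*C + w_E*E, 1 }

    total = weights[0] + weights[1] + eps      # normalization denominator
    hdr = (weights[0] * stack[0] + weights[1] * stack[1]) / total
    return (hdr * 255).astype(np.uint8)
```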
In some embodiments, as shown in fig. 6, the improved SGM algorithm uses a Census transform with illumination invariance in place of the mutual information (MI) of the original SGM algorithm to compute the disparity matching cost; cost aggregation is then performed over cross-based neighborhoods to improve the performance of the algorithm where gray-level and depth information are discontinuous; finally, disparity computation and disparity optimization follow the steps of the SGM algorithm.
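A minimal sketch of the Census transform and its Hamming-distance matching cost, the cost measure named above, follows; the 5x5 window is an assumption, and cost aggregation plus the remaining SGM steps are omitted:

```python
import numpy as np

def census_transform(img, window=5):
    """Encode each pixel as a bit string of neighbor-vs-center
    comparisons inside a window; the code depends only on local
    intensity ordering, which gives illumination invariance."""
    h, w = img.shape
    r = window // 2
    census = np.zeros((h, w), dtype=np.uint32)   # 24 bits fit in uint32
    center = img[r:h - r, r:w - r]
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = img[r + dy:h - r + dy, r + dx:w - r + dx]
            census[r:h - r, r:w - r] = (
                (census[r:h - r, r:w - r] << 1) | (shifted < center))
    return census

def hamming_cost(c1, c2):
    """Per-pixel matching cost: number of differing Census bits."""
    x = np.bitwise_xor(c1, c2)
    cost = np.zeros(x.shape, dtype=np.uint8)
    while np.any(x):
        cost += (x & 1).astype(np.uint8)
        x >>= 1
    return cost
```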
According to the invention, the binocular RGB camera and the DVS camera are calibrated, and the relative positions among the sensors are kept unchanged during data acquisition after calibration is completed, so that the conversion relation among the sensor coordinate systems is prevented from being damaged. Before fusing the RGB images and DVS images, feature-based registration of the images is performed to reduce the geometrical spatial differences between the different sensor images. The registered images are subjected to pyramid transformation to obtain a layered structure, each layer is fused by using a weighted fusion method, and the whole fused image is obtained by an inverse transformation superposition mode. On the basis of the fusion, the HDR image is obtained from two differently exposed images using the Mertens algorithm after modification. Finally, a modified binocular stereo matching algorithm, namely a semi-global matching algorithm (SGM), is used for generating a high-quality disparity map from the HDR image.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention; changes, modifications, substitutions and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.
It is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is neither necessary nor possible to enumerate all embodiments exhaustively here. Any modification, equivalent replacement, improvement, etc. that comes within the spirit and principles of the invention is intended to be protected by the following claims.

Claims (7)

1. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene is characterized by comprising the following steps:
s1, deploying a binocular RGB camera and a DVS camera, and calibrating the binocular RGB camera and the DVS camera;
s2, acquiring RGB images and DVS images of a binocular camera in a scene, and performing multi-scale weighted fusion after registration;
s3, generating an HDR image aiming at computer vision for the fused image; the quality of the fused image is seriously affected due to the problems of overexposure and underexposure which occur in severe light condition change, two groups of images with different exposure degrees are obtained through an automatic multi-exposure control method, and then an HDR image is obtained by applying a Mertens algorithm with an improved pixel weight calculation formula to the two groups of images;
the generation of the HDR image specifically comprises the following steps:
the response measure E_{i,j,k} of each pixel is calculated first (the formula appears as an image in the original disclosure and is not reproduced here), where k=0 denotes the low-exposure image, k=1 denotes the high-exposure image, I_{i,j} denotes the gray value of the image at (i,j), and δ is a constant;
the initial weight of each pixel of the low-exposure and high-exposure images is then calculated as:
W_{i,j,k} = min{ w_C·C_{i,j,k} + w_E·E_{i,j,k}, 1 }
wherein C_{i,j,k} denotes the contrast weight of the low- or high-exposure image at (i,j), and w_C and w_E denote the weighting coefficients of the contrast and response terms respectively;
the numerical stability of the result is enhanced by normalizing the weights:
Ŵ_{i,j,k} = W_{i,j,k} / Σ_{k'=1..N} W_{i,j,k'}
wherein N denotes the number of differently exposed images and W_{i,j,k'} denotes the initial weight of the k'-th exposure image at (i,j);
finally, the HDR image is obtained by the weighted sum HDR_{i,j} = Σ_{k'=1..N} Ŵ_{i,j,k'}·I_{i,j,k'}, where I_{i,j,k'} denotes the gray value of the k'-th exposure image at (i,j);
s4, generating a parallax image based on the HDR image generated in the step S3 by using an improved binocular stereo matching algorithm SGM; in the improved SGM algorithm, census transformation with illumination invariance is used for replacing mutual information MI in the original SGM algorithm to calculate parallax matching cost, then, based on cross neighborhood, cost aggregation is carried out to improve the performance of the algorithm under the condition that gray information and depth information are discontinuous, and finally, parallax calculation and parallax optimization are carried out according to steps in the SGM algorithm.
2. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene according to claim 1, wherein after the binocular RGB camera and the DVS camera are deployed, the relative positions of the sensors remain unchanged across repeated data acquisitions, so calibration is needed only once; the calibration mainly comprises intrinsic calibration of the binocular RGB camera and the DVS camera, and extrinsic calibration of the RGB camera and DVS camera combination.
3. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene according to claim 2, wherein edge information of the picture is enhanced using the DVS camera; since the data acquired by the DVS camera is an asynchronous event stream, the DVS does not output standard image frames; frames are constructed by setting a fixed time-slice length Δt and continuously accumulating trigger events within the slice, stacking the event streams accumulated over the period, and finally passing them through an event screen plane of thickness d to obtain an image frame.
4. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene according to claim 2, wherein when the RGB camera is calibrated for intrinsic parameters with a checkerboard calibration plate, the camera position is fixed, a group of images is captured with the checkerboard plate held still, and multiple groups are obtained by moving the plate; during DVS intrinsic calibration, a continuously refreshing display screen is used as the event-triggering source, and by displaying the checkerboard calibration plate on the screen, the intrinsic calibration of the DVS camera and the RGB camera is carried out simultaneously.
5. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene according to claim 1, wherein in step S2 the matching process comprises: first detecting feature points with a feature extraction algorithm such as SIFT, ORB, or SURF, and then matching them; after the matched corresponding points between the images are obtained, computing the homography matrix of the two images from the corresponding-point information and aligning the images using the homography matrix; the homography matrix is computed from four pairs of corresponding points, and after it is obtained, the source images are registered and aligned.
6. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene according to claim 5, wherein in step S2, before fusion, a feature-based image registration method is used to reduce the geometric spatial differences between different sensor images: a mapping transformation model between the sensor images is established, and the pixels of one image are mapped onto the pixels of the other through this model.
7. The disparity map enhancement method based on fusion of RGB and DVS images in a high dynamic range scene according to claim 6, wherein the image fusion part adopts a pyramid transformation method: the images are filtered and downsampled to obtain a pyramid-like layered structure, and data fusion is performed on each layer of the pyramid with a weighted fusion method to obtain a pyramid of fused image layers; as the resolution of the sampled layers gradually decreases, the following rule is used to fuse the lower-resolution but higher-frequency layers:
C_F(i,j) = C_A(i,j) if |C_A(i,j)| ≥ |C_B(i,j)|, otherwise C_F(i,j) = C_B(i,j)
wherein C_A(i,j) and C_B(i,j) represent the pixel values of the two sets of images at (i,j) and C_F(i,j) represents the pixel value of the fused image at (i,j); for the higher-resolution but lower-frequency layers the formula
C_F(i,j) = (C_A(i,j) + C_B(i,j)) / 2
is used, taking the average as the fusion result; after the fusion result of each layer is obtained, the layers are inverse-transformed and superposed to obtain the overall fused image.
CN202011283187.2A 2020-11-17 2020-11-17 Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene Active CN112396562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011283187.2A CN112396562B (en) 2020-11-17 2020-11-17 Disparity map enhancement method based on fusion of RGB and DVS images in high dynamic range scene


Publications (2)

Publication Number Publication Date
CN112396562A CN112396562A (en) 2021-02-23
CN112396562B (en) 2023-09-05



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933617A * 2016-05-19 2016-09-07 PLA Academy of Equipment High dynamic range image fusion method used for overcoming influence of dynamic problem
CN111062873A * 2019-12-17 2020-04-24 Dalian University of Technology Parallax image splicing and visualization method based on multiple pairs of binocular cameras
CN111260597A * 2020-01-10 2020-06-09 Dalian University of Technology Parallax image fusion method of multiband stereo camera
CN111833393A * 2020-07-05 2020-10-27 Guilin University of Electronic Technology Binocular stereo matching method based on edge information




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant