CN115861063A - Depth estimation-based panoramic stitching method - Google Patents
- Publication number
- CN115861063A (application CN202211512091.8A)
- Authority
- CN
- China
- Prior art keywords
- calibration
- matrix
- depth estimation
- view
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention relates to the technical field of automatic driving and discloses a depth estimation-based surround-view stitching method comprising the following steps. S1: setting up a calibration board, turning on the four surround-view cameras, and collecting calibration images; S2: performing joint calibration on the calibration images and computing each camera's intrinsic matrix and distortion coefficients; S3: undistorting the image data according to the intrinsic matrix and distortion coefficients; S4: applying a projective transformation to the undistorted calibration images and computing a homography matrix; S5: computing a projection matrix from the intrinsic matrix and distortion coefficients, and computing the homography-transformed projection matrix from the homography matrix; S6: transforming the calibration images into bird's-eye views through the homography-transformed projection matrix; S7: matching the overlapping regions of the cameras' bird's-eye views based on depth estimation and stitching the matched views. The method addresses the prior-art problems of distortion in the overlapping regions of adjacent cameras and poor stitching quality.
Description
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a depth estimation-based surround-view stitching method.
Background
A 360-degree surround-view stitching system provides the vehicle with blind-spot-free, all-around video information and gives the driver more intuitive environment perception, greatly improving driving safety.
In a typical surround-view system, four fisheye cameras are mounted at the front, rear, left and right of the vehicle body. An image-stitching method fuses the images of the four cameras into a surround-view top view that shows the vehicle's surroundings over a 360-degree viewing range, eliminating dead angles and blind spots.
Image stitching aligns the overlapping information of two or more images at their spatial positions and merges them into a seamless, high-definition image; its core is the registration and fusion of the overlapping regions of adjacent cameras. Conventional registration algorithms fall largely into the following two types:
1. Region-based image stitching
This class of methods takes image pixel gray values as the object of study: a region of the image to be matched is compared with the corresponding region of the reference image, gray-value similarity is computed with an appropriate mathematical measure, and the overlapping region and relative position of the two images are taken where the correlation coefficient is largest, completing the stitching. Region-based stitching relies on similarity computation and has difficulty handling complex scenes.
2. Image stitching based on depth features
This class of methods registers images by their deep image features: the features of the corresponding overlapping regions are matched, mainly in two stages, feature extraction and feature matching. First, features with large grayscale variation are extracted from the two images to be stitched; then features with a corresponding mapping relationship are selected from the feature sets and matched. Such stitching algorithms are more robust and, because they do not process the whole image, compress the amount of image information, so the computation is smaller than that of region-based stitching.
However, because the cameras are widely spaced and illumination intensities differ, objects exhibit large deformation and color differences across views. Whether a region-based or a feature-based stitching algorithm is used, distortion appears in the overlapping regions of adjacent cameras and the stitching quality is poor.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a depth estimation-based surround-view stitching method that solves the prior-art problems of distortion in the overlapping regions of adjacent cameras and poor stitching quality.
The invention solves the technical problems by the following technical means:
The application provides a depth estimation-based surround-view stitching method comprising the following steps:
S1: setting up a calibration board, turning on the four surround-view cameras, and collecting calibration images;
S2: performing joint calibration on the calibration images and computing each camera's intrinsic matrix and distortion coefficients;
S3: undistorting the image data according to the intrinsic matrix and distortion coefficients;
S4: applying a projective transformation to the undistorted calibration images and computing a homography matrix;
S5: computing a projection matrix from the intrinsic matrix and distortion coefficients, and computing the homography-transformed projection matrix from the homography matrix;
S6: transforming the calibration images into bird's-eye views through the homography-transformed projection matrix;
S7: matching the overlapping regions of the cameras' bird's-eye views based on depth estimation and stitching the matched views.
In some optional embodiments, the calibration board in step S2 is a checkerboard.
In some optional embodiments, the method of step S6 is:
S61: presetting the output field-of-view range of the bird's-eye view;
S62: based on the preset field-of-view range, transforming the calibration images into bird's-eye views of matching size through the homography-transformed projection matrix.
In some optional embodiments, the method of step S61 is:
S611: presetting the width innerShiftWidth between the inner edge of the calibration board and the left and right sides of the vehicle;
S612: presetting the height innerShiftHeight between the inner edge of the calibration board and the front and rear ends of the vehicle;
S613: presetting the field-of-view width shiftWidth outside the calibration board;
S614: presetting the field-of-view height shiftHeight outside the calibration board;
S615: calculating the width and height of the bird's-eye view from innerShiftWidth, innerShiftHeight, shiftWidth and shiftHeight.
In some optional embodiments, the method of step S7 is:
S71: extracting the key feature points of each bird's-eye-view overlapping region based on depth estimation;
S72: matching the overlapping regions of adjacent bird's-eye views according to the key feature points;
S73: stitching the matched bird's-eye views.
In some optional embodiments, the method of step S71 is:
S711: taking two adjacent bird's-eye views as a data set;
S712: learning a corresponding feature-point matching map from the data set through a deep neural network model;
S713: estimating the key feature points in the feature-point matching map by non-maximum suppression, obtaining the key feature points of the bird's-eye-view overlapping region.
In some optional embodiments, the method of step S711 is: extracting the data images of the bird's-eye-view overlapping regions and taking the data images corresponding to two adjacent bird's-eye views as the data set.
In some optional embodiments, the method of step S72 is:
S721: brute-force matching the key feature points of two adjacent bird's-eye views to obtain accurate matching points;
S722: according to the matching points, mapping the overlapping pixels of the adjacent bird's-eye views one to one, realizing the matching of their overlapping regions.
In some optional embodiments, the method further comprises:
feeding the key feature points for which brute-force matching failed back into the network model and iteratively updating the model to obtain a new feature-point matching map.
The invention has the beneficial effects that:
according to the method, the key characteristic points of the overlapping areas of the two adjacent aerial views are estimated through the depth network model, so that the pixels of the overlapping areas can be mapped one by one, and the problem of distortion of the overlapping areas is effectively solved.
Drawings
FIG. 1 is a flow chart of the depth estimation-based surround-view stitching method of the present invention;
FIG. 2 is a joint-calibration diagram of the depth estimation-based surround-view stitching method of the present invention;
FIG. 3 is a bird's-eye-view generation diagram of the depth estimation-based surround-view stitching method.
Detailed Description
The following describes embodiments of the present invention by way of specific examples; those skilled in the art will readily appreciate the advantages and effects of the invention from this disclosure. It should be noted that the drawings provided with the following embodiments are illustrative schematics rather than actual drawings and are not to be construed as limiting the invention. To better illustrate the embodiments, some components in the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; certain well-known structures and their descriptions may likewise be omitted, as will be understood by those skilled in the art.
In the description of the present invention, terms such as "upper", "lower", "left", "right", "front" and "rear" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative rather than limiting, and those skilled in the art can understand their specific meanings according to the circumstances.
As shown in FIGS. 1 to 3, the present application provides a depth estimation-based surround-view stitching method, comprising:
step 1: setting a calibration plate, opening a four-way camera looking around, and collecting a calibration image; in this embodiment, the calibration board is a checkerboard, the collected calibration image is a calibration image with a checkerboard, the calibration board is 6mx10m, the size of each black and white square is 40cmx40cm, and the size of each square where each circular pattern is located is 80cmx80cm. In the embodiment, the camera is preferably a fisheye camera.
Step 2: performing joint calibration on the calibration images and computing each camera's intrinsic matrix and distortion coefficients. In this embodiment, corresponding points are selected manually, and the intrinsic matrix and distortion coefficients of each camera are calculated from the coordinates of those corresponding points.
and step 3: carrying out distortion removal processing on the image data according to the internal reference matrix and the distortion coefficient, and eliminating the imaging distortion of the camera;
and 4, step 4: performing projection transformation on the de-distorted calibration image, and calculating to obtain a homography matrix, wherein in the embodiment, the expression of the homography matrix isWherein K is a camera internal reference matrix, and R, T is an external reference between two cameras.
Step 5: computing a projection matrix from the intrinsic matrix and distortion coefficients, and computing the homography-transformed projection matrix from the homography matrix.
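Steps 4 and 5 ultimately reduce to solving for a projective transform from point correspondences (such as the manually selected points of step 2). The patent does not give the solver; a minimal direct-linear-transform (DLT) sketch, with the hypothetical helper `dlt_homography`, is:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (each (N, 2), N >= 4)
    via the direct linear transform: stack two linear constraints per
    correspondence and take the SVD null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)      # null vector = flattened homography
    return H / H[2, 2]            # normalise so H[2, 2] == 1

if __name__ == "__main__":
    # Four picked correspondences related by a known translation.
    src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    dst = src + np.array([2.0, 3.0])
    H = dlt_homography(src, dst)
    print(np.allclose(H, [[1, 0, 2], [0, 1, 3], [0, 0, 1]], atol=1e-6))
```

With exact correspondences the smallest singular value is zero and the recovered H matches the generating transform up to scale.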
step 6: and transforming the calibration image into a bird's-eye view through the projection matrix after homography transformation, wherein the method comprises the following steps:
step 61: presetting an aerial view output visual field range, wherein the method comprises the following steps:
step 611: presetting the width innershiftWidth of the inner edge of the calibration plate and the left side and the right side of the vehicle;
step 612: presetting the height innershiftHeight between the inner edge of the calibration plate and the front end and the rear end of the vehicle;
step 613: presetting the view width shiftWidth at the outer side of the calibration plate;
step 614: presetting the height of the field of view shiftHeight outside the calibration plate;
step 615: calculating the width and height of the bird's eye view according to the inner steps of highwidth, innershiftHeight, shiftWidth and shiftHeight. In this embodiment, the width totalWidth =600+2 shiftwidth of the bird's eye view; height totalHeight =1000+2 shiftheight of bird's eye view. In this embodiment, the view width shiftWidth and the view height shiftHeight determine the view range of the bird's-eye view, and the numerical values may be set according to actual needs, and it should be noted that: the larger the value of the projection image, the larger the area viewed from the overhead view, and accordingly, the more serious the deformation of the object at a distance after projection.
Step 62: based on the preset field-of-view range, transforming the calibration images into bird's-eye views of matching size through the homography-transformed projection matrix.
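The field-of-view bookkeeping of steps 611 to 615 and the per-point projection of step 62 can be sketched as follows. The 600 and 1000 constants mirror the embodiment's totalWidth/totalHeight formulas, and `birdeye_size`/`warp_points` are hypothetical helper names, not functions from the patent:

```python
import numpy as np

def birdeye_size(shift_width, shift_height):
    """Output size of the bird's-eye view, following the embodiment's
    formulas: totalWidth = 600 + 2*shiftWidth, totalHeight = 1000 + 2*shiftHeight."""
    return 600 + 2 * shift_width, 1000 + 2 * shift_height

def warp_points(H, pts):
    """Apply a 3x3 homography to (N, 2) pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

if __name__ == "__main__":
    print(birdeye_size(shift_width=300, shift_height=200))  # (1200, 1400)
    # A pure-translation homography shifts points accordingly.
    H = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
    print(warp_points(H, np.array([[10.0, 20.0]])))
```

A full implementation would warp every pixel (or, equivalently, backward-map each output pixel) rather than individual points; the projective arithmetic is the same.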
Step 7: matching the overlapping regions of the cameras' bird's-eye views based on depth estimation and stitching the matched views.
Step 71: extracting the key feature points of each bird's-eye-view overlapping region based on depth estimation, as follows:
Step 711: taking two adjacent bird's-eye views as a data set; specifically, extracting the data images of the bird's-eye-view overlapping regions and taking the data images corresponding to two adjacent bird's-eye views as the data set.
Step 712: learning a corresponding feature-point matching map from the data set through a deep neural network model.
Step 713: estimating the key feature points in the feature-point matching map by non-maximum suppression, obtaining the key feature points of the bird's-eye-view overlapping region.
Step 72: matching the overlapping regions of adjacent bird's-eye views according to the key feature points, as follows:
Step 721: brute-force matching the key feature points of two adjacent bird's-eye views to obtain accurate matching points;
Step 722: according to the matching points, mapping the overlapping pixels of the adjacent bird's-eye views one to one, realizing the matching of their overlapping regions.
In this embodiment, step 72 further includes: feeding the key feature points for which brute-force matching failed back into the network model and iteratively updating the model to obtain a new feature-point matching map.
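Step 721's brute-force matching can be sketched with a mutual-nearest-neighbour check, a common way to keep only reliable matches; the toy 2-D descriptors and the helper name `brute_force_match` are assumptions for illustration:

```python
import numpy as np

def brute_force_match(desc_a, desc_b):
    """Brute-force matching with a mutual-nearest-neighbour check:
    a pair (i, j) is kept only if j is i's closest descriptor in B
    AND i is j's closest descriptor in A."""
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)   # best B-match for each A descriptor
    nn_ba = d.argmin(axis=0)   # best A-match for each B descriptor
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

if __name__ == "__main__":
    a = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
    b = np.array([[1.1, 0.9], [0.1, 0.0]])
    print(brute_force_match(a, b))  # [(0, 1), (1, 0)]
```

Here the descriptor at index 2 of `a` has no partner in `b` that prefers it back, so it is rejected; in the patent's terms such unmatched key feature points would be fed back into the network model.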
Step 73: stitching the matched bird's-eye views to obtain a panoramic bird's-eye view.
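After the overlap pixels are mapped one to one (step 722), the aligned views still have to be composited into the final panorama. The patent does not describe the compositing step; a common choice is a linear cross-fade across the overlap, sketched here with the hypothetical helper `blend_overlap`:

```python
import numpy as np

def blend_overlap(img_a, img_b, overlap):
    """Linearly cross-fade two horizontally aligned views over `overlap`
    columns: img_a fades out left-to-right while img_b fades in."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # per-column weight
    a_part = img_a[:, -overlap:].astype(float)
    b_part = img_b[:, :overlap].astype(float)
    blended = alpha * a_part + (1.0 - alpha) * b_part
    return np.hstack([img_a[:, :-overlap],
                      blended.astype(img_a.dtype),
                      img_b[:, overlap:]])

if __name__ == "__main__":
    a = np.full((2, 4, 3), 100, dtype=np.uint8)
    b = np.full((2, 4, 3), 200, dtype=np.uint8)
    print(blend_overlap(a, b, overlap=2).shape)  # (2, 6, 3)
```

A cross-fade also mitigates the brightness and colour differences between adjacent cameras noted in the Background section.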
Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims. Techniques, shapes and configurations not described in detail herein are known in the art.
Claims (9)
1. A depth estimation-based surround-view stitching method, characterized by comprising the following steps:
S1: setting up a calibration board, turning on the four surround-view cameras, and collecting calibration images;
S2: performing joint calibration on the calibration images and computing each camera's intrinsic matrix and distortion coefficients;
S3: undistorting the image data according to the intrinsic matrix and distortion coefficients;
S4: applying a projective transformation to the undistorted calibration images and computing a homography matrix;
S5: computing a projection matrix from the intrinsic matrix and distortion coefficients, and computing the homography-transformed projection matrix from the homography matrix;
S6: transforming the calibration images into bird's-eye views through the homography-transformed projection matrix;
S7: matching the overlapping regions of the cameras' bird's-eye views based on depth estimation and stitching the matched views.
2. The depth estimation-based surround-view stitching method according to claim 1, wherein the calibration board in step S2 is a checkerboard.
3. The depth estimation-based surround-view stitching method according to claim 1, wherein the method of step S6 is:
S61: presetting the output field-of-view range of the bird's-eye view;
S62: based on the preset field-of-view range, transforming the calibration images into bird's-eye views of matching size through the homography-transformed projection matrix.
4. The depth estimation-based surround-view stitching method according to claim 3, wherein the method of step S61 is:
S611: presetting the width innerShiftWidth between the inner edge of the calibration board and the left and right sides of the vehicle;
S612: presetting the height innerShiftHeight between the inner edge of the calibration board and the front and rear ends of the vehicle;
S613: presetting the field-of-view width shiftWidth outside the calibration board;
S614: presetting the field-of-view height shiftHeight outside the calibration board;
S615: calculating the width and height of the bird's-eye view from innerShiftWidth, innerShiftHeight, shiftWidth and shiftHeight.
5. The depth estimation-based surround-view stitching method according to claim 1, wherein the method of step S7 is:
S71: extracting the key feature points of each bird's-eye-view overlapping region based on depth estimation;
S72: matching the overlapping regions of adjacent bird's-eye views according to the key feature points;
S73: stitching the matched bird's-eye views.
6. The depth estimation-based surround-view stitching method according to claim 5, wherein the method of step S71 is:
S711: taking two adjacent bird's-eye views as a data set;
S712: learning a corresponding feature-point matching map from the data set through a deep neural network model;
S713: estimating the key feature points in the feature-point matching map by non-maximum suppression, obtaining the key feature points of the bird's-eye-view overlapping region.
7. The depth estimation-based surround-view stitching method according to claim 6, wherein the method of step S711 is: extracting the data images of the bird's-eye-view overlapping regions and taking the data images corresponding to two adjacent bird's-eye views as the data set.
8. The depth estimation-based surround-view stitching method according to claim 6, wherein the method of step S72 is:
S721: brute-force matching the key feature points of two adjacent bird's-eye views to obtain accurate matching points;
S722: according to the matching points, mapping the overlapping pixels of the adjacent bird's-eye views one to one, realizing the matching of their overlapping regions.
9. The depth estimation-based surround-view stitching method according to claim 8, characterized in that the method further comprises:
feeding the key feature points for which brute-force matching failed back into the network model and iteratively updating the model to obtain a new feature-point matching map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211512091.8A CN115861063A (en) | 2022-11-29 | 2022-11-29 | Depth estimation-based panoramic stitching method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115861063A true CN115861063A (en) | 2023-03-28 |
Family
ID=85667846
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |