CN107424120A - A kind of image split-joint method in panoramic looking-around system - Google Patents
An image stitching method in a panoramic surround-view system
- Publication number: CN107424120A
- Application number: CN201710237136.8A
- Authority
- CN
- China
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images (G06T3/40—Scaling of whole images or parts thereof; G06T3/00—Geometric image transformations in the plane of the image)
- G06F18/2411—Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines (G06F18/24—Classification techniques; G06F18/00—Pattern recognition)
- G06T5/73—Deblurring; Sharpening (G06T5/00—Image enhancement or restoration)
- G06T5/80—Geometric correction (G06T5/00—Image enhancement or restoration)
- G06T2207/20221—Image fusion; Image merging (G06T2207/20—Special algorithmic details; G06T2207/00—Indexing scheme for image analysis or image enhancement)
Abstract
The present invention is an image stitching method for a panoramic surround-view system, in the field of computer vision. The method comprises the following steps: 1) correct fisheye image distortion by combining a spaced-line training model with an SVM algorithm; 2) build an image top-view transformation lookup table to achieve fast top-view transformation and obtain a bird's-eye view; 3) generate a panoramic stitching mapping table, and rapidly obtain the panoramic surround bird's-eye view by looking up this table; 4) resolve the brightness difference between each pair of stitched images with an improved brightness harmonization algorithm, then further eliminate the stitching seams with weighted-average image fusion. The method effectively reduces system overhead, achieves seamless image stitching quickly, and produces a panoramic surround bird's-eye view.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to an image stitching method for a panoramic surround-view system.
Background
With the rapid development of the national economy, the total number of automobiles keeps growing and the traffic environment becomes increasingly congested, so vehicles must often operate in narrow spaces. When a car passes through a narrow road or dense traffic to park in a cramped lot, the driver's limited field of view makes collisions likely, causing unnecessary losses. As the cost of digital processors, cameras and similar components keeps falling, driver-assistance systems based on panoramic surround-view imaging are becoming the mainstream of future automotive vision assistance, and stitching the surround-view images is the key technique of such systems. The stitched panoramic image is output to a display so that the driver gains a comprehensive view of the surroundings of the vehicle body, reducing the occurrence of accidents.
Panoramic surround-view image stitching based on fisheye cameras roughly comprises: image distortion correction, top-view transformation from the perspective view to the top view, and stitching of multiple images. A fisheye lens captures a wide viewing angle and a complete hemispherical image, but its complex optical structure distorts the fisheye image severely, so distortion correction must be performed before stitching. The top-view transformation can be realized with a spatial coordinate-system transformation; the method is simple and effective and can be computed once the camera installation parameters (such as position and angle) are determined. However, if the installation contains errors, transforming directly from the nominal installation parameters will not yield a correct top view. Region-based and feature-based image stitching methods can both complete image stitching, but both require a certain range of overlap between the images being stitched, and the images must not be too distorted. Because the top-view transformed images in a panoramic surround-view pipeline retain some distortion, these two stitching methods cannot obtain a good result, and may even fail to stitch the surround-view images at all.
Disclosure of Invention
The invention aims to provide an image stitching method for a panoramic surround-view system that performs seamless stitching quickly and effectively and generates a panoramic surround bird's-eye view. The generated bird's-eye view displays the surroundings of the vehicle over a 360-degree viewing angle, eliminating blind zones and dead angles, and therefore has good application value.
To solve the above technical problems, the technical scheme adopted by the invention comprises the following steps:
step 1) correct fisheye image distortion by combining a spaced-line training model with an SVM algorithm, comprising:
s1.1, construct the spaced-line training model;
s1.2, correct the fisheye image using the SVM algorithm;
step 2) transform the perspective view into a top view using the relationships among the world coordinate system, the camera coordinate system and the image coordinate system combined with a backward mapping method; for the case of camera installation errors, re-correct the top-view image according to the deflection angles of the camera coordinate system about the X_C, Y_C, Z_C axes; then build an image top-view transformation lookup table to achieve fast top-view transformation and obtain the bird's-eye view, comprising:
s2.1, establish the world coordinate system and the camera coordinate system;
s2.2, solve the top-view transformation with the backward mapping method;
s2.3, for the case of camera installation errors, construct and minimize an objective function of the deflection angles dα, dβ, dγ about the X_C, Y_C, Z_C axes to obtain the optimal value of each deflection angle, then re-correct the top-view image to obtain a more accurate top view;
s2.4, build the image top-view transformation lookup table to achieve fast top-view transformation and obtain the bird's-eye view;
step 3) determine the stitching seams, stitch the vehicle's forward and lateral images and the panoramic image, then generate a panoramic stitching mapping table; the panoramic surround bird's-eye view is obtained quickly by looking up the panoramic stitching mapping table;
s3.1, stitch the forward and lateral images to obtain the panoramic bird's-eye view;
s3.2, build the panoramic stitching mapping table to complete the stitching of the panoramic image;
and step 4) resolve the brightness difference between each pair of stitched images using an improved brightness harmonization algorithm, then further eliminate the stitching seams using weighted-average image fusion.
As a further improvement of the technical scheme of the invention, in step 1),
the spaced-line training model comprises a plurality of horizontal straight lines; the width of each line increases by a factor of 1.3 from bottom to top, the spacing between adjacent lines likewise increases by a factor of 1.3 from bottom to top, and a plurality of cross lines are arranged on one of the horizontal lines;
correcting the fisheye image with the SVM algorithm comprises: taking as the input and output of the SVM trainer, respectively, the radial distance of an image point in physical space and the radial distance of the same point in the corresponding fisheye image; classifying and fitting nonlinear functions on the trainer's input and output data; training repeatedly with multiple groups of samples; and fitting a conversion model by regression, thereby establishing the mapping between pixel coordinates of the corrected image and the distorted fisheye image.
As a further improvement of the technical solution of the present invention, in step S2.1, suppose a point P_W in the world coordinate system has coordinates (x_W, y_W, z_W) and is denoted P_C(x_C, y_C, z_C) in the camera coordinate system; the relationship between the two is given by formula (1):

$$P_C=\begin{bmatrix}x_C\\y_C\\z_C\end{bmatrix}=\begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&-\sin\alpha&\cos\alpha\end{bmatrix}\begin{bmatrix}x_W\\y_W\\z_W-h\end{bmatrix}=R(P_W+T)\tag{1}$$

where α is the angle by which the camera coordinate system is rotated about the X axis relative to the world coordinate system, h = |O_C O_W| is the vertical distance between the camera optical center and the ground, R is the 3×3 rotation matrix, and T = [0 0 −h]^T is the translation vector;
in step S2.2, let p be a pixel on the target image after top-view transformation and p′ the corresponding pixel on the original image; the top-view transformation is solved in the following main steps:
(1) compute the point P_W in the world coordinate system corresponding to pixel p: let its coordinates be (x_W, y_W, z_W), with the optical axis passing through the image center; if p is the pixel in column u and row v of the target image, written p(u, v), then:

$$\begin{cases}x_W=\left(u-\dfrac{w_p}{2}\right)dx\\[4pt] y_W=-\left(v-\dfrac{h_p}{2}\right)dy+l\\[4pt] z_W=0\end{cases}\tag{2}$$

where w_p and h_p are the width and height of the target image in pixels; dx and dy are the physical pixel sizes of the target image in the horizontal and vertical directions; and l is the distance from the world origin O_W to the intersection of the camera optical axis with the ground. If the angle between the optical axis and the horizontal plane is θ and the vertical distance from the optical center to the ground is h, then l = h·cot θ;
(2) compute the coordinates P_C(x_C, y_C, z_C) of point P_W in the camera coordinate system using formula (1);
(3) compute the projection P_i(x_i, y_i) of P_C(x_C, y_C, z_C) onto the image plane:

$$\begin{cases}x_i=-f\,\dfrac{x_C}{z_C}\\[4pt] y_i=-f\,\dfrac{y_C}{z_C}\end{cases}\tag{3}$$

where f is the camera focal length;
(4) compute the pixel p′(u′, v′) in the original image (the original perspective view) corresponding to the projection point P_i(x_i, y_i):

$$\begin{cases}u'=\dfrac{x_i}{dx'}+\dfrac{w_p'}{2}\\[4pt] v'=-\dfrac{y_i}{dy'}+\dfrac{h_p'}{2}\end{cases}\tag{4}$$

where w_p′ and h_p′ are the width and height of the input image in pixels, and dx′ and dy′ are the physical pixel sizes of the input image in the horizontal and vertical directions.
As a further improvement of the technical solution of the present invention, in step S2.3 the correction is as follows: suppose the camera is installed with errors such that the camera coordinate system is deflected about the X_C, Y_C, Z_C axes by angles dα, dβ, dγ respectively; the computation in step S2.2 of the coordinates P_C(x_C, y_C, z_C) of point P_W in the camera coordinate system then becomes:

$$P_C=R_3R_2R_1(P_W+T)\tag{5}$$

where

$$R_1=\begin{bmatrix}1&0&0\\0&\cos(\alpha+d\alpha)&\sin(\alpha+d\alpha)\\0&-\sin(\alpha+d\alpha)&\cos(\alpha+d\alpha)\end{bmatrix}\tag{6}$$

$$R_2=\begin{bmatrix}\cos d\beta&0&-\sin d\beta\\0&1&0\\\sin d\beta&0&\cos d\beta\end{bmatrix}\tag{7}$$

$$R_3=\begin{bmatrix}\cos d\gamma&\sin d\gamma&0\\-\sin d\gamma&\cos d\gamma&0\\0&0&1\end{bmatrix}\tag{8}$$

an objective function of dα, dβ, dγ is established, the optimal deflection angles dα, dβ, dγ of the camera are solved by varying the three angles linearly and minimizing the function value, and the top-view transformation method of step S2.2 is then applied, with the solving formula (1) of its second step replaced by formula (5) during the conversion.
As a further improvement of the technical solution of the present invention, in step S2.4 the top-view transformation lookup table stores, for every pixel of the transformed top view, the coordinates of the corresponding pixel in the original perspective view; the system realizes the conversion from the perspective view to the top view, quickly, by querying this lookup table.
As a further improvement of the technical scheme of the invention, the stitching of the forward and lateral images comprises the following specific steps:
(1) the forward images comprise the front image and the rear image, and the lateral images comprise the left image and the right image; let the vehicle body length be C and its width K, the lateral image width H_1 meters and the forward image width H meters; two points P_1 and P_2 are calibrated in the common area covered by a forward image and a lateral image, with coordinates in the panoramic ground coordinate system denoted P_1(X_1, Y_1) and P_2(X_2, Y_2);
(2) determine the pixel coordinates R′_1 and R′_2 corresponding to points P_1 and P_2 in the top view of the lateral image;
(3) determine the pixel coordinates R″_1 and R″_2 corresponding to points P_1 and P_2 in the top view of the forward image;
(4) since P_1 and P_2 are collinear in the image coordinate systems of both the lateral and the forward image, the straight line R′_1R′_2 determined by R′_1 and R′_2 and the straight line R″_1R″_2 determined by R″_1 and R″_2 serve as the stitching seams of the lateral and the forward image, respectively;
(5) save the position of the stitching seam, cut the two images to be stitched according to it, join the top view of the forward image with the top view of the lateral image along the seam, and trim away the redundant parts of both images beyond the seam.
As a further improvement of the technical solution of the present invention, the panoramic image stitching comprises:
(1) setting the field of view of the panoramic surround bird's-eye view: let the output bird's-eye view have width Width and height Height; since the visible ranges in the X and Y directions are proportional, set the visible range in the Y direction to ViewRangeY with the corresponding scale factor scale = ViewRangeY / Width, so that the visible range in the X direction is ViewRangeX = scale·Height;
(2) generating the panoramic stitching mapping table: according to the configured field of view, store the parameters (the stitching seam positions, the Width and Height of the panoramic bird's-eye view, the visible range ViewRangeY in the Y direction and the visible range ViewRangeX in the X direction) in table form, establishing the panoramic stitching mapping table;
(3) completing the panoramic image stitching by table lookup, using the established panoramic stitching mapping table.
As a further improvement of the technical scheme of the invention, step 4) specifically comprises the following steps:
s4.1, eliminate the brightness difference between stitched images using an improved brightness harmonization algorithm, as follows:
(1) take one third of the common part of the two images as the overlap region;
(2) compute the sums S_1 and S_2 of the pixel values over the overlap region in each image;
(3) set Differ = S_1/S_2 and weight each pixel of one image by multiplying its value by Differ, obtaining a new pixel value R; if R > T, the original pixel value is kept unchanged; if R < T, the original pixel value is reassigned to T, where T is the empirical value 200;
and s4.2, eliminate the stitching seams produced during stitching with a weighted-average image fusion method, so that the image transitions smoothly.
Compared with the prior art, the invention has the following beneficial effects:
1. Fisheye distortion correction is performed by constructing a spaced-line training model and applying an SVM algorithm; the spaced-line model helps solve the problem of blurred edge information in the distortion-corrected image.
2. The perspective view is transformed into the top view using the relationships among the world, camera and image coordinate systems together with a backward mapping method; for the case of camera installation errors, the objective function of the deflection angles dα, dβ, dγ is minimized to obtain the optimal deflection angles, and fast top-view transformation is achieved by building an image top-view transformation lookup table.
3. A panoramic stitching mapping table for the panoramic surround bird's-eye view is generated, and the panoramic image is stitched by table lookup, effectively reducing the system running time.
4. The improved brightness harmonization algorithm effectively resolves the brightness difference between each pair of stitched images, and weighted-average image fusion further eliminates the stitching seams, giving the panoramic bird's-eye view a better visual effect.
Drawings
FIG. 1 is the overall flowchart of the algorithm described in the embodiment;
FIG. 2 is a diagram of the spaced-line training model according to the embodiment;
FIG. 3 is a fisheye image of the spaced-line training model according to the embodiment;
FIG. 4 is a diagram of the camera coordinate system and world coordinate system established during image stitching according to the embodiment;
FIG. 5 is a schematic view of the panoramic ground coordinate system according to the embodiment;
FIG. 6 is a flowchart of image stitching according to the embodiment;
FIG. 7 is a flowchart of panoramic image stitching according to the embodiment.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings. This embodiment provides an image stitching method for a panoramic surround-view system whose flow is shown in fig. 1, comprising the following steps:
step S1: correct fisheye image distortion by combining the spaced-line training model with the SVM algorithm.
A fisheye lens captures a wide viewing angle and a complete hemispherical image, but its complex optical structure distorts the fisheye image severely, so distortion correction is performed before image stitching. Specifically:
s1.1, construct the spaced-line training model.
In general, the edge information of a distortion-corrected image is blurred; the embodiment of the invention constructs a spaced-line training model to overcome this problem. As shown in fig. 2, the width of the straight lines increases by a factor of 1.3 from bottom to top, and the spacing between the lines also increases by a factor of 1.3, which ensures that clear intersection points can still be extracted at the edges of the fisheye image; a plurality of cross lines are arranged on one horizontal line of the model. Fig. 3 shows a fisheye image of the spaced-line training model.
s1.2, correct the fisheye image using the SVM algorithm.
When correcting fisheye distortion with the SVM algorithm, the input and output of the SVM trainer are, respectively, the radial distance of an image point in physical space and the radial distance of the same point in the corresponding fisheye image. The trainer's input and output data are classified and fitted with nonlinear functions, training is repeated with multiple groups of samples, and a conversion model is fitted by regression, thereby establishing the mapping between pixel coordinates of the corrected image and the distorted fisheye image.
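One concrete possibility for such a trainer is a support-vector regressor over the (physical radius, fisheye radius) pairs extracted from the spaced-line target. The following is a minimal sketch under that assumption; the RBF kernel, hyperparameters and the helper names `fit_radial_model` and `corrected_to_fisheye` are illustrative, not fixed by the patent:

```python
import numpy as np
from sklearn.svm import SVR

def fit_radial_model(r_physical, r_fisheye):
    """Regress the fisheye radial distance from the physical-space radial distance."""
    model = SVR(kernel="rbf", C=100.0, epsilon=0.1)  # assumed kernel/hyperparameters
    model.fit(np.asarray(r_physical).reshape(-1, 1), np.asarray(r_fisheye))
    return model

def corrected_to_fisheye(model, pts, center):
    """Backward-map pixels of the corrected image onto the distorted fisheye image."""
    pts = np.asarray(pts, dtype=np.float64)
    vec = pts - np.asarray(center, dtype=np.float64)  # offsets from distortion center
    r = np.linalg.norm(vec, axis=1)
    r_fish = model.predict(r.reshape(-1, 1))          # radius in the distorted image
    scale = np.divide(r_fish, r, out=np.ones_like(r), where=r > 0)
    return center + vec * scale[:, None]              # same direction, remapped radius
```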
Step S2: the transformation from perspective view to top view is performed using the relation between the world coordinate system, the camera coordinate system and the image coordinate system in combination with a backward mapping method. Surrounding X according to the camera coordinate system for the case of camera mounting errorsC、YC、ZCThe method comprises the following steps of correcting an overlook transformation image again by the deflection angle of an axis, establishing an image overlook transformation lookup table to realize rapid overlook transformation, and obtaining a bird's-eye view, wherein the method specifically comprises the following steps:
s2.1, establishing a world coordinate system and a camera coordinate system;
the panoramic vehicle mounted look around system aims at enabling the driver to observe a planar aerial view around the vehicle, i.e. requires the view to be at a top angle perpendicular to the ground. Therefore, it is required toAnd performing overlook transformation on the perspective image, eliminating the perspective effect of the image, and converting the perspective image into a bird's-eye view. The perspective transformation is performed to obtain the bird's-eye view, and firstly, a world coordinate system and a camera coordinate system need to be established. Taking the front camera as an example, as shown in fig. 4, the camera is installed at the middle position in front of the vehicle, and the camera coordinate system takes the camera optical center as the origin OC,ZCThe axis being the optical axis of the camera, XCThe axis being perpendicular to the plane of the sides of the vehicle (out of the plane of the paper in FIG. 4), YCAnd plane XCOCZCAnd is vertical. O is OWIs the origin of the world coordinate system, the world coordinate system ZWThe axis passing through the optical center of the camera and perpendicular to the ground surface, YWThe axis pointing in the direction of travel, X, of the vehicle on the ground planeWAxis and camera coordinate system XCThe axial directions are the same.
World coordinate system first edge ZWAxial translation | OCOWLength (i.e. the perpendicular distance between the camera's optical center and the ground), and then surrounding XWAnd rotating the shaft by a certain angle to obtain a camera coordinate system. Suppose a point P in the world coordinate systemWHas the coordinates of (x)W,yW,zW) The point is denoted P in the camera coordinate systemC(xC,yC,zC) The relationship between the two is shown in formula (1).
Wherein α is the angle of rotation of the camera coordinate system about the X-axis relative to the world coordinate system, h ═ OCOWL is the vertical distance between the optical center of the camera and the ground, R is a rotation matrix of 3 × 3, and T is a translation matrix T ═ 00-h]T。
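A minimal numeric sketch of formula (1), assuming angles in radians (the function name is illustrative):

```python
import numpy as np

def world_to_camera(p_w, alpha, h):
    """P_C = R (P_W + T), with T = [0, 0, -h]^T and R a rotation by alpha about X."""
    R = np.array([
        [1.0, 0.0,            0.0          ],
        [0.0, np.cos(alpha),  np.sin(alpha)],
        [0.0, -np.sin(alpha), np.cos(alpha)],
    ])
    T = np.array([0.0, 0.0, -h])
    return R @ (np.asarray(p_w, dtype=float) + T)
```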
S2.2, solving the image overlook transformation by using a backward mapping method;
if the camera is accurately installed, the camera coordinate system X is ensuredCWith axes parallel to the ground, Y in the camera coordinate systemCOCZCY in plane and world coordinate systemWOWZWPlane coincident, camera optic axis (Z)CAxis) is determined from the horizontal. In this case, the image is transformed in a top view by a backward mapping method, that is, for each pixel in the target image after the top view transformation, the corresponding pixel in the original perspective image is calculated.
Assuming that a pixel of a certain point on the target image after the overlook transformation is p and a corresponding pixel point on the original image is p', the main steps for solving the overlook transformation are as follows:
(1) calculating the corresponding point P of the pixel P in the world coordinate systemW: assume its coordinate as (x)W,yW,zW) The camera optical center passes through the image center, and if p is the pixel of the u-th column and the v-th row on the target image, which is denoted as pixel p (u, v), then:
in the formula, wp、hpThe width and the height of the target image are both in pixel unit; dx and dy are the physical size of the target image in the horizontal and vertical directions, and l is the origin O of the world coordinate systemWDistance to the intersection of the camera's optical axis and the ground. If the included angle between the optical axis of the camera and the horizontal plane is theta and the vertical distance between the optical axis of the camera and the ground is h, l is hcot theta.
(2) Calculating a point PWCoordinates P in the camera coordinate systemC(xC,yC,zC) The solution method is shown in formula (1).
(3) Calculating PC(xC,yC,zC) Projection point P on image planei(xi,yi) The calculation formula is as follows:
wherein f is the camera focal length.
(4) Calculating a projection point Pi(xi,yi) The pixel point p ' (u ', v ') of the pair in the original image (original perspective) is calculated as:
in the formula, wp'、hp' is the width and height of the input image, both in pixels; dx 'and dy' are physical sizes of the input image in the horizontal and vertical directions.
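The four steps compose into one backward-mapping function per target pixel. A minimal sketch, with parameter names following the text (theta is the angle between the optical axis and the horizontal plane):

```python
import numpy as np

def topview_source_pixel(u, v,
                         wp, hp, dx, dy,           # target image size / pixel pitch
                         alpha, theta, h, f,       # camera rotation, tilt, height, focal length
                         wp_in, hp_in, dxp, dyp):  # input image size / pixel pitch
    l = h / np.tan(theta)                          # l = h * cot(theta)
    # formula (2): target pixel -> world point on the ground plane
    xw = (u - wp / 2.0) * dx
    yw = -(v - hp / 2.0) * dy + l
    zw = 0.0
    # formula (1): world -> camera coordinates
    xc = xw
    yc = np.cos(alpha) * yw + np.sin(alpha) * (zw - h)
    zc = -np.sin(alpha) * yw + np.cos(alpha) * (zw - h)
    # formula (3): camera -> image-plane projection
    xi = -f * xc / zc
    yi = -f * yc / zc
    # formula (4): image plane -> source pixel in the perspective image
    return xi / dxp + wp_in / 2.0, -yi / dyp + hp_in / 2.0
```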
S2.3, aiming at the situation that the camera is installed with errors, constructing and minimizing an objective function about deflection angles d alpha, d beta and d gamma to obtain the optimal value of each deflection angle, and then correcting the overlook conversion image again to obtain a more accurate overlook conversion image;
if there is an error in the installation of the camera, if the top view is converted according to the step in S2.2, an accurate top view cannot be obtained, and in order to ensure the accuracy of the top view, the top view needs to be corrected based on the method described in S2.2. The correction includes the camera coordinate system surrounding the X assuming errors in the camera mountingC、YC、ZCThe axes are deflected by angles d α, d β, d γ, respectively, and the calculated point P in step S2.2 is simply the point PWCoordinates P in the camera coordinate systemC(xC,yC,zC) Becomes:
PC=R3R2R1(PW+T) (5)
wherein,
in summary, the problem of correcting the installation error of the camera is also converted into a problem of solving the deflection angles d α, d β, and d γ of the camera. The embodiment of the invention solves the deflection angles d alpha, d beta and d gamma of the camera by establishing an objective function related to d alpha, d beta and d gamma and using a method of minimizing a function value by using a linear variation method for three deflection angles.
The embodiment of the invention determines the target function by using the target rectangle: aiming at the condition that the camera is installed with errors, an inaccurate top view is obtained by the method in S2.2, four corner points of a rectangle parallel to the visual field are selected from the inaccurate top view, and the four corner points are respectively marked as p1(u1,v1)、p2(u2,v2)、p3(u3,v3)、p4(u4,v4) The 4 points are calculated according to the method in S2.2 to correspond to point p 'in the original perspective view'1(u′1,v′1)、p'2(u'2,v'2)、p'3(u'3,v'3)、p'4(u'4,v'4) Then, the influence of the deflection angles d α, d β and d γ is added, and p 'is calculated by the inverse solution method of the method of step S2.2'1(u′1,v′1)、p'2(u'2,v'2)、p'3(u'3,v'3)、p'4(u'4,v'4) At the position of the corresponding pixel in the new conversion target image, the inverse solution method comprises the following specific steps:
(1) p 'is obtained from the inverse relation formula of formula (4)'1(u′1,v′1)、p'2(u'2,v'2)、p'3(u'3,v'3)、p'4(u'4,v'4) The respective corresponding coordinates p in the image coordinate system1(x1,y1)、p2(x2,y2)、p3(x3,y3)、p4(x4,y4) The calculation formula is as follows:
w 'in the formula'p、h'pThe width and the height of the input image are respectively based on the pixel unit; dx 'and dy' are physical sizes in the horizontal and vertical directions of the input image, respectively.
(2) The coordinates of the imaging point in the camera coordinate system can be recorded asWhere f is the focal length of the camera.
(3) By using the inverse relation of equation (5), the coordinates of the imaging point in the world coordinate system are:
the angles of deflection d α, d β, d γ are added at this time.
(4) The intersection point of the connecting line of the imaging point and the optical center of the camera and the ground is the coordinate of the ground scene of the top view, and the ground point coordinate P can be obtainedg(xg,ygAnd 0) is:
(5) calculating P from the inverse of equation (2)gIn the new purposePosition of corresponding point P "(u", v ") in target image (corrected overhead image):
respectively point p'1(u′1,v′1)、p'2(u'2,v'2)、p'3(u'3,v'3)、p'4(u'4,v'4) The above-mentioned steps (1) to (5) are carried out to obtain P1”(u1”,v1”)、P2”(u2”,v2”)、P3”(u3”,v3") and P4″(u4″,v4") and then the following objective functions are constructed with respect to the deflection angles d α, d β, d γ:
F(dα,dβ,dγ)=|u″1-u″3|+|u″2-u″4|+|v″1-v″2|+|v″3-v″4|
(13)
the search for d α, d β, and d γ was performed using a linear variation method to minimize F. In general, since the error angle is not too large, it is sufficient to change d α, d β, and d γ within a small range centered on 0. After the optimal values of the deflection angles d alpha, d beta and d gamma are solved, the overlooking transformation method of the step S2.2 is used for overlooking transformation, and the solving formula (1) of the second step in the step S2.2 is replaced by a formula (10) in the conversion process.
And s2.4, build the image top-view transformation lookup table to achieve fast top-view transformation and obtain the bird's-eye view.
According to the top-view transformation algorithm flow and the corresponding camera parameters, a top-view transformation lookup table is built. It stores, for every pixel of the transformed top view, the coordinates of the corresponding pixel in the original perspective view, so the system converts a perspective view to a top view simply by querying the table; the top-view conversion of a perspective image is thus performed quickly by table lookup.
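A minimal sketch of such a table, assuming OpenCV is available and reusing the `topview_source_pixel` function sketched above: the table is built once at calibration time, and each frame is then warped by a single table-driven resampling pass.

```python
import cv2
import numpy as np

def build_topview_lut(width, height, source_pixel_fn):
    """Precompute the source pixel for every (u, v) of the target top view."""
    map_x = np.empty((height, width), dtype=np.float32)
    map_y = np.empty((height, width), dtype=np.float32)
    for v in range(height):
        for u in range(width):
            map_x[v, u], map_y[v, u] = source_pixel_fn(u, v)
    return map_x, map_y

def apply_topview_lut(frame, map_x, map_y):
    """Per-frame cost is one remap call driven by the precomputed table."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```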
step S3: determine the stitching seams, perform the stitching between the vehicle's forward and lateral images and the panoramic stitching, then generate a panoramic stitching mapping table; the panoramic bird's-eye view is obtained quickly by looking up the panoramic stitching mapping table. Specifically:
s3.1, stitch the forward and lateral images.
Four cameras are installed on the vehicle body, yielding the four images of the front, rear, left and right. Stitching the panorama essentially means stitching forward images with lateral images; the steps are described taking the front image and the left image as an example:
(1) as shown in fig. 5, the vehicle body has length C and width K, the lateral image is H_1 meters wide and the forward image H meters wide; two points P_1 and P_2 are calibrated in the common area covered by the forward image and the left image, with coordinates in the panoramic ground coordinate system denoted P_1(X_1, Y_1) and P_2(X_2, Y_2);
(2) determine the pixel coordinates R′_1 and R′_2 corresponding to points P_1 and P_2 in the top view of the left image;
(3) determine the pixel coordinates R″_1 and R″_2 corresponding to points P_1 and P_2 in the top view of the forward image;
(4) since P_1 and P_2 are collinear in the image coordinate systems of both the left and the front image, the straight line R′_1R′_2 determined by R′_1 and R′_2 and the straight line R″_1R″_2 determined by R″_1 and R″_2 serve as the stitching seams of the left and the front image, respectively (see the sketch following these steps);
(5) save the position of the stitching seam, cut the two images to be stitched according to it, join the top view of the front image with the top view of the left image along the seam, and trim away the redundant parts of both images beyond the seam.
The rear-left, front-right and rear-right image pairs are stitched by the same steps as the front-left pair; the specific image stitching flow is shown in fig. 6.
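A minimal sketch of this seam construction, assuming the calibrated pixel coordinates of P_1 and P_2 in each top view are already known (function names are illustrative):

```python
import numpy as np

def seam_line(p1, p2):
    """Return (a, b, c) with a*x + b*y + c = 0 passing through two pixel points."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    return a, b, c

def side_mask(shape, line, keep_positive=True):
    """Boolean mask of the pixels kept on one side of the seam line."""
    h, w = shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    a, b, c = line
    s = a * xs + b * ys + c
    return s >= 0 if keep_positive else s < 0
```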
S3.2, splicing the panoramic images;
and (5) splicing the overlooking images of the four images around the vehicle body pairwise according to the splicing method of the forward images and the lateral images in the step (S3.1), so that the panoramic aerial view can be obtained.
The steps of establishing the panoramic stitching mapping table are as follows:
(1) setting the view field range of the panoramic aerial view:
and setting the Width of the output panoramic aerial view as Width and the Height as Height. Since the visual field ranges in the X and Y directions are proportional, setting the visual field in the Y direction to viewrange,the visible range in the X direction is ViewRangeX scale Height.
(2) Generating a panoramic stitching mapping table:
and according to the set visual field range of the panoramic aerial view, splicing the top views of the four images around the vehicle body in pairs according to the method in the S3.1 to obtain the panoramic image. But this necessarily reduces the system time performance since image stitching involves a large number of calculations such as coordinate transformations. It is considered that the installation position and angle of the cameras and the mutual positions of the cameras are fixed, and the factors can not change along with the change of the content collected by the cameras. The image splicing process is based on a space transformation process of pixel points in an original perspective image acquired by a camera, so that parameters including the position of a splicing seam, the Width and Height of a panoramic aerial view, the visible range ViewRange in the Y direction, the visible range ViewRange in the X direction and the like can be stored in a form of a table, and a panoramic image splicing mapping table is established. And (3.2) according to the parameters set in the step (1) in the panoramic image splicing, the size of the panoramic image is Height-Width, namely the panoramic image splicing mapping table has Height rows and Width columns in total, and data in each cell is defined as (n, i, j) which represents the coordinates of pixel points in the image to be spliced corresponding to the pixel points of the panoramic image. Where n is 1,2,3,4, which sequentially represents four images, i and j are coordinates of pixels in the images.
(3) And according to the established panoramic stitching mapping table, the panoramic image stitching is completed by a table look-up method, so that the system running time is effectively reduced.
The panoramic image stitching process is shown in fig. 7.
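A minimal sketch of the lookup step, assuming the (n, i, j) table is stored as three integer arrays of shape Height × Width (array and function names are illustrative):

```python
import numpy as np

def apply_stitch_map(views, table_n, table_i, table_j):
    """views: the four top-view images; table_*: Height x Width integer arrays."""
    out = np.zeros(table_n.shape + (3,), dtype=np.uint8)
    for n, img in enumerate(views, start=1):   # n = 1..4 as defined in the text
        sel = table_n == n                      # panorama pixels fed by view n
        out[sel] = img[table_i[sel], table_j[sel]]
    return out
```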
step S4: resolve the brightness difference between each pair of stitched images using an improved brightness harmonization algorithm, then further eliminate the stitching seams using weighted-average image fusion. Specifically:
(1) eliminate the brightness difference between stitched images.
Because the four cameras around the vehicle body are installed at different angles, the captured images perceive the light source differently, so brightness differences exist among the front, rear, left and right images. The embodiment of the invention adopts an improved brightness harmonization algorithm to eliminate the brightness difference between stitched images, as follows (a code sketch follows these steps):
1) take one third of the common part of the two images as the overlap region;
2) compute the sums S_1 and S_2 of the pixel values over the overlap region in each image;
3) set Differ = S_1/S_2 and weight each pixel of one image by multiplying its value by Differ, obtaining a new pixel value R. If R > T, the original pixel value is kept unchanged; if R < T, the original pixel value is reassigned to T, where T is the empirical value 200.
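A minimal sketch of step 3), following the text literally (keep the original value where R > T, clamp the pixel to T where R < T); array names are illustrative:

```python
import numpy as np

def harmonize_brightness(img, overlap_ref, overlap_img, T=200):
    """Harmonize `img` against a reference exposure measured over the overlap."""
    s1 = float(np.sum(overlap_ref))       # S1: overlap pixel sum, reference image
    s2 = float(np.sum(overlap_img))       # S2: overlap pixel sum, image to adjust
    differ = s1 / s2                      # Differ = S1 / S2
    r = img.astype(np.float64) * differ   # weighted pixel values R
    out = img.copy()
    out[r < T] = T                        # R < T: re-assign the pixel to T
    return out                            # R > T: original value kept, per the text
```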
(2) Eliminate the stitching seams produced during stitching with a weighted-average image fusion method, so that the image transitions smoothly. In this way the brightness differences and stitching seams between stitched images are effectively eliminated, seamless stitching of the panorama is realized, and the surround-view image achieves a better visual effect.
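A minimal sketch of weighted-average fusion across a vertical seam, assuming the two top views are already aligned and the blend-band width (at least 2 pixels) is a tuning parameter not fixed by the patent:

```python
import numpy as np

def blend_over_seam(img_a, img_b, band):
    """Blend two aligned images over a vertical overlap band centered on the seam."""
    h, w = img_a.shape[:2]
    out = img_a.astype(np.float64).copy()
    x0 = (w - band) // 2                  # band straddles the seam column
    for k in range(band):
        wgt = k / (band - 1)              # weight ramps 0 -> 1 across the band
        out[:, x0 + k] = (1 - wgt) * img_a[:, x0 + k] + wgt * img_b[:, x0 + k]
    out[:, x0 + band:] = img_b[:, x0 + band:]
    return np.clip(out, 0, 255).astype(np.uint8)
```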
The method provided by the invention can be embedded in an FPGA (field-programmable gate array) and applied in a vehicle-mounted panoramic surround-view system. The above embodiment only explains the technical solution of the invention; the protection scope of the invention is not limited to the implementation system and the specific implementation steps described in this embodiment. Technical solutions that merely substitute the specific formulas and algorithms of the embodiment while remaining substantially consistent with the method of the invention all fall within the protection scope of the invention.
Claims (8)
1. An image stitching method in a panoramic surround-view system, characterized by comprising the following steps:
step 1) correct fisheye image distortion by combining a spaced-line training model with an SVM algorithm, comprising:
s1.1, construct the spaced-line training model;
s1.2, correct the fisheye image using the SVM algorithm;
step 2) transform the perspective view into a top view using the relationships among the world coordinate system, the camera coordinate system and the image coordinate system combined with a backward mapping method; for the case of camera installation errors, re-correct the top-view image according to the deflection angles of the camera coordinate system about the X_C, Y_C, Z_C axes; then build an image top-view transformation lookup table to achieve fast top-view transformation and obtain the bird's-eye view, comprising:
s2.1, establish the world coordinate system and the camera coordinate system;
s2.2, solve the top-view transformation with the backward mapping method;
s2.3, for the case of camera installation errors, construct and minimize an objective function of the deflection angles dα, dβ, dγ about the X_C, Y_C, Z_C axes to obtain the optimal value of each deflection angle, then re-correct the top-view image to obtain a more accurate top view;
s2.4, build the image top-view transformation lookup table to achieve fast top-view transformation and obtain the bird's-eye view;
step 3) determine the stitching seams, stitch the vehicle's forward and lateral images and the panoramic image, then generate a panoramic stitching mapping table; the panoramic surround bird's-eye view is obtained quickly by looking up the panoramic stitching mapping table;
s3.1, stitch the forward and lateral images to obtain the panoramic bird's-eye view;
s3.2, build the panoramic stitching mapping table to complete the stitching of the panoramic image;
and step 4) resolve the brightness difference between each pair of stitched images using an improved brightness harmonization algorithm, then further eliminate the stitching seams using weighted-average image fusion.
2. The image stitching method in the panoramic surround-view system according to claim 1, characterized in that in step 1),
the spaced-line training model comprises a plurality of horizontal straight lines; the width of each line increases by a factor of 1.3 from bottom to top, the spacing between adjacent lines likewise increases by a factor of 1.3 from bottom to top, and a plurality of cross lines are arranged on one of the horizontal lines;
correcting the fisheye image with the SVM algorithm comprises: taking as the input and output of the SVM trainer, respectively, the radial distance of an image point in physical space and the radial distance of the same point in the corresponding fisheye image; classifying and fitting nonlinear functions on the trainer's input and output data; training repeatedly with multiple groups of samples; and fitting a conversion model by regression, thereby establishing the mapping between pixel coordinates of the corrected image and the distorted fisheye image.
3. The image stitching method in the panoramic surround-view system as claimed in claim 1, characterized in that in step S2.1, suppose a point P_W in the world coordinate system has coordinates (x_W, y_W, z_W) and is denoted P_C(x_C, y_C, z_C) in the camera coordinate system; the relationship between the two is given by formula (1):

$$P_C=\begin{bmatrix}x_C\\y_C\\z_C\end{bmatrix}=\begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&-\sin\alpha&\cos\alpha\end{bmatrix}\begin{bmatrix}x_W\\y_W\\z_W-h\end{bmatrix}=R(P_W+T)\tag{1}$$

where α is the angle by which the camera coordinate system is rotated about the X axis relative to the world coordinate system, h = |O_C O_W| is the vertical distance between the camera optical center and the ground, R is the 3×3 rotation matrix, and T = [0 0 −h]^T is the translation vector;
in step S2.2, let p be a pixel on the target image after top-view transformation and p′ the corresponding pixel on the original image; the top-view transformation is solved in the following main steps:
(1) compute the point P_W in the world coordinate system corresponding to pixel p: let its coordinates be (x_W, y_W, z_W), with the optical axis passing through the image center; if p is the pixel in column u and row v of the target image, written p(u, v), then:

$$\begin{cases}x_W=\left(u-\dfrac{w_p}{2}\right)dx\\[4pt] y_W=-\left(v-\dfrac{h_p}{2}\right)dy+l\\[4pt] z_W=0\end{cases}\tag{2}$$

where w_p and h_p are the width and height of the target image in pixels; dx and dy are the physical pixel sizes of the target image in the horizontal and vertical directions; and l is the distance from the world origin O_W to the intersection of the camera optical axis with the ground; if the angle between the optical axis and the horizontal plane is θ and the vertical distance from the optical center to the ground is h, then l = h·cot θ;
(2) compute the coordinates P_C(x_C, y_C, z_C) of point P_W in the camera coordinate system using formula (1);
(3) compute the projection P_i(x_i, y_i) of P_C(x_C, y_C, z_C) onto the image plane:

$$\begin{cases}x_i=-f\,\dfrac{x_C}{z_C}\\[4pt] y_i=-f\,\dfrac{y_C}{z_C}\end{cases}\tag{3}$$

where f is the camera focal length;
(4) compute the pixel p′(u′, v′) in the original image (the original perspective view) corresponding to the projection point P_i(x_i, y_i):

$$\begin{cases}u'=\dfrac{x_i}{dx'}+\dfrac{w_p'}{2}\\[4pt] v'=-\dfrac{y_i}{dy'}+\dfrac{h_p'}{2}\end{cases}\tag{4}$$

where w_p′ and h_p′ are the width and height of the input image in pixels, and dx′ and dy′ are the physical pixel sizes of the input image in the horizontal and vertical directions.
4. The image stitching method in a panoramic surround-view system according to claim 3, characterized in that in step S2.3 the correction comprises: suppose the camera is installed with errors such that the camera coordinate system is deflected about the X_C, Y_C, Z_C axes by angles dα, dβ, dγ respectively; the computation in step S2.2 of the coordinates P_C(x_C, y_C, z_C) of point P_W in the camera coordinate system then becomes:

$$P_C=R_3R_2R_1(P_W+T)\tag{5}$$

where

$$R_1=\begin{bmatrix}1&0&0\\0&\cos(\alpha+d\alpha)&\sin(\alpha+d\alpha)\\0&-\sin(\alpha+d\alpha)&\cos(\alpha+d\alpha)\end{bmatrix}\tag{6}$$

$$R_2=\begin{bmatrix}\cos d\beta&0&-\sin d\beta\\0&1&0\\\sin d\beta&0&\cos d\beta\end{bmatrix}\tag{7}$$

$$R_3=\begin{bmatrix}\cos d\gamma&\sin d\gamma&0\\-\sin d\gamma&\cos d\gamma&0\\0&0&1\end{bmatrix}\tag{8}$$

an objective function of dα, dβ, dγ is established, the optimal deflection angles dα, dβ, dγ of the camera are solved by varying the three angles linearly and minimizing the function value, and the top-view transformation method of step S2.2 is then applied, with the solving formula (1) of its second step replaced by formula (5) during the conversion.
5. The image stitching method in a panoramic surround-view system according to any one of claims 1 to 4, characterized in that in step S2.4 the top-view transformation lookup table stores, for every pixel of the transformed top view, the coordinates of the corresponding pixel in the original perspective view; the system realizes the conversion from the perspective view to the top view, quickly, by querying this lookup table.
6. The method for stitching images in a panoramic looking-around system according to claim 1, wherein the stitching between the forward image and the side image comprises the following specific steps:
(1) the forward image comprises a front image and a rear image, and the lateral image comprises a front image and a rear imageComprises a lateral image and a right image, the length of the vehicle body is C, the width of the vehicle body is K, and the width of the lateral image of the vehicle body is H1The width of the forward image is H meters, and two points P are calibrated in a public area covered by the forward image and the side image1And P2And the coordinates of the two points under the panoramic ground coordinate system are respectively marked as P1(X1,Y1) And P2(X2,Y2);
(2) determining the pixel coordinates R1′ and R2′ corresponding to points P1 and P2 in the top view of the lateral image;
(3) determining the pixel coordinates R1″ and R2″ corresponding to points P1 and P2 in the top view of the forward image;
(4) since points P1 and P2 are collinear in the image coordinate systems of the lateral and forward images, taking the straight line R1′R2′ determined by R1′ and R2′ as the splicing seam of the lateral image, and the straight line R1″R2″ determined by R1″ and R2″ as the splicing seam of the forward image;
(5) saving the seam position, cropping the two images to be spliced along the seam, splicing the top view of the forward image and the top view of the lateral image together along the seam, and discarding the redundant parts of the forward and lateral images beyond the seam.
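A minimal sketch of cutting and composing along such a seam, assuming both top views have already been placed in a common panoramic coordinate frame of the same size; the helper names and the side kept by each mask are illustrative, not taken from the patent:

```python
import numpy as np

def seam_mask(shape, r1, r2, keep_side=1):
    """Boolean mask of the pixels lying on one side of the straight
    line through seam points r1 = (x1, y1) and r2 = (x2, y2)."""
    h, w = shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sign of the 2-D cross product (r2 - r1) x (p - r1) picks a half-plane.
    cross = ((r2[0] - r1[0]) * (ys - r1[1])
             - (r2[1] - r1[1]) * (xs - r1[0]))
    return (np.sign(cross) == keep_side) | (cross == 0)

def splice_along_seam(top_forward, top_lateral, seam_fwd, seam_lat):
    """Keep each top view on its own side of its seam line and compose."""
    out = np.zeros_like(top_forward)
    m_fwd = seam_mask(top_forward.shape, *seam_fwd, keep_side=1)
    m_lat = seam_mask(top_lateral.shape, *seam_lat, keep_side=-1)
    out[m_fwd] = top_forward[m_fwd]
    out[m_lat] = top_lateral[m_lat]
    return out
```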
7. The method of claim 6, wherein the image stitching comprises:
(1) setting the visual field range of the panoramic looking-around aerial view: the width of the output panoramic looking-around aerial view is set as Width and its height as Height; since the visible ranges in the X and Y directions are proportional, the visible range in the Y direction is set to ViewRange and the visible range in the X direction is ViewRangeX = scale·Height;
(2) generating a panoramic image splicing mapping table: according to the set visual field range of the panoramic looking-around aerial view, storing the parameters including the splicing seam position, the Width and Height of the panoramic looking-around aerial view, the visible range ViewRange in the Y direction and the visible range ViewRangeX in the X direction in tabular form, thereby establishing the panoramic image splicing mapping table;
(3) completing the panoramic image splicing by table look-up according to the established panoramic image splicing mapping table.
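The claim names the parameters stored in the mapping table but not its layout. The sketch below assumes, in addition, a per-pixel source map so that step (3) can assemble a frame purely by table look-up; the field names and the per-pixel maps are illustrative:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StitchMap:
    """Illustrative record for the panoramic image splicing mapping table."""
    width: int            # Width of the output bird's-eye view (pixels)
    height: int           # Height of the output bird's-eye view (pixels)
    view_range_y: float   # visible range in the Y direction
    view_range_x: float   # visible range in the X direction
    src_cam: np.ndarray   # per output pixel: which camera to sample
    src_x: np.ndarray     # per output pixel: source column in that camera
    src_y: np.ndarray     # per output pixel: source row in that camera

def apply_stitch_map(m: StitchMap, frames):
    """Assemble one panoramic frame purely by table look-up."""
    out = np.zeros((m.height, m.width, 3), dtype=np.uint8)
    for cam_id, frame in enumerate(frames):
        sel = m.src_cam == cam_id
        out[sel] = frame[m.src_y[sel], m.src_x[sel]]
    return out
```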
8. The image stitching method in the panoramic looking-around system according to claim 1, wherein the step 4) specifically comprises the following steps:
S4.1, eliminating the brightness difference between the spliced images by an improved brightness harmonization algorithm, which comprises the following steps:
(1) taking 1/3 of the common part of the two images as the overlapping area;
(2) calculating the sums S1 and S2 of the pixel values in the overlapping area of each image, respectively;
(3) setting Differ = S1/S2, and multiplying the value of each pixel in one image by Differ as a weight to obtain a new pixel value R; if R is greater than T, the original pixel value is kept unchanged; if R is less than T, the pixel value is reassigned to T, where T is an empirical value of 200;
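A literal sketch of steps (1)-(3), assuming the two images adjoin horizontally so the shared third lies at image A's right edge and image B's left edge. Note that, read literally, R serves only as a test value in step (3); an alternative reading would store R itself where it stays in range:

```python
import numpy as np

def harmonize_brightness(img_a, img_b, T=200):
    """Steps (1)-(3) of S4.1: one third of the common part is the overlap,
    and Differ = S1/S2 weights the pixels of image B."""
    w = img_a.shape[1] // 3                     # overlap strip width (assumed)
    S1 = float(img_a[:, -w:].sum())             # pixel-value sum of A's overlap
    S2 = float(img_b[:, :w].sum())              # pixel-value sum of B's overlap
    differ = S1 / S2
    R = img_b.astype(np.float64) * differ       # weighted new pixel values
    # Literal reading of step (3): keep the original value where R > T,
    # otherwise reassign it to the empirical threshold T = 200.
    return np.where(R > T, img_b, T).astype(np.uint8)
```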
S4.2, eliminating the splicing seam produced in the splicing process by a weighted-average image fusion method, so that the image transitions smoothly.
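A minimal sketch of the weighted-average fusion of S4.2, assuming a horizontal overlap of known width and three-channel images; the linear weight ramp is one common choice, as the patent does not specify the weighting function:

```python
import numpy as np

def weighted_average_fusion(img_a, img_b, overlap_w):
    """S4.2: blend the horizontal overlap with a linear weight ramp so
    the stitched image transitions smoothly across the seam."""
    alpha = np.linspace(1.0, 0.0, overlap_w)[None, :, None]  # 1 -> 0 across overlap
    blend = (img_a[:, -overlap_w:] * alpha
             + img_b[:, :overlap_w] * (1.0 - alpha))
    return np.hstack([img_a[:, :-overlap_w],
                      blend.astype(img_a.dtype),
                      img_b[:, overlap_w:]])
```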
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710237136.8A CN107424120A (en) | 2017-04-12 | 2017-04-12 | A kind of image split-joint method in panoramic looking-around system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107424120A (en) | 2017-12-01 |
Family
ID=60423221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710237136.8A Pending CN107424120A (en) | 2017-04-12 | 2017-04-12 | A kind of image split-joint method in panoramic looking-around system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107424120A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102045546A (en) * | 2010-12-15 | 2011-05-04 | 广州致远电子有限公司 | Panoramic parking assist system |
US20140184737A1 (en) * | 2012-12-27 | 2014-07-03 | Hon Hai Precision Industry Co., Ltd. | Driving assistant system and method |
Non-Patent Citations (5)
Title |
---|
FENG Weijia, "Research on Omnidirectional Vision and Panoramic Stereo Spherical Vision Based on a Fisheye Lens", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
WU Yongqi, "Design and Research of a Driving-Safety Assistance System Based on Panoramic Vision", China Masters' Theses Full-text Database, Engineering Science and Technology II * |
JIANG Lifeng, "Research on Key Technologies of Panorama Stitching", China Masters' Theses Full-text Database, Information Science and Technology * |
WANG Xudong, "Research on a Surround-View-Based Automatic Parking Method and System Design", China Masters' Theses Full-text Database, Engineering Science and Technology II * |
ZHAO Kai, "Research on a Panoramic Visualization Assisted Parking System", China Masters' Theses Full-text Database, Information Science and Technology * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198133A (en) * | 2017-12-06 | 2018-06-22 | 云南联合视觉科技有限公司 | A kind of quick joining method of vehicle panoramic image |
CN108198133B (en) * | 2017-12-06 | 2021-09-17 | 云南联合视觉科技有限公司 | Rapid splicing method for vehicle panoramic images |
CN107958440A (en) * | 2017-12-08 | 2018-04-24 | 合肥工业大学 | Double fish eye images real time panoramic image split-joint methods and system are realized on GPU |
CN108364333A (en) * | 2018-02-11 | 2018-08-03 | 成都康烨科技有限公司 | Method and device based on multi-direction photography fitting vertical view |
CN108492254A (en) * | 2018-03-27 | 2018-09-04 | 西安优艾智合机器人科技有限公司 | Image capturing system and method |
CN110341597A (en) * | 2018-04-02 | 2019-10-18 | 杭州海康威视数字技术股份有限公司 | A kind of vehicle-mounted panoramic video display system, method and Vehicle Controller |
CN110341597B (en) * | 2018-04-02 | 2020-11-27 | 杭州海康威视数字技术股份有限公司 | Vehicle-mounted panoramic video display system and method and vehicle-mounted controller |
CN108638999A (en) * | 2018-05-16 | 2018-10-12 | 浙江零跑科技有限公司 | A kind of collision early warning system and method for looking around input based on 360 degree |
CN109064397A (en) * | 2018-07-04 | 2018-12-21 | 广州希脉创新科技有限公司 | A kind of image split-joint method and system based on camera shooting earphone |
CN111071152B (en) * | 2018-10-19 | 2023-10-03 | 图森有限公司 | Fish-eye image processing system and method |
CN111071152A (en) * | 2018-10-19 | 2020-04-28 | 图森有限公司 | Fisheye image processing system and method |
US11935210B2 (en) | 2018-10-19 | 2024-03-19 | Tusimple, Inc. | System and method for fisheye image processing |
CN110428361A (en) * | 2019-07-25 | 2019-11-08 | 北京麒麟智能科技有限公司 | A kind of multiplex image acquisition method based on artificial intelligence |
CN110689512A (en) * | 2019-09-24 | 2020-01-14 | 中国科学院武汉岩土力学研究所 | Method for quickly splicing and fusing annular images of panoramic video in hole into image |
CN110689512B (en) * | 2019-09-24 | 2022-03-08 | 中国科学院武汉岩土力学研究所 | Method for quickly splicing and fusing annular images of panoramic video in hole into image |
WO2021121251A1 (en) * | 2019-12-16 | 2021-06-24 | 长沙智能驾驶研究院有限公司 | Method and device for generating vehicle panoramic surround view image |
US11843865B2 (en) | 2019-12-16 | 2023-12-12 | Changsha Intelligent Driving Institute Corp., Ltd | Method and device for generating vehicle panoramic surround view image |
CN113012030A (en) * | 2019-12-20 | 2021-06-22 | 北京金山云网络技术有限公司 | Image splicing method, device and equipment |
CN111242842B (en) * | 2020-01-15 | 2023-11-10 | 江苏中天安驰科技有限公司 | Image conversion method, terminal and storage medium |
CN111242842A (en) * | 2020-01-15 | 2020-06-05 | 深圳市中天安驰有限责任公司 | Image conversion method, terminal and storage medium |
CN111369439B (en) * | 2020-02-29 | 2023-05-23 | 华南理工大学 | Panoramic all-around image real-time splicing method for automatic parking space identification based on all-around |
CN111369439A (en) * | 2020-02-29 | 2020-07-03 | 华南理工大学 | Panoramic view image real-time splicing method for automatic parking stall identification based on panoramic view |
CN111669547A (en) * | 2020-05-29 | 2020-09-15 | 成都易瞳科技有限公司 | Panoramic video structuring method |
CN111669547B (en) * | 2020-05-29 | 2022-03-11 | 成都易瞳科技有限公司 | Panoramic video structuring method |
WO2022017528A1 (en) * | 2020-07-24 | 2022-01-27 | 展讯通信(天津)有限公司 | Display method and system for vehicle-mounted avm, and electronic device and storage medium |
CN112078538A (en) * | 2020-09-10 | 2020-12-15 | 浙江亚太机电股份有限公司 | Automatic opening system of car tail-gate based on-vehicle system of looking around |
CN112435161A (en) * | 2020-11-12 | 2021-03-02 | 蘑菇车联信息科技有限公司 | Panoramic all-around image splicing method and system, electronic equipment and storage medium |
CN112734639B (en) * | 2020-12-28 | 2023-09-12 | 南京欣威视通信息科技股份有限公司 | Image display stitching method and system |
CN112734639A (en) * | 2020-12-28 | 2021-04-30 | 南京欣威视通信息科技股份有限公司 | Image display splicing method and system |
CN113223092A (en) * | 2021-05-12 | 2021-08-06 | 天津大学 | Panoramic video generation method |
WO2023272457A1 (en) * | 2021-06-28 | 2023-01-05 | 华为技术有限公司 | Apparatus and system for image splicing, and related method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107424120A (en) | A kind of image split-joint method in panoramic looking-around system | |
CN109741455B (en) | Vehicle-mounted stereoscopic panoramic display method, computer readable storage medium and system | |
CN108263283B (en) | Method for calibrating and splicing panoramic all-round looking system of multi-marshalling variable-angle vehicle | |
US9858639B2 (en) | Imaging surface modeling for camera modeling and virtual view synthesis | |
CN109903227B (en) | Panoramic image splicing method based on camera geometric position relation | |
JP5739584B2 (en) | 3D image synthesizing apparatus and method for visualizing vehicle periphery | |
US8446471B2 (en) | Method and system for generating surrounding seamless bird-view image with distance interface | |
CN103617606B (en) | For assisting the vehicle multi-angle panorama generation method of driving | |
WO2022088103A1 (en) | Image calibration method and apparatus | |
US20110156887A1 (en) | Method and system for forming surrounding seamless bird-view image | |
CN111062873A (en) | Parallax image splicing and visualization method based on multiple pairs of binocular cameras | |
EP2061234A1 (en) | Imaging apparatus | |
CN110288527B (en) | Panoramic aerial view generation method of vehicle-mounted panoramic camera | |
CN112224132A (en) | Vehicle panoramic all-around obstacle early warning method | |
CN113362228A (en) | Method and system for splicing panoramic images based on improved distortion correction and mark splicing | |
CN103501409A (en) | Ultrahigh resolution panorama speed dome AIO (All-In-One) system | |
CN111028155A (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
CN111652937B (en) | Vehicle-mounted camera calibration method and device | |
CN115239820A (en) | Split type flying vehicle aerial view real-time splicing and parking space detection method | |
CN113610927B (en) | AVM camera parameter calibration method and device and electronic equipment | |
CN111768332A (en) | Splicing method of vehicle-mounted all-around real-time 3D panoramic image and image acquisition device | |
JP2007049276A (en) | On-vehicle panorama camera system | |
CN107492125A (en) | The processing method of automobile fish eye lens panoramic view picture | |
CN115936995A (en) | Panoramic splicing method for four-way fisheye cameras of vehicle | |
CN113658262A (en) | Camera external parameter calibration method, device, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171201 |