CN116883242A - Image stitching method based on angle matching - Google Patents

Image stitching method based on angle matching

Info

Publication number
CN116883242A
Authority
CN
China
Prior art keywords
image
camera
plane
range
spliced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310869869.9A
Other languages
Chinese (zh)
Inventor
刘明
杨鹏
周玉婷
董立泉
孔令琴
褚旭红
赵跃进
惠梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN202310869869.9A
Publication of CN116883242A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses an image stitching method based on angle matching, which belongs to the technical field of machine vision and mainly addresses the problems of locating each image on the plane to be stitched and filling it pixel by pixel. The method takes the turntable angle recorded when an image is acquired, and the difference between that angle and the turntable angle at which the camera points at the plane to be stitched, as the external parameters; using the internal and external parameters, it reversely solves the image frame pixel corresponding to each point on the stitching plane and fills that pixel value into the stitching plane. Because the method does not rely on image features for matching, it has great application value in extremely complex scenes and in scenes with wide viewing angles; in addition, the image remains sharp after local magnification, and problems such as the barrel distortion caused by wide-angle lenses do not arise.

Description

Image stitching method based on angle matching
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image stitching method based on angle matching.
Background
Image stitching is widely used in computer vision and image processing, for example in panoramic photography, remote sensing image processing and medical image stitching. Its importance lies in its ability to expand the field of view, enhance information, fuse data, reconstruct scenes and provide high-quality visual display; these applications and values make image stitching one of the essential core technologies in computer vision and image processing.
At present, feature-point-based image stitching is the most common approach: feature points are detected in each image, and the matching relations between them are used for image alignment and stitching. However, when images lack obvious feature-point texture or contain large repeated areas, feature-point detection and matching may be limited, so that artifacts or breaks appear in the stitching result; during image alignment, feature-matching errors are propagated and amplified, which may distort or misalign the whole image; and the method is demanding on computing resources, possibly requiring long computation time and large storage space when processing large-scale images or a large number of feature points.
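For context, the conventional feature-point pipeline described above typically looks like the following minimal sketch (using OpenCV ORB features purely as an illustrative stand-in; the file names are hypothetical and nothing here is prescribed by the invention):

    import cv2
    import numpy as np

    img1 = cv2.imread("left.jpg")    # hypothetical pair of images with overlapping fields of view
    img2 = cv2.imread("right.jpg")

    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography estimated from the matches; any matching error propagates into the warp.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(img1, H, (img1.shape[1] + img2.shape[1], img2.shape[0]))
    warped[0:img2.shape[0], 0:img2.shape[1]] = img2   # naive overlay of the second image

The dependence of every stage on detected features and on overlapping fields of view is exactly what the invention avoids.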
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image stitching method based on angle matching, which removes the influence of image texture features on stitching and at the same time avoids the propagation of errors when stitching large scenes.
An image stitching method based on angle matching comprises the following steps:
step 1, calibrating a camera by using a checkerboard and Zhang Zhengyou calibration method, and calculating an internal reference matrix and a distortion coefficient of the camera;
step 2, designating the angle range to be stitched, and dividing the scene into a plurality of small rectangular areas that are scanned with a set step size;
step 3, calculating a scale factor from a camera normalized plane to a plane to be spliced by using cylindrical projection;
step 4, calculating the size of the images to be spliced according to the specified splicing angle range, and calculating the position of the images to be spliced in an initial camera coordinate system;
step 5, using the rotation angle of the pan-tilt head, calculating the coordinates on the image to be stitched that correspond to each of the four corner points of the current image frame, so as to determine the range of the panorama to be stitched covered by the current image frame;
and step 6, which includes, but is not limited to, using the stitching range to reversely solve the image frame pixel points corresponding to the pixel points within the panorama range to be stitched, and filling their pixel values into the stitching plane.
The image stitching method based on angle matching provided by the invention has the following beneficial effects:
(1) Compared with traditional panoramic pictures and panoramic videos, the core algorithm of the invention does not rely on image features when stitching images, so it has great application value in extremely complex scenes, in scenes whose environmental features are highly similar (such as ice, water, grassland and sky), and in wide-viewing-angle scenes.
(2) The sharpness of the stitched image is several times that of traditional methods and is retained after local magnification; barrel distortion and similar problems caused by wide-angle lenses do not arise, and image quality is guaranteed in all directions.
(3) By using the turntable angle recorded for the current picture and the turntable angle difference when the camera points at the plane to be stitched, no field-of-view overlap between the original images is required, and error propagation during large-scene stitching is avoided.
(4) Compared with an array camera, the invention has lower cost, lower operating difficulty, smaller volume and weight, a flexible shooting range, single-person operation and a low equipment failure rate.
Drawings
FIG. 1 is a system workflow diagram of an image stitching method based on angle matching according to an embodiment of the present invention;
FIG. 2 is a schematic representation of the turntable zero-position calibration;
FIG. 3 is a schematic plan view;
FIG. 4 is a schematic view of a scene scan;
FIG. 5 is a schematic diagram of the image processing scheme.
Detailed Description
In order that the above-described features and advantages of the present invention may be more clearly understood, specific embodiments of the invention are described in detail below with reference to the accompanying drawings.
The invention discloses an image stitching method based on angle matching. Its core idea is that image stitching does not rely on feature points: the angle information of the turntable at the moment an image is acquired is used to calculate the range that the image occupies on the plane to be stitched, the internal and external parameters are used to reversely solve the image frame pixel point corresponding to each point on the stitching plane, and that pixel value is filled into the stitching plane.
Referring to fig. 1, a flowchart of one embodiment of an angle-matching-based image stitching method of the present invention is shown. The image stitching method based on angle matching in the embodiment comprises the following steps:
and step 1, calibrating the camera by using a checkerboard, and calculating an internal reference matrix and a distortion coefficient of the camera.
In this embodiment, calibration is performed with a checkerboard calibration plate: each square is 30×30 mm, the pattern contains 12×9 squares, the accuracy is 0.1 mm, and the material is an aluminium plate; to guarantee surface flatness, the checkerboard is bonded to plate glass. 15 images are used as input, and the placement of the calibration plate is varied as much as possible during shooting to increase the robustness of the calibration result; the checkerboard size is then input, the transformation matrix from the pixel coordinate system to the world coordinate system is calculated, and the camera internal reference matrix and distortion coefficients are obtained.
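For illustration, a minimal sketch of this calibration step, assuming Python with OpenCV (the image folder and variable names are hypothetical; the board dimensions follow this embodiment):

    import glob
    import cv2
    import numpy as np

    pattern_size = (11, 8)    # inner corners of a 12 x 9-square checkerboard
    square_size = 30.0        # mm, per this embodiment

    # Object points of one board view; Z_w = 0 on the board plane.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib/*.png"):     # hypothetical folder with the 15 calibration images
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # Zhang-style calibration: returns the internal reference matrix and distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    print("reprojection RMS:", rms)
    print("internal reference matrix:\n", K)
    print("distortion coefficients (k1, k2, p1, p2, k3):", dist.ravel())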
Step 2, designating the angle range to be stitched and scanning the scene with the set step size.
In the present embodiment, the angular scanning ranges in the x and y directions are defined as range_x_left = range_x_right = 95 and range_y_up = range_y_down = 30, in degrees.
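A small sketch of how the scan grid over this range could be laid out (the 10-degree step below is an illustrative assumption; the embodiment does not state the step value here):

    import numpy as np

    range_x_left = range_x_right = 95.0   # degrees, as in this embodiment
    range_y_up = range_y_down = 30.0      # degrees
    step = 10.0                           # assumed scan step in degrees (not specified here)

    # Raster-scan the rectangular angular range; each (yaw, pitch) pair is one small
    # rectangular area, i.e. one image frame acquired by the turntable.
    yaws = np.arange(-range_x_left, range_x_right + 1e-6, step)
    pitches = np.arange(-range_y_down, range_y_up + 1e-6, step)
    scan_positions = [(yaw, pitch) for pitch in pitches for yaw in yaws]
    print(len(scan_positions), "frames to acquire")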
Step 3 specifically comprises the following steps:
s301, converting coordinates from a pixel coordinate system to normalized plane coordinates by using camera internal parameter calculation:
where (norm_x, norm_y) is the normalized plane coordinate, (u, v) is the pixel coordinate, fx, fy, cx, cy is the reference matrix parameter, which has been found in step 1.
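This conversion is the direct inverse of the intrinsic projection; a one-line sketch (cv2.undistortPoints could be used instead when distortion must be removed at the same time):

    def pixel_to_normalized(u, v, fx, fy, cx, cy):
        """Map an (undistorted) pixel coordinate onto the camera's normalized plane (z = 1)."""
        return (u - cx) / fx, (v - cy) / fy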
S302, selecting the four corner points (upper-left, upper-right, lower-left and lower-right) of the image, and determining from them the scale factor scale_z from the camera normalized plane to the plane to be stitched by cylindrical projection.
step 4, determining the position of the image to be spliced in the initial camera coordinate system by the following formula:
x_left=scale_z*range_x_left
x_right=scale_z*range_x_right
y_up=scale_z*tan(range_y_up)
y_down=scale_z*tan(range_y_down)
wherein range_x_left, range_x_right, range_y_up and range_y_down are defined in step 2; the coordinates of the upper-left corner of the plane to be stitched in the initial camera coordinate system are therefore (-x_left, -y_up, scale_z), the total width is W = [x_left + x_right], and the total height is H = [y_up + y_down].
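Taken together, these formulas give the extent and pixel size of the stitching plane. A minimal sketch, assuming the angle ranges of step 2 are converted to radians and scale_z (from step 3) acts as a cylinder radius in pixels:

    import math

    def stitch_plane_geometry(scale_z, range_x_left, range_x_right, range_y_up, range_y_down):
        """Extent of the plane to be stitched in the initial camera coordinate system; angle ranges in radians."""
        x_left = scale_z * range_x_left            # horizontal extent: arc length on the cylinder
        x_right = scale_z * range_x_right
        y_up = scale_z * math.tan(range_y_up)      # vertical extent: tangent of the pitch range
        y_down = scale_z * math.tan(range_y_down)
        top_left = (-x_left, -y_up, scale_z)       # upper-left corner of the plane in the initial camera frame
        W = int(round(x_left + x_right))           # total width of the panorama in pixels
        H = int(round(y_up + y_down))              # total height
        return top_left, W, H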
Step 5 specifically comprises the following steps:
s501, calculating a rotation matrix corresponding to the current image frame by the following formula:
wherein (coder_x, coder_y) is the encoder count of the current image; (coder_x0, coder_y0) is the mean of the encoder counts over all images, and the corresponding camera pose is taken as the initial camera position; verticalPos is the y-motor encoder count when the camera looks straight ahead and the x and y motors are perpendicular to each other (see FIG. 2 for an example); code2radian converts motor encoder counts into radians, and coding_circle is the number of encoder counts per revolution of the encoder.
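A sketch of one plausible way to build such a rotation matrix from the encoder readings; the yaw-then-pitch axis order, the sign conventions and the use of coder_y0 as the pitch reference are assumptions rather than the exact S501 formula:

    import numpy as np

    def rotation_from_encoders(coder_x, coder_y, coder_x0, coder_y0, coding_circle):
        """Rotation of the current frame relative to the initial camera pose, built from turntable encoder counts."""
        code2radian = 2.0 * np.pi / coding_circle      # encoder counts -> radians
        yaw = (coder_x - coder_x0) * code2radian       # rotation about the vertical (y) axis
        pitch = (coder_y - coder_y0) * code2radian     # rotation about the horizontal (x) axis; verticalPos could serve as an absolute reference instead
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        R_yaw = np.array([[cy, 0.0, sy],
                          [0.0, 1.0, 0.0],
                          [-sy, 0.0, cy]])
        R_pitch = np.array([[1.0, 0.0, 0.0],
                            [0.0, cp, -sp],
                            [0.0, sp, cp]])
        return R_yaw @ R_pitch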
S502, calculating three-dimensional physical coordinates of a single image on a plane to be spliced by the following formula:
(trans_x, trans_y, trans_z)^T = R * (Intrins^{-1} * (u, v, 1)^T + offset) - offset
wherein (trans_x, trans_y, trans_z) is the physical three-dimensional coordinate on the plane to be stitched; Intrins is the internal reference matrix of the camera obtained in step 1; offset is the displacement of the camera optical center from the rotation center, expressed in the zero-position camera coordinate system.
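The S502 formula maps a pixel of the current frame to a physical point expressed in the initial camera frame and transcribes directly into NumPy (a sketch; K denotes the internal reference matrix Intrins, and offset is a 3-vector):

    import numpy as np

    def backproject_to_stitch_plane(u, v, K, R, offset):
        """(trans_x, trans_y, trans_z)^T = R * (K^{-1} * (u, v, 1)^T + offset) - offset."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # point on the normalized plane of the current frame
        return R @ (ray + offset) - offset               # rotate about the turntable center rather than the optical center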
S503, calculating camera coordinates of the image on the plane to be spliced by the following formula:
where (cam_x, cam_y, cam_z) are the camera coordinates on the plane to be stitched.
S504, calculating image coordinates of four corner points of the image on a plane to be spliced by the following formula:
img_x=scale_z*(range_x_left-angle)
img_y=cam_y+y_up
wherein angle is the angle of the image relative to the center point, and (img_x, img_y) are the image coordinates on the stitching plane corresponding to the corner point.
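A sketch of the S503/S504 mapping: since the cylindrical form of (cam_x, cam_y) is not spelled out above, the expression used for cam_y here is an assumption consistent with the cylindrical projection of step 3, while the img_x and img_y lines follow the S504 formulas directly (range_x_left in radians, sign conventions assumed):

    import numpy as np

    def to_stitch_plane_coords(trans, scale_z, range_x_left, y_up):
        """Map a physical point (trans_x, trans_y, trans_z) to pixel coordinates on the stitching plane.
        The cylindrical form of cam_y is an assumed reconstruction of S503; img_x/img_y follow S504."""
        trans_x, trans_y, trans_z = trans
        angle = np.arctan2(trans_x, trans_z)                    # yaw of the point about the turntable axis (sign convention assumed)
        cam_y = scale_z * trans_y / np.hypot(trans_x, trans_z)  # height on the cylinder (assumed S503)
        img_x = scale_z * (range_x_left - angle)                # S504 (range_x_left in radians)
        img_y = cam_y + y_up                                    # S504
        return img_x, img_y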
Step 6, filling pixel points in the image frame into a plane to be spliced, which specifically comprises the following steps:
s601, calculating image coordinates of four corner points of an image on a plane to be spliced according to the step S504, solving a maximum inscribed rectangle by using 8 boundary points after obtaining the image coordinates of the four corner points, and calculating xyz coordinates of the rectangle under a camera coordinate system of the plane to be spliced according to the following formula:
y=meshgrid_y_range-y_up
wherein, the mergerrange_x and the mergery range are the range of the x coordinate and the y coordinate of the maximum inscribed rectangle respectively;
s602, generating a grid with x y size, and converting coordinates in the camera coordinate system into coordinates in the panorama using the rotation matrix in step S501.
S603, calculating the normalized plane coordinates, performing distortion correction with the distortion coefficients obtained in step 1, and mapping the normalized plane coordinates into the image coordinate system using the internal reference matrix.
S604, calculating the pixel indices from the image coordinate values of S603, selecting the corresponding pixel points from the image data, and filling the image frames at all positions pixel by pixel into the plane to be stitched to obtain the image segments of the stitching area. In this way, image stitching based on the turntable angle is realized.
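A sketch of the S601-S604 backward-mapping fill, assuming OpenCV: the inscribed-rectangle ranges are taken as given, the offset term of S502 is omitted for brevity, and the inverse cylindrical mapping mirrors the assumed S503/S504 forms above:

    import cv2
    import numpy as np

    def fill_patch(panorama, frame, K, dist, R, scale_z, range_x_left, y_up, x_range, y_range):
        """Backward-map one frame into the panorama (a sketch of S601-S604).
        x_range / y_range bound the maximum inscribed rectangle of the frame's footprint
        on the stitching plane, in panorama pixel coordinates."""
        xs, ys = np.meshgrid(np.arange(*x_range), np.arange(*y_range))           # S602: grid over the rectangle

        # Panorama pixel -> 3D point on the cylinder in the initial camera frame
        # (inverse of the assumed S503/S504 mapping; the offset term of S502 is omitted).
        angle = range_x_left - xs / scale_z
        pts = np.stack([scale_z * np.sin(angle),
                        ys - y_up,
                        scale_z * np.cos(angle)], axis=-1).astype(np.float64)

        # Rotate back into the current frame's camera coordinates (inverse of the S501 rotation).
        cam = pts @ R                                                            # equals R^T @ p applied per point

        # S603: normalized plane, distortion and internal reference matrix via cv2.projectPoints.
        img_pts, _ = cv2.projectPoints(cam.reshape(-1, 1, 3), np.zeros(3), np.zeros(3), K, dist)
        map_xy = img_pts.reshape(xs.shape + (2,)).astype(np.float32)

        # S604: sample the frame pixel by pixel and write the patch into the panorama.
        patch = cv2.remap(frame, map_xy[..., 0], map_xy[..., 1], cv2.INTER_LINEAR)
        panorama[y_range[0]:y_range[1], x_range[0]:x_range[1]] = patch
        return panorama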

Claims (7)

1. An image stitching method based on angle matching is characterized by comprising the following steps:
step 1, calibrating a camera, and calculating an internal reference matrix and a distortion coefficient of the camera;
step 2, designating an angle range required to be spliced, and scanning a scene with a set step length;
step 3, calculating a scale factor from a camera normalized plane to a plane to be spliced by using cylindrical projection;
step 4, calculating the size of the images to be spliced according to the specified splicing angle range, and calculating the position of the images to be spliced in an initial camera coordinate system;
step 5, using the rotation angle of the pan-tilt head, calculating the coordinates on the image to be stitched that correspond to each of the four corner points of the current image frame, so as to determine the range of the panorama to be stitched covered by the current image frame;
step 6, reversely solving, for each pixel point P_m within the panoramic image range to be stitched, the corresponding image frame pixel point P_c, and filling the pixel value of P_c into P_m.
2. The image stitching method based on angle matching according to claim 1, wherein step 1 specifically comprises:
the pixel coordinates from the physical coordinates in the world coordinate system to the pixel coordinate system are obtained by:
Z_c * (u, v, 1)^T = A * E * (X_w, Y_w, Z_w, 1)^T
wherein Z_c is a scale factor, (u, v) is the pixel coordinate of the target point in the undistorted pixel coordinate system, and (X_w, Y_w, Z_w) is the physical coordinate of the target point in the world coordinate system; A is the internal reference matrix,
A = [ f_x 0 c_x ; 0 f_y c_y ; 0 0 1 ],
where f_x and f_y are the normalized focal lengths on the x-axis and y-axis respectively and (c_x, c_y) is the principal point of the image; E = [ R t ] is the external reference matrix composed of the rotation matrix R and the translation vector t.
calibration of camera parameters includes, but is not limited to, using a Zhang Zhengyou calibration method in which the world coordinate system is fixed on the checkerboard of the input first image to be detected, so that the physical coordinate Z of any corner on the checkerboard w The side length of the grid on the checkerboard is known, and a plurality of (u, v) and (X) are obtained by calibrating the corner points w ,Y v ,Z w =0), thereby obtaining the parameters of the reference matrix a and the distortion coefficient k 1 ,k 2 ,k 3
3. The image stitching method based on angle matching according to claim 1, wherein in step 2, when the scene images are acquired, the scene is divided into a plurality of small rectangular areas with a fixed step size and scanned one by one, and the scanning step size can be adjusted appropriately according to different environmental requirements.
Because the camera is fixed on the turntable, the pose parameters of the turntable are taken as the pose parameters of the camera, i.e. the external parameters of the camera. The turntable is a two-axis bracket turntable that can adjust the pitch and yaw angles of the camera and includes an encoder that records the angular position corresponding to each image frame; at every scan the current image frame and its corresponding motor encoder counts are recorded, and the encoder counts serve as an absolute description of the image position.
4. The image stitching method based on angle matching according to claim 1, wherein step 3 specifically comprises:
s301, calculating the coordinate conversion from the pixel coordinate system down to the normalized plane coordinate by:
where (norm_x, norm_y) is the normalized plane coordinate, (u, v) is the pixel coordinate, fx, fy, cx, cy is the reference matrix parameter, which has been found in step 1.
S302, selecting the four corner points (upper-left, upper-right, lower-left and lower-right) of the image, and determining from them the scale factor scale_z from the camera normalized plane to the plane to be stitched by cylindrical projection.
5. The image stitching method based on angle matching according to claim 1, wherein in step 4 the position of the image to be stitched in the initial camera coordinate system is determined by:
x_left=scale_z*range_x_left
x_right=scale_z*range_x_right
y_up=scale_z*tan(range_y_up)
y_down=scale_z*tan(range_y_down)
wherein range_x_left, range_x_right, range_y_up and range_y_down are the specified stitching angles in the x and y directions, respectively; the coordinates of the upper-left corner of the plane to be stitched in the initial camera coordinate system are therefore (-x_left, -y_up, scale_z), the total width is W = [x_left + x_right], and the total height is H = [y_up + y_down].
6. The image stitching method based on angle matching according to claim 1, wherein step 5 specifically comprises:
s501, calculating a rotation matrix corresponding to the current image frame by the following formula:
wherein (coder_x, coder_y) is the encoder count of the current image; (coder_x0, coder_y0) is the mean of the encoder counts over all images, and the corresponding camera pose is taken as the initial camera position; verticalPos is the y-motor encoder count when the camera looks straight ahead and the x and y motors are perpendicular to each other; code2radian converts motor encoder counts into radians, and coding_circle is the number of encoder counts per revolution of the encoder.
S502, calculating three-dimensional physical coordinates of a single image on a plane to be spliced by the following formula:
(trans_x, trans_y, trans_z)^T = R * (Intrins^{-1} * (u, v, 1)^T + offset) - offset
wherein (trans_x, trans_y, trans_z) is the physical three-dimensional coordinate on the plane to be stitched; Intrins is the internal reference matrix of the camera obtained in step 1; offset is the displacement of the camera optical center from the rotation center, expressed in the zero-position camera coordinate system.
S503, calculating camera coordinates of the image on the plane to be spliced by the following formula:
wherein (cam_x, cam_y, cam_z) are the camera coordinates and scale_z is the scale factor obtained in step 3.
S504, calculating image coordinates of four corner points of the image on a plane to be spliced by the following formula:
img_x=scale_z*(range_x_left-angle)
img_y=cam_y+y_up
wherein angle is the angle of the image relative to the center point, and (img_x, img_y) are the image coordinates corresponding to the corner point.
7. The image stitching method based on angle matching according to claim 1, wherein step 6 specifically comprises:
s601, calculating image coordinates of four corner points of an image on a plane to be spliced according to the step S504, solving a maximum inscribed rectangle by using 8 boundary points after obtaining the image coordinates of the four corner points, and calculating xyz coordinates of the rectangle under a camera coordinate system of the plane to be spliced according to the following formula:
y=meshgrid_y_range-y_up
wherein, the mergrid_range_x and the mergrid_y_range are the ranges of the x coordinate and the y coordinate of the maximum inscribed rectangle respectively.
S602, generating a meshgrid over these x and y ranges, and converting the coordinates from the camera coordinate system into coordinates in the panorama using the rotation matrix of step S501.
S603, calculating the normalized plane coordinates, performing distortion correction with the distortion coefficients obtained in step 1, and mapping the normalized plane coordinates into the image coordinate system using the internal reference matrix.
S604, calculating the pixel indices from the image coordinate values of S603, selecting the corresponding pixel points from the image data, and filling the image frames at all positions pixel by pixel into the plane to be stitched to obtain the image segments of the stitching area.
CN202310869869.9A 2023-07-17 2023-07-17 Image stitching method based on angle matching Pending CN116883242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310869869.9A CN116883242A (en) 2023-07-17 2023-07-17 Image stitching method based on angle matching

Publications (1)

Publication Number Publication Date
CN116883242A true CN116883242A (en) 2023-10-13

Family

ID=88271089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310869869.9A Pending CN116883242A (en) 2023-07-17 2023-07-17 Image stitching method based on angle matching

Country Status (1)

Country Link
CN (1) CN116883242A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination