CN111507894A - Image splicing processing method and device - Google Patents
Image splicing processing method and device
- Publication number
- CN111507894A (application number CN202010307821.5A)
- Authority
- CN
- China
- Prior art keywords
- images
- dome camera
- image
- coordinates
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The invention provides an image splicing processing method and device, wherein the method comprises the following steps: adjusting the rotation angle of a dome camera in the horizontal direction under a target magnification, collecting a plurality of images, and determining a target focal length corresponding to the target magnification; projecting the coordinate points of the plurality of images onto a spherical surface; back-projecting part of the pixels in the overlapping area on the spherical surface into the plurality of images, and determining a reprojection error between adjacent images after back projection; adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle; and back-projecting the projection points on the spherical surface onto a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the plurality of images. This solves the problems in the related art that the scale-invariant feature transform (SIFT) algorithm is relatively time-consuming, that the images to be spliced need to have abundant texture features, and that splicing cannot work normally in regions with few features.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image stitching processing method and device.
Background
The Pan-Tilt-Zoom (PTZ) camera can adjust focal length, posture and the like, has the characteristics of flexibility, large field range, strong adaptability to illumination conditions and the like, and is widely applied to the field of monitoring and security protection.
Camera calibration is a technology for relating three-dimensional geometric positions to their corresponding points in an image, and can be used for distortion correction, three-dimensional reconstruction and the like. Existing research mainly addresses camera calibration under a fixed focal length: a high-precision three-dimensional target or a planar calibration plate is needed, and an accurate calibration result is then obtained by means of the linear least-squares method and nonlinear optimization. However, owing to external environmental factors, internal factors and the like, the parameters of the camera may change, and a zoom camera sometimes needs its internal parameters calibrated at different focal lengths. Therefore, in outdoor working conditions, calibrating the camera with a reference calibration object is difficult to realize.
At present, image stitching technology is widely applied. Its general steps include detection and matching of feature points in all images, purification of the feature points and elimination of mismatches, estimation and refinement of camera internal and external parameters, image projection transformation, image fusion and the like. Feature point detection and matching are time-consuming, feature-point-based stitching places certain requirements on image quality, and accurate stitching cannot be achieved for scenes with little texture, such as sky or walls.
The related technology provides an image splicing method, which comprises the steps of extracting features of a reference image to obtain a first feature point set, and extracting features of an image to be registered to obtain a second feature point set; performing character recognition on the reference image to obtain at least one first character area, and performing character recognition on the image to be registered to obtain at least one second character area; removing the feature points in the first character area from the first feature point set to obtain a third feature point set, and removing the feature points in the second character area from the second feature point set to obtain a fourth feature point set; matching the feature points in the third feature point set with the feature points in the fourth feature point set to obtain model parameters of the image transformation model; and registering the image to be registered and the reference image according to the model parameters, and then splicing to obtain a spliced image.
In this scheme, the scale-invariant feature transform (SIFT) algorithm is relatively time-consuming, and the images to be spliced need to have abundant texture features, such as characters in the images. For regions where features are sparse, this approach may not work properly.
For the problems in the related art that the SIFT algorithm is relatively time-consuming, that images to be spliced need abundant texture features, and that stitching cannot work normally in regions with few features, no solution has yet been provided.
Disclosure of Invention
The embodiment of the invention provides an image splicing processing method and device, which at least solve the problems in the related art that the scale-invariant feature transform (SIFT) algorithm is relatively time-consuming, that images to be spliced need abundant texture features, and that stitching cannot work normally in regions with few features.
According to an embodiment of the present invention, there is provided an image stitching processing method, including:
adjusting the rotation angle of a dome camera in the horizontal direction under a target magnification, collecting a plurality of images, and determining a target focal length corresponding to the target magnification, wherein an overlapping area exists between adjacent images in the plurality of images;
projecting coordinate points of the images to a spherical surface according to a pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
back projecting part of pixels in the overlapped area on the spherical surface into the plurality of images, and determining a reprojection error between adjacent images after back projection;
adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and reversely projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the plurality of images.
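As an illustrative sketch of the projection in the second step and the back projection in the final step (Python with NumPy; the parameter values below are hypothetical examples, and the pinhole relation used is an assumption, since the patent's own formulas are rendered only as images), a round trip through the spherical surface with unchanged focal length and attitude recovers the original pixel:

```python
import numpy as np

def sphere_roundtrip_demo():
    # Forward-project a pixel to a world ray via the attitude matrix R and
    # the intrinsic matrix K, then back-project it to the image plane with
    # the same parameters.  With unchanged focal length and roll angle the
    # round trip must return the original pixel coordinates.
    f, cx, cy = 1000.0, 640.0, 360.0          # hypothetical intrinsics
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    a = np.deg2rad(30)                        # horizontal rotation angle
    R = np.array([[np.cos(a), 0, np.sin(a)],
                  [0, 1, 0],
                  [-np.sin(a), 0, np.cos(a)]])
    x, y = 700.0, 400.0                       # a pixel in one image
    ray = R @ np.linalg.inv(K) @ np.array([x, y, 1.0])  # to the sphere
    back = K @ R.T @ ray                                # back to the plane
    return back[:2] / back[2]
```

The round trip only becomes non-trivial once the focal length and roll angle are adjusted, which is exactly what the reprojection-error step measures.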
Optionally, adjusting the target focal length and the roll angle of the dome camera according to the reprojection error, and obtaining the adjusted focal length and the adjusted roll angle includes:
and adjusting the target focal length and the roll angle of the ball machine to obtain the adjusted focal length and the adjusted roll angle, so that the remapping error is minimum.
Optionally, projecting the coordinate points of the plurality of images onto a spherical surface according to a pre-established correspondence between the dome camera coordinates and the world coordinates of the plurality of images comprises:
establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the rotation angle;
and projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images.
Optionally, establishing the correspondence between the dome camera coordinates and the world coordinates of the plurality of images according to the rotation angle includes:
determining a horizontal yaw attitude matrix of the dome camera according to the rotation angle;
acquiring a pitch angle attitude matrix and a roll angle attitude matrix of the dome camera, wherein the roll angle of the dome camera is set to 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix;
and establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the internal reference matrix of the dome camera, wherein the internal reference matrix of the dome camera is determined according to the target focal length.
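The composition of the attitude matrix described above can be sketched as follows (Python/NumPy). The multiplication order yaw·pitch·roll and the axis conventions are assumptions, since the patent's concrete formulas are given only as images:

```python
import numpy as np

def rot_yaw(a):
    # Rotation about the vertical axis (horizontal pan of the dome camera)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_pitch(b):
    # Rotation about the horizontal axis (tilt)
    c, s = np.cos(b), np.sin(b)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_roll(g):
    # Rotation about the optical axis (roll, set to 0 initially)
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def attitude_matrix(yaw, pitch, roll=0.0):
    # Compose the dome camera attitude from the three angle matrices;
    # the composition order is an assumption (the patent does not state it).
    return rot_yaw(yaw) @ rot_pitch(pitch) @ rot_roll(roll)

def intrinsic_matrix(f, cx, cy):
    # Internal reference (pinhole) matrix determined by the target focal
    # length f and the optical-axis offset (cx, cy).
    return np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
```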
Optionally, projecting the coordinate points of the plurality of images onto a spherical surface according to the correspondence between the dome camera coordinates and the world coordinates of the plurality of images comprises:
determining a horizontal field angle after image splicing according to the rotation angle and the horizontal field angle of the dome camera;
determining the width of the spliced image according to the widths of the plurality of images;
and projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images, the width of the spliced images and the horizontal field angle of the spliced images.
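A hedged sketch of this projection follows (the exact formula is rendered only as an image in the original, so the equirectangular mapping below, with scale w_new/ξ pixels per radian, is an assumption consistent with the surrounding definitions of the stitched-image width and horizontal field angle):

```python
import numpy as np

def project_to_sphere(x, y, K, R, w_new, xi):
    # Back-project pixel (x, y) to a world ray via the attitude matrix R
    # and the internal reference matrix K, then map the ray to spherical
    # coordinates (u, v) scaled by the stitched-image width w_new and
    # horizontal field angle xi (radians).
    ray = R @ np.linalg.inv(K) @ np.array([x, y, 1.0])
    X, Y, Z = ray
    theta = np.arctan2(X, Z)             # horizontal angle on the sphere
    phi = np.arctan2(Y, np.hypot(X, Z))  # vertical angle on the sphere
    scale = w_new / xi                   # pixels per radian
    return scale * theta, scale * phi
```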
Optionally, the method further comprises:
establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the internal reference matrix of the dome camera in the following mode:
wherein (X, Y, Z) is the world coordinate, (x, y) is the dome camera coordinate of the plurality of images, R is the attitude matrix, and K is the internal reference matrix of the dome camera;
projecting coordinate points of the plurality of images to a spherical surface according to the corresponding relation between the dome coordinates and world coordinates of the plurality of images, the width of the spliced image and the horizontal field angle of the spliced image in the following manner:
wherein w_new is the width of the spliced image, ξ is the horizontal field angle of the spliced image, and u and v are the projection coordinate values of the dome camera coordinates of the plurality of images projected onto the spherical surface.
Optionally, before back-projecting a part of the pixels in the overlapping area on the spherical surface into the plurality of images and determining a reprojection error between adjacent images after back projection, the method further comprises:
recording projection coordinate values of the upper left corner coordinate points and the lower right corner coordinate points of the plurality of images in the spherical surface;
determining an average value of the projection coordinate value of the lower right corner coordinate point of a first image and the projection coordinate value of the upper left corner coordinate point of a second image in the plurality of images, and determining the average value as the center coordinate of the overlapping area of the two adjacent images, wherein the first image and the second image are two adjacent images, and the center coordinates for the first image and the last image in the plurality of images are the projection coordinate value of the upper left corner coordinate point and the projection coordinate value of the lower right corner coordinate point respectively;
and acquiring the partial pixel points in the overlapping area according to the central coordinate of the overlapping area.
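The corner-averaging step above can be sketched as follows (Python/NumPy; the data layout of the corner list is an assumption):

```python
import numpy as np

def overlap_centers(corners):
    # corners: per image, a pair (top_left_uv, bottom_right_uv) of corner
    # projections already mapped onto the sphere.  The centre of the
    # overlapping area between image i and image i+1 is the average of
    # image i's bottom-right projection and image i+1's top-left
    # projection; the first and last entries use the first image's
    # top-left and the last image's bottom-right projections, as stated.
    centers = [np.asarray(corners[0][0], float)]
    for (_, br_a), (tl_b, _) in zip(corners, corners[1:]):
        centers.append((np.asarray(br_a, float) + np.asarray(tl_b, float)) / 2)
    centers.append(np.asarray(corners[-1][1], float))
    return centers
```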
Optionally, the back projecting the projection point on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a stitched image of the plurality of images includes:
determining an adjusted attitude matrix and an adjusted internal reference matrix by using the adjusted roll angle and the adjusted focal length;
and reversely projecting the projection points on the spherical surface to the plane according to the adjusted attitude matrix and the adjusted internal reference matrix to obtain a spliced image of the plurality of images.
Optionally, the method further comprises:
and reversely projecting the projection points on the spherical surface to the plane according to the adjusted attitude matrix and the adjusted internal reference matrix in the following way to obtain a spliced image of the plurality of images:
wherein (X, Y, Z) is the world coordinate, R' is the adjusted attitude matrix, K' is the adjusted internal reference matrix, and u, v are the projection coordinate values of the coordinate points of the plurality of images after being projected onto the spherical surface.
Optionally, determining the target focal length corresponding to the target magnification includes:
establishing a corresponding relation between the multiplying power and the focal length of the dome camera;
and determining a target focal length corresponding to the target magnification according to the corresponding relation between the magnification and the focal length.
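Once the magnification–focal length table has been established, looking up the target focal length for a target magnification can be sketched as below. The linear interpolation between sampled magnifications is an assumption; the patent only states that a correspondence is established and consulted:

```python
import numpy as np

def focal_for_magnification(table, zoom):
    # table: dict mapping sampled zoom magnifications to focal lengths
    # estimated per the calibration steps described later.  Intermediate
    # magnifications are interpolated linearly (an assumption).
    zooms, focals = zip(*sorted(table.items()))
    return float(np.interp(zoom, zooms, focals))
```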
Optionally, the establishing of the corresponding relationship between the magnification and the focal length of the dome camera includes:
acquiring a third image through the dome camera under different magnifications, and acquiring a fourth image after controlling the dome camera to rotate for a preset angle in the horizontal direction and the vertical direction, wherein the third image and the fourth image have an overlapping area;
acquiring a first characteristic point from the third image, and acquiring a second characteristic point from the fourth image;
determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera;
acquiring a preset number of matching points from the third image and the fourth image through feature extraction, and determining a homography matrix according to the preset number of matching points, wherein the preset number is an integer greater than or equal to 4;
determining the focal length of the dome camera according to the homography matrix and the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point;
and establishing a corresponding relation between the magnification and the focal length according to the focal lengths corresponding to different magnifications.
Optionally, determining the correspondence between the coordinates of the first feature point and the coordinates of the second feature point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera includes:
determining a horizontal yaw attitude matrix of the dome camera and a pitch angle attitude matrix of the dome camera according to the preset angles of rotation of the dome camera in the horizontal direction and the vertical direction respectively;
determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix, wherein the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera in the following way:
wherein p₁ is the coordinate of the first feature point, p₂ is the coordinate of the second feature point, G is a target matrix, R is the attitude matrix, and F is determined according to the internal reference matrix.
optionally, determining the focal length of the dome camera according to the correspondence between the homography matrix and the coordinates of the first feature point and the coordinates of the second feature point includes:
determining the target matrix from the homography matrix by: G = CHC⁻¹, wherein H is the homography matrix, C is determined from the internal reference matrix, and c_x and c_y represent the offset of the optical axis of the dome camera in the image coordinate system;
determining the focal length of the dome camera according to the target matrix and the attitude matrix in the following way:
wherein f is the focal length, G(i, j) and R(i, j) represent the elements in the ith row and jth column of G and R respectively, and i and j are positive integers.
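The focal-length recovery above can be sketched as follows (Python/NumPy). The patent's formula images are not reproduced, so the concrete form of C (chosen here to cancel the optical-axis offset) and the choice of which G/R element ratios to use are assumptions; the sketch relies on the standard pure-rotation relation H = K·R·K⁻¹:

```python
import numpy as np

def focal_from_homography(H, R, cx, cy):
    # Assumes the homography between two views of a purely rotating camera
    # is H = K R K^{-1} with K = [[f, 0, cx], [0, f, cy], [0, 0, 1]].
    # With C chosen to remove the optical-axis offset (a sketch of the
    # patent's C, whose exact form is not reproduced), G = C H C^{-1}
    # reduces to F R F^{-1} with F = diag(f, f, 1), so f can be read off
    # as a ratio of corresponding entries of G and R.
    C = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]])
    G = C @ H @ np.linalg.inv(C)
    ratios = []
    for i in (0, 1):
        if abs(R[i, 2]) > 1e-6:          # G(i,3) = f * R(i,3)
            ratios.append(G[i, 2] / R[i, 2])
        if abs(G[2, i]) > 1e-6:          # G(3,i) = R(3,i) / f
            ratios.append(R[2, i] / G[2, i])
    # Average the well-conditioned ratios for robustness to noise.
    return float(np.mean(ratios))
```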
According to another embodiment of the present invention, there is also provided an image stitching processing apparatus including:
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for adjusting the rotation angle of the dome camera in the horizontal direction under the target magnification, acquiring a plurality of images and determining the target focal length corresponding to the target magnification, and a superposition area exists between adjacent images in the plurality of images;
the projection module is used for projecting coordinate points of the images to a spherical surface according to the pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
the first back projection module is used for back projecting partial pixels in the overlapping area on the spherical surface to the plurality of images and determining a reprojection error between adjacent images after back projection;
the adjusting module is used for adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and the second back projection module is used for back projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the plurality of images.
Optionally, the adjusting module is further configured to adjust the target focal length and the roll angle of the dome camera to obtain the adjusted focal length and the adjusted roll angle such that the reprojection error is minimized.
Optionally, the projection module comprises:
the first establishing submodule is used for establishing the corresponding relation between the dome camera coordinates and the world coordinates of the images according to the rotation angle;
and the projection submodule is used for projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the coordinates of the dome camera of the images and the world coordinates.
Optionally, the first establishing sub-module includes:
the first determining unit is used for determining a horizontal yaw attitude matrix of the dome camera according to the rotating angle;
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a pitch angle attitude matrix and a roll angle attitude matrix of the dome camera, the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
the second determining unit is used for determining the attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix;
the first establishing unit is used for establishing the corresponding relation between the dome camera coordinates and the world coordinates of the images according to the attitude matrix and the internal reference matrix of the dome camera, wherein the internal reference matrix of the dome camera is determined according to the target focal length.
Optionally, the projection sub-module comprises:
the third determining unit is used for determining a horizontal field angle after image splicing according to the rotation angle and the horizontal field angle of the dome camera;
a fourth determining unit, configured to determine the width of the stitched image according to the widths of the multiple images;
and the projection unit is used for projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images, the width of the spliced images and the horizontal field angle of the spliced images.
Optionally, the first establishing unit is further configured to establish a correspondence relationship between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the reference matrix of the dome camera by:
wherein (X, Y, Z) is the world coordinate, (x, y) is the dome camera coordinate of the plurality of images, R is the attitude matrix, and K is the internal reference matrix of the dome camera;
the projection unit is further configured to project coordinate points of the plurality of images to a spherical surface according to a correspondence between dome coordinates and world coordinates of the plurality of images, a width of the stitched image, and a horizontal field angle of the stitched image, in the following manner:
wherein w_new is the width of the spliced image, ξ is the horizontal field angle of the spliced image, and u and v are the projection coordinate values of the dome camera coordinates of the plurality of images projected onto the spherical surface.
Optionally, the apparatus further comprises:
the recording module is used for recording projection coordinate values of the coordinate points at the upper left corner and the lower right corner of the plurality of images in the spherical surface;
a second determining module, configured to determine an average of a projection coordinate value of the lower-right corner coordinate point of a first image and a projection coordinate value of the upper-left corner coordinate point of a second image in the multiple images, and determine the average as a center coordinate of an overlapping area between two adjacent images, where the first image and the second image are the two adjacent images, and the center coordinates of the first image and the last image in the multiple images are the projection coordinate value of the upper-left corner coordinate point and the projection coordinate value of the lower-right corner coordinate point, respectively;
and the acquisition module is used for acquiring the partial pixel points in the overlapping area according to the central coordinate of the overlapping area.
Optionally, the second back projection module comprises:
the third determining submodule is used for determining an adjusted attitude matrix and an adjusted internal reference matrix by using the adjusted roll angle and the adjusted focal length;
and the back projection submodule is used for back projecting the projection points on the spherical surface into the plane according to the adjusted attitude matrix and the adjusted internal reference matrix to obtain a spliced image of the plurality of images.
Optionally, the back projection sub-module is further configured to back-project the projection points on the spherical surface into the plane according to the adjusted attitude matrix and the adjusted internal reference matrix in the following manner, to obtain a stitched image of the plurality of images:
wherein (X, Y, Z) is the world coordinate, R' is the adjusted attitude matrix, K' is the adjusted internal reference matrix, and u, v are the projection coordinate values of the coordinate points of the plurality of images after being projected onto the spherical surface.
Optionally, the first determining module includes:
the second establishing sub-module is used for establishing the corresponding relation between the multiplying power and the focal length of the dome camera;
and the fourth determining submodule is used for determining the target focal length corresponding to the target magnification according to the corresponding relation between the magnification and the focal length.
Optionally, the second establishing sub-module includes:
the acquisition unit is used for acquiring a third image through the dome camera under different magnifications respectively, and acquiring a fourth image after controlling the dome camera to rotate for a preset angle in the horizontal direction and the vertical direction, wherein the third image and the fourth image have an overlapped area;
a second acquiring unit, configured to acquire a first feature point from the third image, and acquire a second feature point from the fourth image;
a fifth determining unit, configured to determine, according to an internal reference matrix of the dome camera and a posture matrix of the dome camera, a correspondence between the coordinates of the first feature point and the coordinates of the second feature point;
a sixth determining unit, configured to obtain a predetermined number of matching points from the third image and the fourth image through feature extraction, and determine a homography matrix according to the predetermined number of matching points, where the predetermined number is an integer greater than or equal to 4;
a seventh determining unit, configured to determine a focal length of the dome camera according to the correspondence between the homography matrix and the coordinates of the first feature point and the coordinates of the second feature point;
and the second establishing unit is used for establishing the corresponding relation between the multiplying power and the focal length according to the focal lengths corresponding to different multiplying powers.
Optionally, the fifth determining unit is further configured to
Determining a horizontal yaw attitude matrix of the dome camera and a pitch angle attitude matrix of the dome camera according to the preset angles of rotation of the dome camera in the horizontal direction and the vertical direction respectively;
determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix, wherein the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera in the following way:
wherein p₁ is the coordinate of the first feature point, p₂ is the coordinate of the second feature point, G is a target matrix, R is the attitude matrix, and F is determined according to the internal reference matrix.
optionally, the seventh determining unit is further configured to
determining the target matrix from the homography matrix in the following way: G = CHC⁻¹, wherein H is the homography matrix, C is determined from the internal reference matrix, and c_x and c_y represent the offset of the optical axis of the dome camera in the image coordinate system;
determining the focal length of the dome camera according to the target matrix and the attitude matrix in the following way:
wherein f is the focal length, G(i, j) and R(i, j) represent the elements in the ith row and jth column of G and R respectively, and i and j are positive integers.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the method, the rotation angle of the dome camera in the horizontal direction is adjusted under a target magnification, a plurality of images are collected, and a target focal length corresponding to the target magnification is determined, wherein an overlapping area exists between adjacent images in the plurality of images; coordinate points of the images are projected onto a spherical surface according to a pre-established correspondence between the dome camera coordinates and the world coordinates of the images, the correspondence being determined according to the rotation angle; part of the pixels in the overlapping area on the spherical surface are back-projected into the plurality of images, and a reprojection error between adjacent images after back projection is determined; the target focal length and the roll angle of the dome camera are adjusted according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle; and the projection points on the spherical surface are back-projected onto a plane according to the adjusted focal length and the adjusted roll angle to obtain a stitched image of the plurality of images. This solves the problems in the related art that the scale-invariant feature transform (SIFT) algorithm is relatively time-consuming, that the images to be stitched must have abundant texture features, and that stitching cannot work normally in regions with few features.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of an image stitching processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an image stitching processing method according to an embodiment of the present invention;
fig. 3 is a block diagram of an image stitching processing apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a hardware structure block diagram of the mobile terminal of the image stitching processing method according to the embodiment of the present invention, as shown in fig. 1, a mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, and optionally, the mobile terminal may further include a transmission device 106 for a communication function and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the image stitching processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, an image stitching processing method operating in the mobile terminal or the network architecture is provided, and fig. 2 is a flowchart of the image stitching processing method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, adjusting the rotation angle of the dome camera in the horizontal direction under a target magnification, collecting a plurality of images, and determining a target focal length corresponding to the target magnification, wherein an overlapping area exists between adjacent images in the plurality of images;
step S204, projecting coordinate points of the images to a spherical surface according to a pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
step S206, back projecting partial pixels in the overlapping area on the spherical surface to the plurality of images, and determining a re-projection error between adjacent images after back projection;
step S208, adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
further, step S208 may specifically include:
and adjusting the target focal length and the roll angle of the ball machine to obtain the adjusted focal length and the adjusted roll angle, so that the remapping error is minimum.
Step S210, back-projecting the projection points on the spherical surface onto a plane according to the adjusted focal length and the adjusted roll angle to obtain a stitched image of the plurality of images.
Through the steps S202 to S210, the problems in the related art that the SIFT algorithm is relatively time-consuming, that the images to be stitched must have abundant texture features, and that stitching cannot work normally in regions with few features can be solved. Based on the camera attitude, the attitude angle and the focal length are optimized, so that stitching does not require a SIFT-like feature extraction and matching method; the method is short in time consumption and works normally in scenes with few textures.
In an embodiment of the present invention, the step S204 may specifically include:
s2041, establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the rotation angle;
further, determining a horizontal yaw attitude matrix of the dome camera according to the rotation angle; acquiring a pitch angle attitude matrix and a roll angle attitude matrix of the ball machine, wherein the roll angle of the ball machine is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the ball machine; determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix; and establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the internal reference matrix of the dome camera, wherein the internal reference matrix of the dome camera is determined according to the target focal length.
Specifically, the corresponding relationship between the dome camera coordinates and the world coordinates of the plurality of images may be established according to the attitude matrix and the reference matrix of the dome camera in the following manner:
wherein (X, Y, Z) is the world coordinate, (x, y) is the dome camera coordinate of the plurality of images, R is the attitude matrix, and K is the internal reference matrix of the dome camera;
the coordinate points of the plurality of images can be projected to a spherical surface according to the correspondence of the dome coordinates and the world coordinates of the plurality of images, the width of the stitched image, and the horizontal field angle of the stitched image by:
wherein w_new is the width of the stitched image, ξ is the horizontal field angle of the stitched image, and u and v are the projection coordinate values of the dome camera coordinates of the plurality of images projected onto the spherical surface.
S2042, projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images.
Further, the step S2042 may specifically include:
determining a horizontal field angle after image splicing according to the rotation angle and the horizontal field angle of the dome camera;
determining the width of the spliced image according to the widths of the plurality of images;
and projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images, the width of the spliced images and the horizontal field angle of the spliced images.
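The projection in S2042 can be illustrated with a small sketch. The patent's formula images are not reproduced in this text, so the sketch below assumes the standard spherical-projection model (a pixel is lifted to a world ray through K and R, and the ray's longitude and latitude are scaled by w_new/ξ pixels per radian); the function name and conventions are illustrative, not taken from the patent.

```python
import numpy as np

def project_to_sphere(pts, K, R, w_new, xi):
    # Lift pixels (x, y) to world rays (X, Y, Z) = R^T K^{-1} (x, y, 1)^T,
    # then map each ray to spherical coordinates scaled by w_new / xi
    # pixels per radian (an assumed but standard convention).
    s = w_new / xi
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
    rays = (R.T @ np.linalg.inv(K) @ homo.T).T
    X, Y, Z = rays[:, 0], rays[:, 1], rays[:, 2]
    u = s * np.arctan2(X, Z)               # longitude on the sphere
    v = s * np.arctan2(Y, np.hypot(X, Z))  # latitude on the sphere
    return np.stack([u, v], axis=1)
```

With the identity attitude, the principal point maps to the origin of the spherical coordinates, which matches the intuition that the optical axis is the panorama's centre.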
In the embodiment of the invention, before the partial pixels in the overlapped area on the spherical surface are reversely projected to the plurality of images and the reprojection error between the adjacent images after the reverse projection is determined, the projection coordinate values of the upper left corner coordinate point and the lower right corner coordinate point of the plurality of images in the spherical surface are recorded; determining an average value of a projection coordinate value of the lower right corner coordinate point of a first image and a projection coordinate value of the upper left corner coordinate point of a second image in the plurality of images, and determining the average value as a center coordinate of a superposition area of two adjacent images, wherein the first image and the second image are two adjacent images, and the center coordinates of the first image and the last image in the plurality of images are the projection coordinate value of the upper left corner coordinate point and the projection coordinate value of the lower right corner coordinate point respectively; and acquiring the partial pixel points in the overlapping area according to the central coordinate of the overlapping area.
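The corner-recording and overlap-centre step above can be sketched as follows; the array layout and the helper name are illustrative assumptions.

```python
import numpy as np

def overlap_centres(tl, br):
    # tl[i], br[i]: projected spherical coordinates of the upper-left and
    # lower-right corner points of image i.  The centre of the overlap of
    # images i and i+1 is the average of br[i] and tl[i+1]; the first and
    # last entries are the upper-left and lower-right projections, as in
    # the text.
    tl, br = np.asarray(tl, float), np.asarray(br, float)
    centres = [tl[0]]
    for i in range(len(tl) - 1):
        centres.append((br[i] + tl[i + 1]) / 2.0)
    centres.append(br[-1])
    return np.array(centres)
```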
In an embodiment of the present invention, the step S210 may specifically include:
determining an adjusted attitude matrix and an adjusted internal reference matrix by using the adjusted roll angle and the adjusted focal length;
and reversely projecting the projection points on the spherical surface to the plane according to the adjusted attitude matrix and the adjusted internal reference matrix to obtain a spliced image of the plurality of images.
Specifically, the projection points on the spherical surface are back-projected into the plane according to the adjusted attitude matrix and the adjusted internal reference matrix in the following manner, so as to obtain a stitched image of the plurality of images:
wherein (X, Y, Z) is the world coordinate, R′ is the adjusted attitude matrix, K′ is the adjusted internal reference matrix, and u, v are the projection coordinate values of the coordinate points of the plurality of images after being projected onto the spherical surface.
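The back projection described here can be sketched as the inverse of the forward spherical projection. Since the patent's formula images are not reproduced in this text, the model below (longitude and latitude scaled by w_new/ξ) and the function name are assumptions consistent with the standard spherical-projection convention.

```python
import numpy as np

def backproject_from_sphere(uv, K, R, w_new, xi):
    # Inverse of the spherical forward projection (a sketch under the same
    # assumed conventions): spherical (u, v) -> unit world ray -> pixel
    # coordinates via K R.  K and R may be the adjusted K' and R'.
    s = w_new / xi
    lon, lat = uv[:, 0] / s, uv[:, 1] / s
    ray = np.stack([np.cos(lat) * np.sin(lon),
                    np.sin(lat),
                    np.cos(lat) * np.cos(lon)])
    pts = (K @ R @ ray).T
    return pts[:, :2] / pts[:, 2:3]       # perspective divide
```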
In an embodiment of the present invention, the step S202 may specifically include:
s2021, establishing a corresponding relation between the multiplying power and the focal length of the dome camera;
further, the method comprises the following steps:
S11, acquiring a third image through the dome camera under different magnifications, and acquiring a fourth image after controlling the dome camera to rotate by a preset angle in the horizontal direction and the vertical direction, wherein the third image and the fourth image have an overlapping area; acquiring a first feature point from the third image, and acquiring a second feature point from the fourth image; and determining the correspondence between the coordinates of the first feature point and the coordinates of the second feature point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera;
specifically, a horizontal yaw angle attitude matrix of the dome camera and a pitch angle attitude matrix of the dome camera are determined according to the preset angles of rotation of the dome camera in the horizontal direction and the vertical direction respectively; determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix, wherein the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera; determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera in the following way:
wherein p₁ is the coordinate of the first feature point, p₂ is the coordinate of the second feature point, G is a target matrix, R is the attitude matrix, and F is determined according to the internal reference matrix;
S12, acquiring a predetermined number of matching points from the third image and the fourth image through feature extraction, and determining a homography matrix according to the predetermined number of matching points, wherein the predetermined number is an integer greater than or equal to 4; determining the focal length of the dome camera according to the homography matrix and the correspondence between the coordinates of the first feature point and the coordinates of the second feature point; and establishing a correspondence between magnification and focal length according to the focal lengths corresponding to different magnifications.
Specifically, the target matrix is determined according to the homography matrix in the following manner: G = C⁻¹HC, wherein H is the homography matrix, C is determined according to the internal reference matrix, and c_x, c_y represent the offset of the optical axis of the dome camera in the image coordinate system;
determining the focal length of the dome camera according to the target matrix and the attitude matrix in the following way:
wherein f is the focal length, G(i, j) and R(i, j) represent the element in the i-th row and j-th column of G and R respectively, and i and j are integers greater than 0.
S2022, determining a target focal length corresponding to the target magnification according to the corresponding relation between the magnification and the focal length.
The following provides a detailed description of embodiments of the invention.
According to the embodiment of the invention, the internal parameters of the camera can be accurately acquired when the PTZ dome camera is zoomed, and the focal length and the roll angle are optimized through the minimum reprojection error according to the rotating angle information of the dome camera, so that spliced images under different magnifications are obtained.
F_w and F_c represent the world coordinate system and the dome camera coordinate system respectively, and the two coordinate systems are linked by the camera internal and external reference matrices. When the PTZ dome camera is in a certain state, assume that α₁ is the horizontal rotation angle value, β₁ is the vertical pitch angle value, and K is the camera internal reference matrix, wherein:
f_x and f_y are respectively the normalized focal lengths on the x axis and the y axis in the image coordinate system; C_x and C_y are respectively the offsets of the optical axis of the camera in the image coordinate system, generally C_x = W/2 and C_y = H/2, where W is the width of the image and H is the height of the image.
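As a small sketch of K under these conventions (assuming a single focal length f for both axes; the helper name is illustrative):

```python
import numpy as np

def intrinsic_matrix(f, w, h):
    # K with normalized focal lengths f_x = f_y = f and the optical-axis
    # offset at the image centre: C_x = W / 2, C_y = H / 2.
    return np.array([[f,   0.0, w / 2.0],
                     [0.0, f,   h / 2.0],
                     [0.0, 0.0, 1.0]])
```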
Images are acquired by the dome camera under different magnifications to obtain a first image (corresponding to the third image above); the dome camera is then controlled to rotate by preset angles in the horizontal and vertical directions, recorded as α₂ and β₂, and an image is acquired again to obtain a second image (corresponding to the fourth image above). The images are shot with the dome camera at a position where the scene details in the field of view are rich, ensuring that the two adjacent images contain enough feature points and have a large overlapping area.
Feature points of the first image and the second image are extracted and matched using the ORB algorithm to obtain n pairs of matched feature points, and the correspondence between the coordinates of the matched feature points is obtained:
wherein p₁ and p₂ respectively represent a pair of matched feature points in the first image and the second image, the coordinate of the feature point in the three-dimensional world coordinate system is P_w, and s₁ and s₂ respectively represent the scale information of the feature point in the corresponding camera coordinate system. The matrices representing the rotations by the horizontal yaw angle and the vertical pitch angle are expressed as follows (one is shown; the other is obtained similarly):
Combining the above formulas and rearranging yields p₂ = KRK⁻¹p₁, wherein R is the camera extrinsic attitude matrix, which can be acquired directly from the angle information of the dome camera. The camera internal reference matrix K may be decomposed as follows:
where the expressions for matrices C and F are as follows:
can be arranged to obtain p2=CFRF-1C-1p1=CGC-1p1=Hp1。
The expressions of the homography matrix H and the matrix G are as follows:
H = CGC⁻¹, G = FRF⁻¹.
Converting this to matrix form, each pair of matching points yields two equations, so solving the 8-parameter matrix H requires at least 4 pairs of matching points, with the constraint that no three of the four pairs are collinear. Through the feature extraction and matching algorithm, four or more pairs of matching points can be extracted from two adjacent images; the homography matrix H can then be obtained using the least squares method, and G can be calculated by combining the definitions of the matrices F and G: G = C⁻¹HC.
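The least-squares estimation of H and the derivation of G can be sketched as follows; this is a minimal DLT-style solver with h₃₃ fixed to 1, and the function names are illustrative, not from the patent.

```python
import numpy as np

def fit_homography(p1, p2):
    # Each match (x, y) -> (u, v) contributes two linear equations in the
    # 8 unknown entries of H (h33 fixed to 1), so at least 4 pairs are
    # needed; extra pairs are handled by least squares.
    A, b = [], []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def g_from_h(H, C):
    # G = C^{-1} H C, combining the definitions of F and G above.
    return np.linalg.inv(C) @ H @ C
```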
Then using the matrix multiplication relationship:
determining the focal length of the ball machine to obtain fxAnd fyThe calculated value of (a):
wherein G(i, j) and R(i, j) (i = 1, 2, 3; j = 1, 2, 3) respectively represent the element in the i-th row and j-th column of the matrices G and R. The magnification of the dome camera is advanced step by step, the above method is repeated, the focal length under each magnification is calculated, and a lookup table is established. At larger magnifications, a quadratic function is fitted by the least squares method to the data of the lookup table:
f(x) = ax² + bx + c, obtaining the three coefficients a, b and c, from which the focal length value under each magnification is obtained.
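Since the explicit element-ratio formula is an image in the original, the sketch below shows one way the focal lengths fall out of G = FRF⁻¹ when F = diag(f_x, f_y, 1): the (1,3) and (2,3) entries of G equal the corresponding entries of R scaled by f_x and f_y. The helper name is illustrative.

```python
import numpy as np

def focal_from_G(G, R):
    # With F = diag(fx, fy, 1), G = F R F^{-1} gives
    # G(1,3) = fx * R(1,3) and G(2,3) = fy * R(2,3)  (1-based indices),
    # so the focal lengths are recovered as element ratios.
    fx = G[0, 2] / R[0, 2]
    fy = G[1, 2] / R[1, 2]
    return fx, fy

# Repeating this at each magnification step yields the lookup table, to
# which the quadratic f(x) = a*x**2 + b*x + c can then be fitted, e.g.:
#   a, b, c = np.polyfit(magnifications, focals, 2)
```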
Using the relationship between magnification and focal length in the lookup table, the angle Δα of each horizontal rotation of the dome camera under a certain magnification is set so that a certain overlapping area exists between two adjacent images; n images are obtained, and the width w_new of the stitched image is determined. The horizontal field angle θ and the vertical field angle ω of the dome camera are obtained from the focal length f:
where arctan denotes the arctangent, w denotes the width of the acquired image, h denotes the height of the acquired image, and the units are all pixels.
Calculate horizontal field of view ξ of the stitched image:
ξ = (n - 1) × Δα + θ.
The height h_new of the stitched image is calculated according to the spherical projection:
h_new = w_new ÷ ξ × ω.
The width and height of the stitched image are adjusted to be evenly divisible by 2 (divide by 2, take the integer part, then multiply by 2), namely:
w_new = [w_new ÷ 2] × 2, h_new = [h_new ÷ 2] × 2,
wherein "[ ]" denotes the floor symbol, i.e. [x] is the largest integer less than or equal to x.
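The canvas-size arithmetic above can be collected into one small sketch (angles in radians; the function name is illustrative):

```python
def stitched_canvas(n, delta_alpha, theta, omega, w_new):
    # xi = (n - 1) * delta_alpha + theta  (horizontal FOV of the panorama),
    # h_new = w_new / xi * omega, then both sides floored to even values.
    xi = (n - 1) * delta_alpha + theta
    h_new = w_new / xi * omega
    return int(w_new // 2) * 2, int(h_new // 2) * 2, xi
```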
Assume that the roll angle γ of the dome camera is 0 degrees; the vertical pitch angle β can be read from the SDK, and the horizontal yaw angle α is derived from the horizontal rotation angle of the camera and differs for each position: α(i) = (i - 1) × Δα for i = 1, 2, …, n, where α(i) represents the horizontal yaw angle at the i-th position. The roll angle attitude matrix R_roll, the pitch angle attitude matrix R_pitch and the yaw angle attitude matrix R_yaw are respectively as follows:
The product of the pitch angle attitude matrix R_pitch, the yaw angle attitude matrix R_yaw and the roll angle attitude matrix R_roll is taken as the attitude matrix R of the camera in a certain state, i.e. R = R_pitch × R_yaw × R_roll.
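The composition R = R_pitch × R_yaw × R_roll can be sketched as follows. Because the matrix images are not reproduced in this text, the axis and sign conventions below are assumptions; any consistent right-handed convention illustrates the composition.

```python
import numpy as np

def yaw_matrix(a):      # rotation about the vertical axis (horizontal yaw)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def pitch_matrix(b):    # rotation about the horizontal axis (vertical pitch)
    return np.array([[1, 0,          0         ],
                     [0, np.cos(b), -np.sin(b)],
                     [0, np.sin(b),  np.cos(b)]])

def roll_matrix(g):     # rotation about the optical axis
    return np.array([[np.cos(g), -np.sin(g), 0],
                     [np.sin(g),  np.cos(g), 0],
                     [0,          0,         1]])

def attitude(a, b, g=0.0):
    # R = R_pitch x R_yaw x R_roll, as in the text (roll defaults to 0).
    return pitch_matrix(b) @ yaw_matrix(a) @ roll_matrix(g)
```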
Then, establishing a relation according to the two-dimensional coordinate point (X, Y) in the image and the world coordinate (X, Y, Z) corresponding to the two-dimensional coordinate point:
in the formula, R is the attitude matrix, and K is the camera internal reference matrix. Then, according to the forward projection formula in the spherical projection:
all points in the n images are projected onto the spherical surface. The coordinate values on the spherical surface of the upper-left corner coordinate point (0, 0) and the lower-right corner coordinate point (w-1, h-1) of each image are recorded; the projected coordinate of the lower-right corner of the i-th image and the projected coordinate of the upper-left corner of the (i+1)-th image are averaged to obtain the coordinate of the center of the overlapping area of the two adjacent images, and the center coordinates of the first image and the last image are the upper-left projected coordinate and the lower-right projected coordinate respectively. Then, according to the back projection formula in the spherical projection:
the coordinate values of the i-th image and the (i+1)-th image in the overlapping area are back-projected into each image by the above formula to obtain the pixel coordinates in the i-th and (i+1)-th images with the upper-left corner of each image as the origin; the pixel values of the two images at these two coordinates are obtained, and subtracting them and taking the absolute value gives the reprojection error of one pixel point. The reprojection errors of the remaining pixel points in the overlapping areas of all adjacent images are obtained in the same way and accumulated. The roll angle γ is then finely adjusted to minimize the reprojection error, and the roll angle at this moment is recorded as γ_best; similarly, the focal length f is finely adjusted to minimize the reprojection error, and the focal length at this moment is recorded as f_best.
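The fine-tuning of the roll angle and the focal length can be sketched as a simple coordinate-wise grid search over the accumulated reprojection error. The search spans, step counts and function names below are illustrative assumptions; the patent does not prescribe a particular optimizer.

```python
import numpy as np

def fine_tune(error_fn, f0, gamma0=0.0,
              f_span=50.0, g_span=0.02, steps=21):
    # error_fn(f, gamma) must return the accumulated absolute pixel
    # difference over all overlap samples.  First tune the roll angle at
    # the initial focal length, then tune the focal length at the best
    # roll angle (spans and step counts are illustrative).
    gammas = np.linspace(gamma0 - g_span, gamma0 + g_span, steps)
    gamma_best = min(gammas, key=lambda g: error_fn(f0, g))
    fs = np.linspace(f0 - f_span, f0 + f_span, steps)
    f_best = min(fs, key=lambda f: error_fn(f, gamma_best))
    return f_best, gamma_best
```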
Using the optimized roll angle γ_best and focal length f_best, the attitude matrix R and the internal reference matrix K are obtained, each image is forward-projected onto the spherical surface, the projection points on the spherical surface are then back-projected onto a plane, and the overlapping areas are processed with a multi-resolution fusion algorithm so that they transition smoothly, finally obtaining the stitched image.
The camera calibration method of the embodiment of the invention is a self-calibration method, i.e. calibration can be completed without a calibration target, the focal length of the camera can be obtained under different magnifications, and the applicability is wide. Based on the camera attitude, the attitude angle and the focal length are optimized through the minimum reprojection error; stitching does not require a SIFT-like feature extraction and matching method, the time consumption is short, and the method works normally in scenes with few textures.
Example 2
According to another embodiment of the present invention, there is also provided an image stitching processing apparatus, and fig. 3 is a block diagram of the image stitching processing apparatus according to the embodiment of the present invention, as shown in fig. 3, including:
the first determining module 32 is configured to adjust a rotation angle of the dome camera in the horizontal direction under a target magnification, acquire a plurality of images, and determine a target focal length corresponding to the target magnification, where an overlapping area exists between adjacent images in the plurality of images;
the projection module 34 is configured to project coordinate points of the plurality of images onto a spherical surface according to a pre-established correspondence between dome camera coordinates and world coordinates of the plurality of images, where the correspondence between the dome camera coordinates and the world coordinates of the plurality of images is determined according to the rotation angle;
a first back projection module 36, configured to back-project a part of pixels in the overlapping area on the spherical surface into the plurality of images, and determine a reprojection error between adjacent images after back projection;
the adjusting module 38 is configured to adjust the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and the second back projection module 310 is configured to back project the projection point on the spherical surface to a plane according to the adjusted focal length and the adjusted roll angle, so as to obtain a stitched image of the multiple images.
Optionally, the adjusting module 38 is further used for
adjusting the target focal length and the roll angle of the dome camera to obtain the adjusted focal length and the adjusted roll angle, so that the reprojection error is minimized.
Optionally, the projection module 34 includes:
the first establishing submodule is used for establishing the corresponding relation between the dome camera coordinates and the world coordinates of the images according to the rotation angle;
and the projection submodule is used for projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the coordinates of the dome camera of the images and the world coordinates.
Optionally, the first establishing sub-module includes:
the first determining unit is used for determining a horizontal yaw attitude matrix of the dome camera according to the rotating angle;
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a pitch angle attitude matrix and a roll angle attitude matrix of the dome camera, the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
the second determining unit is used for determining the attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix;
the first establishing unit is used for establishing the corresponding relation between the dome camera coordinates and the world coordinates of the images according to the attitude matrix and the internal reference matrix of the dome camera, wherein the internal reference matrix of the dome camera is determined according to the target focal length.
Optionally, the projection sub-module comprises:
the third determining unit is used for determining a horizontal field angle after image splicing according to the rotation angle and the horizontal field angle of the dome camera;
a fourth determining unit, configured to determine the width of the stitched image according to the widths of the multiple images;
and the projection unit is used for projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images, the width of the spliced images and the horizontal field angle of the spliced images.
Optionally, the first establishing unit is further configured to establish a correspondence relationship between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the reference matrix of the dome camera by:
wherein (X, Y, Z) is the world coordinate, (x, y) is the dome camera coordinate of the plurality of images, R is the attitude matrix, and K is the internal reference matrix of the dome camera;
the projection unit is further configured to project coordinate points of the plurality of images to a spherical surface according to a correspondence between dome coordinates and world coordinates of the plurality of images, a width of the stitched image, and a horizontal field angle of the stitched image, in the following manner:
wherein w_new is the width of the stitched image, ξ is the horizontal field angle of the stitched image, and u and v are the projection coordinate values of the dome camera coordinates of the plurality of images projected onto the spherical surface.
Optionally, the apparatus further comprises:
the recording module is used for recording projection coordinate values of the coordinate points at the upper left corner and the lower right corner of the plurality of images in the spherical surface;
a second determining module, configured to determine an average of a projection coordinate value of the lower-right corner coordinate point of a first image and a projection coordinate value of the upper-left corner coordinate point of a second image in the multiple images, and determine the average as a center coordinate of an overlapping area between two adjacent images, where the first image and the second image are the two adjacent images, and the center coordinates of the first image and the last image in the multiple images are the projection coordinate value of the upper-left corner coordinate point and the projection coordinate value of the lower-right corner coordinate point, respectively;
and the acquisition module is used for acquiring the partial pixel points in the overlapping area according to the central coordinate of the overlapping area.
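The overlap-center rule described by the modules above can be sketched as follows; `overlap_centers` and its argument layout are hypothetical names, assuming each image contributes the projected coordinates of its upper-left and lower-right corners.

```python
def overlap_centers(corners):
    """Centers of the overlap regions between consecutive images.

    `corners` is a list of (top_left, bottom_right) projection
    coordinate pairs, one per image, as produced by the spherical
    projection step.  Between image i and image i+1 the overlap
    center is taken as the average of image i's lower-right corner
    and image i+1's upper-left corner, matching the averaging rule
    described above.
    """
    centers = []
    for i in range(len(corners) - 1):
        (_, br), (tl, _) = corners[i], corners[i + 1]
        centers.append(((br[0] + tl[0]) / 2.0, (br[1] + tl[1]) / 2.0))
    return centers
```

Pixels for the reprojection-error check would then be sampled around each returned center.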
Optionally, the second back projection module 310 comprises:
the third determining submodule is used for determining an adjusted attitude matrix and an adjusted internal reference matrix by using the adjusted roll angle and the adjusted focal length;
and the back projection submodule is used for back projecting the projection points on the spherical surface into the plane according to the adjusted attitude matrix and the adjusted internal reference matrix to obtain a spliced image of the plurality of images.
Optionally, the back projection sub-module is further configured to back project the projection points on the spherical surface to the plane according to the adjusted attitude matrix and the adjusted internal reference matrix in the following manner, so as to obtain a stitched image of the multiple images:
wherein (X, Y, Z) is the world coordinate, R' is the adjusted attitude matrix, K' is the adjusted internal reference matrix, and u and v are the projection coordinate values of the coordinate points of the plurality of images after projection onto the spherical surface.
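A sketch of the back projection step follows. The patent's own formula is not reproduced in this text, so this assumes the standard pinhole relation [x, y, 1]^T ∝ K'R'[X, Y, Z]^T as the inverse of the spherical mapping; the function name and parameterization are illustrative.

```python
import math

def back_project(u, v, R_adj, K_adj, w_new, xi):
    """Back-project a spherical point (u, v) into the stitched plane.

    Hypothetical inverse of the spherical projection: (u, v) is
    converted back to a world ray (X, Y, Z), which is then mapped to
    plane coordinates with the adjusted attitude matrix R_adj and the
    adjusted internal reference matrix K_adj (assumed pinhole model).
    """
    scale = xi / w_new
    theta, phi = u * scale, v * scale
    # World ray from spherical longitude/latitude.
    ray = (math.sin(theta) * math.cos(phi),
           math.sin(phi),
           math.cos(theta) * math.cos(phi))
    # p = K_adj @ R_adj @ ray, then dehomogenize.
    rot = [sum(R_adj[i][k] * ray[k] for k in range(3)) for i in range(3)]
    p = [sum(K_adj[i][k] * rot[k] for k in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]
```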
Optionally, the first determining module 32 includes:
the second establishing sub-module is used for establishing the corresponding relation between the multiplying power and the focal length of the dome camera;
and the fourth determining submodule is used for determining the target focal length corresponding to the target magnification according to the corresponding relation between the magnification and the focal length.
Optionally, the second establishing sub-module includes:
the acquisition unit is used for acquiring a third image through the dome camera at different magnifications, and acquiring a fourth image after controlling the dome camera to rotate by a preset angle in the horizontal direction and the vertical direction, wherein the third image and the fourth image have an overlapping area;
a second acquiring unit, configured to acquire a first feature point from the third image, and acquire a second feature point from the fourth image;
a fifth determining unit, configured to determine, according to an internal reference matrix of the dome camera and a posture matrix of the dome camera, a correspondence between the coordinates of the first feature point and the coordinates of the second feature point;
a sixth determining unit, configured to obtain a predetermined number of matching points from the third image and the fourth image through feature extraction, and determine a homography matrix according to the predetermined number of matching points, where the predetermined number is an integer greater than or equal to 4;
a seventh determining unit, configured to determine the focal length of the dome camera according to the homography matrix and the correspondence between the coordinates of the first feature point and the coordinates of the second feature point;
and the second establishing unit is used for establishing the corresponding relation between the multiplying power and the focal length according to the focal lengths corresponding to different multiplying powers.
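Once a focal length has been estimated at each calibrated magnification, the magnification-to-focal-length correspondence can be realised, for example, as an interpolating lookup. The helper below is a hypothetical sketch (the patent does not specify interpolation), assuming `samples` maps calibrated magnifications to estimated focal lengths.

```python
def build_zoom_table(samples):
    """Magnification-to-focal-length correspondence (illustrative).

    `samples` maps each calibrated magnification to the focal length
    estimated for it.  The returned lookup linearly interpolates
    between calibrated magnifications and clamps outside the
    calibrated range; this is one simple way to realise the
    correspondence between magnification and focal length.
    """
    mags = sorted(samples)

    def focal_for(mag):
        if mag <= mags[0]:
            return samples[mags[0]]
        if mag >= mags[-1]:
            return samples[mags[-1]]
        for lo, hi in zip(mags, mags[1:]):
            if lo <= mag <= hi:
                t = (mag - lo) / (hi - lo)
                return samples[lo] + t * (samples[hi] - samples[lo])

    return focal_for
```

At stitching time, the target focal length for a given target magnification is then just `focal_for(target_magnification)`.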
Optionally, the fifth determining unit is further configured to:
determining a horizontal yaw angle attitude matrix of the dome camera and a pitch angle attitude matrix of the dome camera according to the preset angles by which the dome camera rotates in the horizontal direction and the vertical direction, respectively;
determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix, wherein the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera in the following way:
wherein p1 is the coordinate of the first feature point, p2 is the coordinate of the second feature point, G is a target matrix, R is the attitude matrix, and F is determined according to the internal reference matrix;
optionally, the seventh determining unit is further configured to:
determining the target matrix from the homography matrix by G = CHC⁻¹, wherein H is the homography matrix, C is determined according to the internal reference matrix, and cx and cy represent the offsets of the optical axis of the dome camera in the image coordinate system;
determining the focal length of the dome camera according to the target matrix and the attitude matrix in the following way:
wherein f is the focal length, G(i, j) and R(i, j) represent the element in the ith row and jth column of G and R respectively, and i and j are integers greater than 0.
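The focal-length recovery above can be sketched as follows. The patent's exact formula for f is not reproduced in this text, so this assumes one plausible convention: with K = C·diag(f, f, 1) and a pure-rotation homography H = K R K⁻¹, the target matrix satisfies G = F R F⁻¹ with F = diag(f, f, 1), so third-column entries of G are f times those of R and third-row entries are 1/f times those of R. The function name and the averaging of estimates are assumptions.

```python
def focal_from_target_matrix(G, R, eps=1e-9):
    """Estimate the focal length f from target matrix G and attitude R.

    Illustrative sketch: under G = F R F^{-1} with F = diag(f, f, 1),
    G(i,3) = f * R(i,3) for i = 1, 2 and G(3,j) = R(3,j) / f for
    j = 1, 2.  Every usable entry yields an estimate of f; the
    estimates are averaged.
    """
    estimates = []
    for i in (0, 1):                       # third column: G(i,3) = f * R(i,3)
        if abs(R[i][2]) > eps:
            estimates.append(G[i][2] / R[i][2])
    for j in (0, 1):                       # third row: G(3,j) = R(3,j) / f
        if abs(G[2][j]) > eps:
            estimates.append(R[2][j] / G[2][j])
    return sum(estimates) / len(estimates)
```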
It should be noted that the above modules may be implemented by software or by hardware. In the latter case, this may be done in, but is not limited to, the following manner: the modules are all located in the same processor; alternatively, the modules are located, in any combination, in different processors.
Example 3
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, adjusting the rotation angle of the dome camera in the horizontal direction under the target magnification, collecting a plurality of images, and determining the target focal length corresponding to the target magnification, wherein an overlapping area exists between adjacent images in the plurality of images;
s2, projecting coordinate points of the images to a spherical surface according to a pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
s3, back projecting partial pixels in the overlapping area on the spherical surface into the plurality of images, and determining the reprojection error between the adjacent images after back projection;
s4, adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and S5, back projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the multiple images.
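The adjustment in steps S4-S5 can be sketched as a search for the focal length and roll angle that minimise the reprojection error. The patent does not say which optimiser is used, so the grid search below is a hypothetical stand-in; `reproj_error` is a caller-supplied function computing the error between adjacent back-projected images for a candidate pair.

```python
def refine_roll_and_focal(reproj_error, f0, roll0, f_steps, roll_steps):
    """Pick the (focal length, roll angle) pair minimising the error.

    Illustrative sketch of step S4: candidate offsets from the initial
    target focal length f0 and roll angle roll0 are scored with the
    caller-supplied `reproj_error(f, roll)` function, and the pair with
    the smallest reprojection error is returned as the adjusted values.
    """
    best = (reproj_error(f0, roll0), f0, roll0)
    for df in f_steps:
        for droll in roll_steps:
            f, roll = f0 + df, roll0 + droll
            err = reproj_error(f, roll)
            if err < best[0]:
                best = (err, f, roll)
    _, f_adj, roll_adj = best
    return f_adj, roll_adj
```

Step S5 then back-projects the spherical points with the attitude and internal reference matrices rebuilt from the returned pair.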
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Example 4
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, adjusting the rotation angle of the dome camera in the horizontal direction under the target magnification, collecting a plurality of images, and determining the target focal length corresponding to the target magnification, wherein an overlapping area exists between adjacent images in the plurality of images;
s2, projecting coordinate points of the images to a spherical surface according to a pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
s3, back projecting partial pixels in the overlapping area on the spherical surface into the plurality of images, and determining the reprojection error between the adjacent images after back projection;
s4, adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and S5, back projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the multiple images.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (16)
1. An image stitching processing method is characterized by comprising the following steps:
adjusting the rotation angle of a dome camera in the horizontal direction under a target magnification, collecting a plurality of images, and determining a target focal length corresponding to the target magnification, wherein an overlapping area exists between adjacent images in the plurality of images;
projecting coordinate points of the images to a spherical surface according to a pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
back projecting part of pixels in the overlapped area on the spherical surface into the plurality of images, and determining a reprojection error between adjacent images after back projection;
adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and back projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the plurality of images.
2. The method of claim 1, wherein adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain the adjusted focal length and the adjusted roll angle comprises:
adjusting the target focal length and the roll angle of the dome camera to obtain the adjusted focal length and the adjusted roll angle such that the reprojection error is minimized.
3. The method of claim 1, wherein projecting the coordinate points of the plurality of images onto a spherical surface according to a pre-established correspondence between the dome camera coordinates and world coordinates of the plurality of images comprises:
establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the rotation angle;
and projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images.
4. The method of claim 3, wherein establishing correspondence between coordinate points of the plurality of images and world coordinates according to the rotation angle comprises:
determining a horizontal yaw angle attitude matrix of the dome camera according to the rotation angle;
acquiring a pitch angle attitude matrix and a roll angle attitude matrix of the dome camera, wherein the roll angle of the dome camera is set to 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix;
and establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the internal reference matrix of the dome camera, wherein the internal reference matrix of the dome camera is determined according to the target focal length.
5. The method of claim 4, wherein projecting the coordinate points of the plurality of images onto a spherical surface according to the correspondence of the dome camera coordinates and world coordinates of the plurality of images comprises:
determining a horizontal field angle after image splicing according to the rotation angle and the horizontal field angle of the dome camera;
determining the width of the spliced image according to the widths of the plurality of images;
and projecting the coordinate points of the images to a spherical surface according to the corresponding relation between the dome camera coordinates and the world coordinates of the images, the width of the spliced images and the horizontal field angle of the spliced images.
6. The method of claim 5, further comprising:
establishing a corresponding relation between the dome camera coordinates and world coordinates of the plurality of images according to the attitude matrix and the internal reference matrix of the dome camera in the following mode:
wherein (X, Y, Z) is the world coordinate, (x, y) is the dome camera coordinate of the plurality of images, R is the attitude matrix, and K is the internal reference matrix of the dome camera;
projecting the coordinate points of the plurality of images to a spherical surface according to the correspondence between the dome camera coordinates and world coordinates of the plurality of images, the width of the spliced image, and the horizontal field angle of the spliced image, in the following manner:
wherein w_new is the width of the spliced image, ξ is the horizontal field angle of the spliced image, and u and v are the projection coordinate values of the dome camera coordinates of the plurality of images after projection onto the spherical surface.
7. The method of claim 1, wherein before back projecting part of the pixels within the overlapping area on the spherical surface into the plurality of images and determining the reprojection error between adjacent images after back projection, the method further comprises:
recording projection coordinate values of the upper left corner coordinate points and the lower right corner coordinate points of the plurality of images in the spherical surface;
determining an average value of the projection coordinate value of the lower-right corner coordinate point of a first image and the projection coordinate value of the upper-left corner coordinate point of a second image in the plurality of images, and determining the average value as the center coordinate of the overlapping area of two adjacent images, wherein the first image and the second image are the two adjacent images, and the center coordinates of the first image and the last image in the plurality of images are the projection coordinate value of the upper-left corner coordinate point and the projection coordinate value of the lower-right corner coordinate point, respectively;
and acquiring the partial pixel points in the overlapping area according to the central coordinate of the overlapping area.
8. The method of claim 1, wherein back-projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a stitched image of the plurality of images comprises:
determining an adjusted attitude matrix and an adjusted internal reference matrix by using the adjusted roll angle and the adjusted focal length;
and reversely projecting the projection points on the spherical surface to the plane according to the adjusted attitude matrix and the adjusted internal reference matrix to obtain a spliced image of the plurality of images.
9. The method of claim 8, further comprising:
back projecting the projection points on the spherical surface to the plane according to the adjusted attitude matrix and the adjusted internal reference matrix in the following manner to obtain a spliced image of the plurality of images:
wherein (X, Y, Z) is the world coordinate, R' is the adjusted attitude matrix, K' is the adjusted internal reference matrix, and u and v are the projection coordinate values of the coordinate points of the plurality of images after projection onto the spherical surface.
10. The method according to any one of claims 1 to 7, wherein determining the target focal length corresponding to the target magnification comprises:
establishing a corresponding relation between the multiplying power and the focal length of the dome camera;
and determining a target focal length corresponding to the target magnification according to the corresponding relation between the magnification and the focal length.
11. The method of claim 10, wherein establishing a correspondence of a magnification of the dome camera to a focal length comprises:
acquiring a third image through the dome camera at different magnifications, and acquiring a fourth image after controlling the dome camera to rotate by a preset angle in the horizontal direction and the vertical direction, wherein the third image and the fourth image have an overlapping area;
acquiring a first characteristic point from the third image, and acquiring a second characteristic point from the fourth image;
determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera;
acquiring a preset number of matching points from the third image and the fourth image through feature extraction, and determining a homography matrix according to the preset number of matching points, wherein the preset number is an integer greater than or equal to 4;
determining the focal length of the dome camera according to the homography matrix and the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point;
and establishing a corresponding relation between the magnification and the focal length according to the focal lengths corresponding to different magnifications.
12. The method of claim 11, wherein determining the correspondence of the coordinates of the first feature point and the coordinates of the second feature point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera comprises:
determining a horizontal yaw angle attitude matrix of the dome camera and a pitch angle attitude matrix of the dome camera according to the preset angles by which the dome camera rotates in the horizontal direction and the vertical direction, respectively;
determining an attitude matrix of the dome camera according to the horizontal yaw angle attitude matrix, the pitch angle attitude matrix and the roll angle attitude matrix, wherein the roll angle of the dome camera is set to be 0, and the roll angle attitude matrix is determined according to the roll angle of the dome camera;
determining the corresponding relation between the coordinates of the first characteristic point and the coordinates of the second characteristic point according to the internal reference matrix of the dome camera and the attitude matrix of the dome camera in the following way:
13. The method of claim 12, wherein determining the focal length of the dome camera according to the homography matrix and the correspondence between the coordinates of the first feature point and the coordinates of the second feature point comprises:
determining the target matrix from the homography matrix by G = CHC⁻¹, wherein H is the homography matrix, C is determined according to the internal reference matrix, and cx and cy represent the offsets of the optical axis of the dome camera in the image coordinate system;
determining the focal length of the dome camera according to the target matrix and the attitude matrix in the following way:
wherein f is the focal length, G(i, j) and R(i, j) represent the element in the ith row and jth column of G and R respectively, and i and j are integers greater than 0.
14. An image stitching processing apparatus, characterized by comprising:
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for adjusting the rotation angle of the dome camera in the horizontal direction under the target magnification, acquiring a plurality of images and determining the target focal length corresponding to the target magnification, and a superposition area exists between adjacent images in the plurality of images;
the projection module is used for projecting coordinate points of the images to a spherical surface according to the pre-established corresponding relation between the dome camera coordinates and the world coordinates of the images, wherein the corresponding relation between the dome camera coordinates and the world coordinates of the images is determined according to the rotation angle;
the first back projection module is used for back projecting partial pixels in the overlapping area on the spherical surface to the plurality of images and determining a reprojection error between adjacent images after back projection;
the adjusting module is used for adjusting the target focal length and the roll angle of the dome camera according to the reprojection error to obtain an adjusted focal length and an adjusted roll angle;
and the second back projection module is used for back projecting the projection points on the spherical surface into a plane according to the adjusted focal length and the adjusted roll angle to obtain a spliced image of the plurality of images.
15. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 13 when executed.
16. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010307821.5A CN111507894B (en) | 2020-04-17 | 2020-04-17 | Image stitching processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111507894A true CN111507894A (en) | 2020-08-07 |
CN111507894B CN111507894B (en) | 2023-06-13 |
Family
ID=71864744
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010307821.5A Active CN111507894B (en) | 2020-04-17 | 2020-04-17 | Image stitching processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111507894B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222878A (en) * | 2021-06-04 | 2021-08-06 | 杭州海康威视数字技术股份有限公司 | Image splicing method |
CN117541469A (en) * | 2024-01-10 | 2024-02-09 | 中山大学 | SAR image stitching method and device based on graph theory |
CN117876222A (en) * | 2024-03-12 | 2024-04-12 | 昆明理工大学 | Unmanned aerial vehicle image stitching method under weak texture lake water surface scene |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104200523A (en) * | 2014-09-11 | 2014-12-10 | 中国科学院自动化研究所 | Large-scale scene three-dimensional reconstruction method for fusion of additional information |
CN105550995A (en) * | 2016-01-27 | 2016-05-04 | 武汉武大卓越科技有限责任公司 | Tunnel image splicing method and system |
CN106534670A (en) * | 2016-10-25 | 2017-03-22 | 成都通甲优博科技有限责任公司 | Panoramic video generating method based on fixedly connected fisheye lens camera unit |
CN106683045A (en) * | 2016-09-28 | 2017-05-17 | 深圳市优象计算技术有限公司 | Binocular camera-based panoramic image splicing method |
US20170347030A1 (en) * | 2015-02-16 | 2017-11-30 | Applications Solutions (Electronic and Vision) Ltd | Method and device for stabilization of a surround view image |
CN107424118A (en) * | 2017-03-28 | 2017-12-01 | 天津大学 | Based on the spherical panorama mosaic method for improving Lens Distortion Correction |
CN108109111A (en) * | 2018-01-12 | 2018-06-01 | 深圳市粒视界科技有限公司 | Pass through the method for the more fish eye lens panorama cameras of software and hardware combining assembly and adjustment |
CN108122191A (en) * | 2016-11-29 | 2018-06-05 | 成都观界创宇科技有限公司 | Fish eye images are spliced into the method and device of panoramic picture and panoramic video |
CN108364252A (en) * | 2018-01-12 | 2018-08-03 | 深圳市粒视界科技有限公司 | A kind of correction of more fish eye lens panorama cameras and scaling method |
CN108846796A (en) * | 2018-06-22 | 2018-11-20 | 北京航空航天大学青岛研究院 | Image split-joint method and electronic equipment |
CN109064404A (en) * | 2018-08-10 | 2018-12-21 | 西安电子科技大学 | It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system |
CN109272445A (en) * | 2018-10-29 | 2019-01-25 | 中国航空无线电电子研究所 | Panoramic video joining method based on Sphere Measurement Model |
US20200092471A1 (en) * | 2017-03-01 | 2020-03-19 | Peking University Shenzhen Graduate School | Panoramic image mapping method |
Non-Patent Citations (1)
Title |
---|
WU Zejun; WU Qingyang; ZHANG Baichun: "A new fisheye lens calibration method based on a spherical model" * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222878A (en) * | 2021-06-04 | 2021-08-06 | 杭州海康威视数字技术股份有限公司 | Image splicing method |
CN113222878B (en) * | 2021-06-04 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | Image stitching method |
CN117541469A (en) * | 2024-01-10 | 2024-02-09 | 中山大学 | SAR image stitching method and device based on graph theory |
CN117541469B (en) * | 2024-01-10 | 2024-05-10 | 中山大学 | SAR image stitching method and device based on graph theory |
CN117876222A (en) * | 2024-03-12 | 2024-04-12 | 昆明理工大学 | Unmanned aerial vehicle image stitching method under weak texture lake water surface scene |
CN117876222B (en) * | 2024-03-12 | 2024-06-11 | 昆明理工大学 | Unmanned aerial vehicle image stitching method under weak texture lake water surface scene |
Also Published As
Publication number | Publication date |
---|---|
CN111507894B (en) | 2023-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111145238B (en) | Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment | |
CN109064404A (en) | It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system | |
CN111507894B (en) | Image stitching processing method and device | |
CN108288292A (en) | A kind of three-dimensional rebuilding method, device and equipment | |
US10726580B2 (en) | Method and device for calibration | |
CN111754579B (en) | Method and device for determining external parameters of multi-view camera | |
CN111292413A (en) | Image model processing method and device, storage medium and electronic device | |
CN113140036B (en) | Three-dimensional modeling method, device, equipment and storage medium | |
CN111028155A (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
CN111815715B (en) | Calibration method and device of zoom pan-tilt camera and storage medium | |
CN108430032A (en) | A kind of method and apparatus for realizing that VR/AR device locations are shared | |
CN115965697A (en) | Projector calibration method, calibration system and device based on Samm's law | |
CN113516719B (en) | Camera calibration method, system and storage medium based on multiple homography matrixes | |
CN111445513A (en) | Plant canopy volume obtaining method and device based on depth image, computer equipment and storage medium | |
CN112950528A (en) | Certificate posture determining method, model training method, device, server and medium | |
CN113077524B (en) | Automatic calibration method, device and equipment for binocular fisheye camera and storage medium | |
CN112419424B (en) | Gun-ball linkage calibration method and device and related equipment | |
CN117522963A (en) | Corner positioning method and device of checkerboard, storage medium and electronic equipment | |
CN117437357A (en) | Model construction method and device, nonvolatile storage medium and electronic equipment | |
CN114693782A (en) | Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system | |
CN111353945B (en) | Fisheye image correction method, device and storage medium | |
CN105488764A (en) | Fisheye image correction method and apparatus | |
CN112308783A (en) | Rolling effect correction method and device and computer readable storage medium | |
CN111914856B (en) | Layout method, device and system for plate excess material, electronic equipment and storage medium | |
CN115567781A (en) | Shooting method and device based on smart camera and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||