CN112233185B - Camera calibration method, image registration method, image pickup device and storage device - Google Patents


Info

Publication number
CN112233185B
CN112233185B
Authority
CN
China
Prior art keywords
camera
preset
imaging
parameters
positions
Prior art date
Legal status
Active
Application number
CN202011019202.2A
Other languages
Chinese (zh)
Other versions
CN112233185A (en)
Inventor
王子彤
王廷鸟
刘晓沐
王松
张东
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202011019202.2A priority Critical patent/CN112233185B/en
Publication of CN112233185A publication Critical patent/CN112233185A/en
Application granted granted Critical
Publication of CN112233185B publication Critical patent/CN112233185B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a camera calibration method, an image registration method, an image pickup device, and a storage device. The camera calibration method includes: acquiring the spatial positions of a plurality of preset points within a preset field-of-view range, the preset field-of-view range being the common field of view of a first camera and a second camera; obtaining first imaging positions at which the preset points project onto the first camera, using the camera parameters of the first camera and the spatial positions; obtaining second imaging positions at which the preset points project onto the second camera, using the camera parameters of the second camera and the spatial positions; and determining imaging conversion parameters between the first camera and the second camera based on the first and second imaging positions of the preset points. With this scheme, the image registration effect can be improved.

Description

Camera calibration method, image registration method, image pickup device and storage device
Technical Field
The present application relates to the field of optical technologies, and in particular, to a camera calibration method, an image registration method, an imaging device, and a storage device.
Background
Image registration is one of the most important techniques in image stitching, image fusion, stereoscopic vision, three-dimensional reconstruction, depth estimation, and image measurement, and has been widely used in image processing. Image registration generally refers to aligning images captured by different cameras into a fully coincident state by using the imaging transformation parameters between the cameras.
Accurate image registration is an important precondition for the accurate implementation of applications such as image stitching and image fusion. Existing registration methods often depend on the camera's scene containing sufficiently salient features, which serve as the objects to be extracted or selected. Consequently, for scenes whose features are not salient enough, existing registration methods generally register poorly. In view of this, how to improve the image registration effect is a problem to be solved.
Disclosure of Invention
The application mainly solves the technical problem of providing a camera calibration method, an image registration method, an image pickup device, and a storage device that can improve the image registration effect.
In order to solve the above problem, a first aspect of the present application provides a camera calibration method, including: acquiring the spatial positions of a plurality of preset points within a preset field-of-view range, the preset field-of-view range being the common field of view of a first camera and a second camera; obtaining first imaging positions at which the preset points project onto the first camera, using the camera parameters of the first camera and the spatial positions; obtaining second imaging positions at which the preset points project onto the second camera, using the camera parameters of the second camera and the spatial positions; and determining imaging conversion parameters between the first camera and the second camera based on the first and second imaging positions of the preset points.
In order to solve the above problems, a second aspect of the present application provides an image registration method, including: acquiring imaging conversion parameters between the first camera and the second camera; wherein the imaging conversion parameter is obtained by the camera calibration method in the first aspect; and registering the first image shot by the first camera with the second image shot by the second camera by using the imaging conversion parameters.
In order to solve the above-described problems, a third aspect of the present application provides an image pickup device including a memory and a processor coupled to each other, the memory storing program instructions, the processor being configured to execute the program instructions to implement the camera calibration method in the first aspect described above.
In order to solve the above-described problems, a fourth aspect of the present application provides a storage device storing program instructions executable by a processor for use in the camera calibration method in the above-described first aspect.
According to the above scheme, the spatial positions of a plurality of preset points within a preset field-of-view range are acquired, where the preset field-of-view range is the common field of view of the first camera and the second camera. The camera parameters of the first camera and the spatial positions are used to obtain the first imaging positions at which the preset points project onto the first camera, and the camera parameters of the second camera and the spatial positions are used to obtain the second imaging positions at which the preset points project onto the second camera. The imaging conversion parameters between the first camera and the second camera are then determined based on the first and second imaging positions of the preset points. In this way, the imaging conversion parameters between the cameras can be obtained from only the cameras' parameters and the spatial positions of the preset points, without extracting any scene features; this improves the accuracy of the imaging conversion parameters and thereby improves the effect of subsequent image registration performed with them.
Drawings
FIG. 1 is a flow chart of an embodiment of a camera calibration method according to the present application;
FIG. 2 is a schematic view of a state of an embodiment of a spatial location of a preset point;
FIG. 3 is a flow chart of another embodiment of the camera calibration method of the present application;
FIG. 4 is a flow chart of an embodiment of an image registration method of the present application;
FIG. 5 is a schematic diagram of a camera calibration apparatus according to an embodiment of the present application;
FIG. 6 is a schematic frame diagram of an embodiment of an image registration apparatus of the present application;
FIG. 7 is a schematic view of a frame of an embodiment of an image pickup device of the present application;
FIG. 8 is a schematic diagram of a frame of an embodiment of a storage device of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B both exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a camera calibration method according to the present application. Specifically, the method may include the steps of:
Step S11: acquiring the spatial positions of a plurality of preset points located within the preset field-of-view range.
In the embodiment of the disclosure, the preset field-of-view range is the common field of view of the first camera and the second camera; that is, the plurality of preset points are located within the field of view of the first camera and can be captured by the first camera, and are also located within the field of view of the second camera and can be captured by the second camera.
In one implementation scenario, the first camera and the second camera may be integrated in the same camera device. For example, the first camera and the second camera may be arranged up and down in the image capturing device, or the first camera and the second camera may be arranged left and right in the image capturing device, and may specifically be set according to actual application needs, which is not limited herein.
In another implementation scenario, the types of the first camera and the second camera may be set according to actual application needs. For example, the first camera may be a wide-angle camera and the second camera a telephoto camera; or both may be wide-angle cameras; or both may be telephoto cameras. This is not limited herein.
In yet another implementation scenario, the number of preset points may be set according to actual application needs. Specifically, the number of preset points may be greater than or equal to 4, for example 4, 5, 6, 7, and so on; examples are not exhaustively listed herein.
The spatial position of a preset point is its three-dimensional coordinates; for convenience of description, the spatial position of the i-th preset point K_i may be denoted (x_i, y_i, z_i). In one implementation scenario, please refer to fig. 2, which is a schematic state diagram of an embodiment of the spatial positions of the preset points. As shown in fig. 2, the preset points all lie on a preset plane within the preset field of view, and the preset plane is perpendicular to the optical axis of the image pickup device. The optical axis of the image pickup device may be the optical axis of the first camera (dotted line A in the figure) or the optical axis of the second camera (dotted line B in the figure); when the optical axis of the first camera is not parallel to that of the second camera, it may also be the bisector of the angle between the two optical axes, which may be set according to practical application needs without limitation. In a specific implementation scenario, the distance from the image pickup device to the preset plane may be denoted d, so the spatial position of the i-th preset point may be written (x_i, y_i, d). Arranging the preset points on a preset plane perpendicular to the optical axis of the image pickup device in this way improves the accuracy of the subsequent calculation of the imaging conversion parameters.
In a specific implementation scenario, when the field-of-view ranges of the first camera and the second camera are known, their common field of view can be determined from the positional relationship between the two cameras, and several preset planes perpendicular to the optical axis of the image pickup device can be determined within it. A plurality of preset points can then be selected on one of these preset planes and their spatial positions obtained, so that the imaging conversion parameters of the first camera and the second camera can be calculated subsequently. Therefore, no scene features need to be extracted, no registration images need to be provided, and no complex pre-positioning work is required; it suffices to select several virtual points within the preset field-of-view range as the preset points, which is convenient to use and applicable to scenes whose features are not salient enough.
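The point-selection step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the grid layout, and the bounds of the common field of view at depth d are all assumptions for the example.

```python
import numpy as np

def make_preset_points(d, half_width, half_height, n_side=2):
    """Pick virtual preset points on the plane z = d (perpendicular to the
    optical axis), spread over an assumed common field of view whose extent
    at depth d is bounded by half_width and half_height."""
    xs = np.linspace(-half_width, half_width, n_side)
    ys = np.linspace(-half_height, half_height, n_side)
    # At least 4 points are needed later to solve for the 3x3 conversion matrix.
    return np.array([[x, y, d] for y in ys for x in xs], dtype=float)

points = make_preset_points(d=10.0, half_width=2.0, half_height=1.5)
# points has shape (4, 3); every point lies on the plane z = 10.
```

Because the points are virtual, no physical calibration target has to be placed in the scene.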
Step S12: obtaining the first imaging positions at which the plurality of preset points project onto the first camera, using the camera parameters of the first camera and the spatial positions.
In one implementation scenario, the camera parameters may specifically include internal element parameters, for example the lens x-axis focal length f_x and y-axis focal length f_y, as well as the x-axis optical center position c_x and y-axis optical center position c_y. In one specific implementation, the internal element parameters may be represented by the following matrix K:

K = [ f_x/s   0       c_x
      0       f_y/s   c_y
      0       0       1   ] …… (1)

In the above formula (1), s represents the pixel pitch of the camera's photosensitive device (sensor), so f_x/s and f_y/s are the focal lengths expressed in pixels.
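A short sketch of assembling the internal element parameters into the matrix of formula (1). The numeric values below (a 4 mm lens, 2 µm pixel pitch, a 1920x1080 sensor center) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy, s):
    """Internal element parameters as a 3x3 matrix, following formula (1):
    lens focal lengths fx, fy divided by the pixel pitch s give focal
    lengths in pixels; (cx, cy) is the optical center in pixels."""
    return np.array([
        [fx / s, 0.0,    cx],
        [0.0,    fy / s, cy],
        [0.0,    0.0,    1.0],
    ])

# Illustrative values: 4 mm focal length, 2 um pixel pitch -> 2000 px focal length.
K1 = intrinsic_matrix(fx=4e-3, fy=4e-3, cx=960.0, cy=540.0, s=2e-6)
```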
In one implementation scenario, the first camera may be used as the reference camera, so that the internal element parameters of the first camera and the spatial positions can be used to obtain the first imaging positions of the plurality of preset points.
In a specific implementation scenario, the spatial positions of the plurality of preset points may each be multiplied by the internal element parameters of the first camera to obtain the corresponding first imaging positions. Specifically, this can be expressed as:

X_1i = K_1 * P_i …… (2)

In the above formula (2), K_1 represents the internal element parameters of the first camera, P_i represents the spatial position of the i-th preset point, and X_1i represents the first imaging position at which the i-th preset point projects onto the first camera.
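The projection of formula (2) reduces to a matrix-vector product; dividing the homogeneous result by its last component yields pixel coordinates. The intrinsics and the sample point below are illustrative assumptions.

```python
import numpy as np

# Assumed illustrative intrinsics for the first (reference) camera.
K1 = np.array([[2000.0, 0.0, 960.0],
               [0.0, 2000.0, 540.0],
               [0.0, 0.0, 1.0]])

def project_reference(K, points):
    """Formula (2): X_1i = K1 * P_i for each preset point P_i.
    Returns homogeneous imaging positions, one row per point."""
    return (K @ points.T).T

P = np.array([[0.5, 0.25, 10.0]])    # one preset point on the plane z = d = 10
X1 = project_reference(K1, P)
u, v = X1[0, :2] / X1[0, 2]          # pixel coordinates after dividing by depth
```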
In another specific implementation scenario, the second camera may instead be used as the reference camera according to actual application needs, and steps similar to the above may be adopted to obtain the second imaging positions at which the plurality of preset points project onto the second camera, which is not described in detail herein.
Step S13: obtaining the second imaging positions at which the plurality of preset points project onto the second camera, using the camera parameters of the second camera and the spatial positions.
In one implementation scenario, the camera parameters may also include external position parameters, for example a rotation parameter R and a translation parameter T from the first camera to the second camera. Specifically, after or before the image pickup device is installed, its installation information can be obtained, and the external position parameters of the image pickup device can be derived from that installation information. Then, with the first camera as the reference camera, the internal element parameters and the external position parameters of the second camera can be used to obtain the second imaging positions of the plurality of preset points.
In a specific implementation scenario, the spatial positions of the plurality of preset points may each be transformed by the external position parameters and multiplied by the internal element parameters of the second camera to obtain the corresponding second imaging positions. Specifically, this can be expressed as:

X_2i = K_2 * (R * P_i + T) …… (3)

In the above formula (3), K_2 represents the internal element parameters of the second camera, R represents the rotation parameter, T represents the translation parameter, P_i represents the spatial position of the i-th preset point, and X_2i represents the second imaging position at which the i-th preset point projects onto the second camera.
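Formula (3) adds the extrinsics to the projection. The sketch below uses assumed illustrative values: an identity rotation and a small horizontal baseline between the two cameras.

```python
import numpy as np

# Illustrative second-camera intrinsics and first-to-second extrinsics.
K2 = np.array([[2000.0, 0.0, 960.0],
               [0.0, 2000.0, 540.0],
               [0.0, 0.0, 1.0]])
R = np.eye(3)                         # no relative rotation in this example
T = np.array([0.05, 0.0, 0.0])        # 5 cm baseline along x

def project_second(K, R, T, points):
    """Formula (3): X_2i = K2 * (R * P_i + T) for each preset point."""
    return (K @ (R @ points.T + T[:, None])).T

P = np.array([[0.5, 0.25, 10.0]])
X2 = project_second(K2, R, T, P)      # homogeneous imaging position
```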
In another specific implementation scenario, the installation information of the image pickup device may specifically include its installation angle. The installation angle can be used to determine a first included angle α between the optical axis of the image pickup device and a first preset plane, a second included angle β with a second preset plane, and a third included angle γ with a third preset plane, where any two of the first, second, and third preset planes are mutually perpendicular. The first included angle α, the second included angle β, and the third included angle γ can then be converted in a preset conversion mode to obtain the rotation parameter. Specifically, this can be expressed as:

R = R_z(γ) * R_y(β) * R_x(α) …… (4)

In the above formula (4), R_x(α) is the rotation by the roll angle, positive in the right-hand-helix direction (i.e., counterclockwise in the yz plane); R_y(β) is the rotation by the pitch angle, positive in the right-hand-helix direction (i.e., counterclockwise in the zx plane); and R_z(γ) is the rotation by the yaw angle, positive in the right-hand-helix direction (i.e., counterclockwise in the xy plane). The first preset plane may specifically be the ground, and the installation angle may specifically be the included angle between the optical axis of the image pickup device and the ground.
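The three elementary rotations and their composition can be sketched as below. The right-hand (counterclockwise) sign convention matches the text; the z-y-x composition order is an assumption for the example, since the patent's exact order is not shown here.

```python
import numpy as np

def rot_x(a):  # roll: counterclockwise in the yz plane (right-hand rule)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(b):  # pitch: counterclockwise in the zx plane
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(g):  # yaw: counterclockwise in the xy plane
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotation_from_angles(alpha, beta, gamma):
    """One possible composition for formula (4): R = Rz(gamma) Ry(beta) Rx(alpha)."""
    return rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)

R = rotation_from_angles(0.0, 0.0, 0.0)   # zero angles give the identity
```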
In another specific implementation scenario, when the installation angle is not 0, that is, when the optical axis of the image pickup device is not parallel to the ground, the spatial positions may first be updated using the rotation parameter, and the first and second imaging positions of the plurality of preset points then obtained from the updated spatial positions by the steps above. In particular, the rotation parameter may be multiplied by the spatial position to update it. In one specific implementation scenario, reference may be made to the following formula:

P_i = R * K_i …… (5)

In the above formula (5), R represents the rotation parameter, K_i represents the original spatial position, and P_i represents the updated spatial position.
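The update of formula (5) is a single matrix product per point; a brief sketch (function name and sample values assumed for illustration):

```python
import numpy as np

def update_positions(R, points):
    """Formula (5): P_i = R * K_i. Rotate each original spatial position K_i
    when the installation angle is non-zero, so later projection works on
    positions expressed relative to the camera's optical axis."""
    return (R @ points.T).T

R = np.eye(3)                          # installation angle of 0: identity rotation
K_pts = np.array([[0.5, 0.25, 10.0]])
P = update_positions(R, K_pts)         # unchanged for the identity case
```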
In still another specific implementation scenario, the second camera may instead be used as the reference camera according to actual application needs, with steps similar to the above: the internal element parameters and spatial positions of the second camera are used to obtain the second imaging positions of the plurality of preset points, and the internal element parameters and external position parameters of the first camera are used to obtain the first imaging positions. Reference may be made to the foregoing description; no further description is given here.
In one implementation scenario, the step S12 and the step S13 may be performed sequentially, for example, the step S12 is performed first, the step S13 is performed later, or the step S13 is performed first, and the step S12 is performed later. In another implementation scenario, the above step S12 and step S13 may also be performed simultaneously. The setting can be specifically performed according to actual application requirements, and is not limited herein.
Step S14: determining the imaging conversion parameters between the first camera and the second camera based on the first imaging positions and the second imaging positions of the plurality of preset points.
In one implementation scenario, an objective function relating the first and second imaging positions of the preset points to the imaging conversion parameter may be constructed and then solved in a preset solution manner to obtain the imaging conversion parameter. Specifically, for each preset point, the objective function can be expressed as:

X_2i = H * X_1i …… (6)

In the above formula (6), H represents the imaging conversion parameter, X_1i represents the first imaging position of the i-th preset point, and X_2i represents its second imaging position. Through the plurality of preset points, a system of equations in the imaging conversion parameter can be constructed and solved to obtain the value of each element of the imaging conversion parameter. For example, when the imaging conversion parameter is a matrix, the values of the elements of the matrix are obtained. The size of the matrix may be set according to practical application needs, for example 3×3, and is not limited herein.
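One common way to solve the system arising from formula (6) for a 3×3 H is the direct linear transform (DLT) over at least four point correspondences. This is a generic sketch of that technique, not the patent's stated solver; the sample correspondences below (a pure 10-pixel shift) are assumed for illustration.

```python
import numpy as np

def solve_homography(pts1, pts2):
    """Solve X_2i = H * X_1i (formula (6)) for the 3x3 matrix H from >= 4
    pixel correspondences via the standard DLT: stack two linear equations
    per point and take the null vector of the stacked system."""
    A = []
    for (u1, v1), (u2, v2) in zip(pts1, pts2):
        A.append([u1, v1, 1, 0, 0, 0, -u2 * u1, -u2 * v1, -u2])
        A.append([0, 0, 0, u1, v1, 1, -v2 * u1, -v2 * v1, -v2])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)           # right singular vector of smallest sigma
    return H / H[2, 2]                 # normalize the bottom-right entry to 1

# Four corners related by a pure shift of +10 pixels in u:
pts1 = [(0, 0), (100, 0), (100, 100), (0, 100)]
pts2 = [(10, 0), (110, 0), (110, 100), (10, 100)]
H = solve_homography(pts1, pts2)
```

With exact correspondences the null vector is recovered exactly (up to scale), so H reproduces the shift.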
In a specific implementation scenario, when more than two cameras are included, the imaging conversion parameters between any two of them may also be obtained by the steps described above. For example, when the scene includes three cameras, the imaging conversion parameters between the first and second cameras, between the second and third cameras, and between the first and third cameras may all be obtained. In addition, these cameras may be integrated in the same image pickup device, for example three cameras in one device or four cameras in one device, which may be set according to actual application needs and is not limited herein.
According to the above scheme, the spatial positions of a plurality of preset points within a preset field-of-view range are acquired, where the preset field-of-view range is the common field of view of the first camera and the second camera. The camera parameters of the first camera and the spatial positions are used to obtain the first imaging positions at which the preset points project onto the first camera, and the camera parameters of the second camera and the spatial positions are used to obtain the second imaging positions at which the preset points project onto the second camera. The imaging conversion parameters between the first camera and the second camera are then determined based on the first and second imaging positions of the preset points. In this way, the imaging conversion parameters between the cameras can be obtained from only the cameras' parameters and the spatial positions of the preset points, without extracting any scene features; this improves the accuracy of the imaging conversion parameters and thereby improves the effect of subsequent image registration performed with them.
Referring to fig. 3, fig. 3 is a flowchart illustrating a camera calibration method according to another embodiment of the application. The method specifically comprises the following steps:
Step S31: based on the imaging device mounting information, external positional parameters of the imaging device are obtained.
In the embodiment of the disclosure, the image pickup device is integrated with a first camera and a second camera. The mounting information may specifically include a mounting angle of the image pickup device.
Reference may be made specifically to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
Step S32: acquiring the spatial positions of a plurality of preset points located within the preset field-of-view range.
In the embodiment of the disclosure, the preset field of view range is a common field of view range of the first camera and the second camera.
Reference may be made specifically to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
In one implementation scenario, the step S31 and the step S32 may be performed sequentially, for example, the step S31 is performed first, the step S32 is performed later, or the step S32 is performed first, and the step S31 is performed later. In another implementation scenario, the above-mentioned step S31 and step S32 may also be performed simultaneously. The setting can be specifically performed according to actual application requirements, and is not limited herein.
Step S33: judging whether the installation angle is 0; if not, executing step S34, otherwise executing step S35.
Step S34: updating the spatial positions with the rotation parameter.
Reference may be made specifically to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
Step S35: taking the first camera as the reference camera, and obtaining the first imaging positions of the plurality of preset points using the internal element parameters and spatial positions of the first camera.
Reference may be made specifically to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
Step S36: obtaining the second imaging positions of the plurality of preset points using the internal element parameters and external position parameters of the second camera.
Reference may be made specifically to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
In one implementation scenario, the second camera may also be used as a reference camera, so that the internal element parameters and the spatial positions of the second camera may be used to obtain second imaging positions of a plurality of preset points, and the internal element parameters and the external position parameters of the first camera may be used to obtain first imaging positions of a plurality of preset points. The setting can be specifically performed according to actual application requirements, and is not limited herein.
Step S37: determining the imaging conversion parameters between the first camera and the second camera based on the first imaging positions and the second imaging positions of the plurality of preset points.
Reference may be made specifically to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
Different from the previous embodiment, the external position parameters of the image pickup device are obtained from its installation information, and the spatial positions of a plurality of preset points within the preset field-of-view range are acquired. Whether the installation angle is 0 is then judged: when it is not 0, the spatial positions are first updated using the rotation parameter; when it is 0, the first camera is directly used as the reference camera. The internal element parameters and spatial positions of the first camera are used to obtain the first imaging positions of the preset points, the internal element parameters and external position parameters of the second camera are used to obtain the second imaging positions, and the imaging conversion parameters between the first camera and the second camera are then determined based on the first and second imaging positions of the preset points. In this way, the imaging conversion parameters between the cameras can be obtained without extracting any scene features, which improves the accuracy of the imaging conversion parameters and thereby the effect of image registration performed with them.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of an image registration method according to the present application. Specifically, the method may include the steps of:
Step S41: acquiring the imaging conversion parameters between the first camera and the second camera.
In an embodiment of the disclosure, the imaging conversion parameter is obtained through the steps in any of the embodiments of the camera calibration method described above. Specifically, reference may be made to the relevant steps in the foregoing embodiments, which are not described herein.
In one implementation scenario, the first camera and the second camera may be integrated in the same camera device. For example, the first camera and the second camera may be arranged up and down in the image capturing device, or the first camera and the second camera may be arranged left and right in the image capturing device, and may specifically be set according to actual application needs, which is not limited herein.
Step S42: registering the first image shot by the first camera with the second image shot by the second camera using the imaging conversion parameters.
In one implementation scenario, when the first camera served as the reference camera while the imaging conversion parameters were obtained, the pixel coordinates of each pixel in the first image shot by the first camera can be multiplied by the imaging conversion parameters to obtain the pixel coordinates of the corresponding pixel in the second image shot by the second camera. Aligning each pixel in the first image with its corresponding pixel in the second image registers the first image with the second image.
In another implementation scenario, when the second camera served as the reference camera, the pixel coordinates of each pixel in the second image can likewise be multiplied by the imaging conversion parameters to obtain the pixel coordinates of the corresponding pixel in the first image, and aligning these corresponding pixels registers the second image with the first image.
In still another implementation scenario, after the first image and the second image are registered, the first image and the second image may be further subjected to a stitching process, a fusion process, or the like based on the registration result, which may specifically be set according to the actual application needs, and is not limited herein.
According to the above scheme, the imaging conversion parameters are obtained through the steps in the camera calibration method embodiments, so that the accuracy of the imaging conversion parameters can be improved, and the registration effect can be improved when the first image shot by the first camera and the second image shot by the second camera are registered based on the imaging conversion parameters.
Referring to fig. 5, fig. 5 is a schematic frame diagram of an embodiment of a camera calibration apparatus 50 according to the present application. The camera calibration device 50 comprises a position acquisition module 51, a first projection module 52, a second projection module 53 and a parameter determination module 54, wherein the position acquisition module 51 is used for acquiring the spatial positions of a plurality of preset points located in a preset view field range; the preset view field range is a common view field range of the first camera and the second camera; the first projection module 52 is configured to obtain a first imaging position of a plurality of preset points projected on the first camera by using the camera parameters and the spatial position of the first camera; the second projection module 53 is configured to obtain a second imaging position of the plurality of preset points projected on the second camera by using the camera parameters and the spatial position of the second camera; the parameter determining module 54 is configured to determine imaging conversion parameters between the first camera and the second camera based on the first imaging positions and the second imaging positions of the plurality of preset points.
According to the above scheme, the spatial positions of a plurality of preset points within a preset view field range are obtained, where the preset view field range is the common view field range of the first camera and the second camera. The camera parameters of the first camera and the spatial positions are used to obtain the first imaging positions of the preset points projected on the first camera, and the camera parameters of the second camera and the spatial positions are used to obtain the second imaging positions of the preset points projected on the second camera. The imaging conversion parameters between the first camera and the second camera are then determined based on the first imaging positions and the second imaging positions of the preset points. In this way, the imaging conversion parameters between the cameras can be obtained from the camera parameters of the cameras and the spatial positions of the preset points alone, without extracting any scene features, which can improve the accuracy of the imaging conversion parameters and thus the effect of subsequent image registration using them.
In some embodiments, the first camera and the second camera are integrated in the same camera device, the camera parameters include internal element parameters, and the first projection module 52 is specifically configured to use the first camera as a reference camera, and obtain first imaging positions of a plurality of preset points by using the internal element parameters and the spatial positions of the first camera.
Different from the foregoing embodiment, the first camera and the second camera are integrated in the same camera device, and the camera parameters include internal element parameters, so that the first camera is used as a reference camera, and the internal element parameters and the spatial positions of the first camera are utilized to obtain first imaging positions of a plurality of preset points, which is favorable for reducing the calculation amount of calibration and reducing the calibration complexity.
In some embodiments, the camera parameters further include external position parameters, the camera calibration device 50 further includes an external parameter obtaining module, configured to obtain external position parameters of the image capturing device based on the installation information of the image capturing device, and the second projection module 53 is specifically configured to obtain the second imaging positions of the plurality of preset points by using the internal element parameters and the external position parameters of the second camera.
Different from the embodiment, the camera parameters further comprise external position parameters, and the external position parameters of the image pickup device are obtained based on the installation information of the image pickup device, so that the internal element parameters and the external position parameters of the second camera are utilized to obtain second imaging positions of a plurality of preset points, which is beneficial to reducing the calculation amount of calibration and the complexity of calibration.
In some embodiments, the installation information includes an installation angle of the image capturing device, and the external position parameter includes a rotation parameter of the image capturing device. The external parameter obtaining module includes an included angle obtaining sub-module, configured to determine, by using the installation angle of the image capturing device, a first included angle between an optical axis of the image capturing device and a first preset plane, a second included angle between the optical axis and a second preset plane, and a third included angle between the optical axis and a third preset plane. The external parameter obtaining module further includes an included angle converting sub-module, configured to convert the first included angle, the second included angle and the third included angle by using a preset conversion mode to obtain the rotation parameter, wherein any two of the first preset plane, the second preset plane and the third preset plane are mutually perpendicular.
Different from the foregoing embodiment, the installation information includes the installation angle of the image pickup device, and the external position parameter includes the rotation parameter of the image pickup device. The installation angle is used to determine the first included angle between the optical axis of the image pickup device and the first preset plane, the second included angle between the optical axis and the second preset plane, and the third included angle between the optical axis and the third preset plane, and the three included angles are converted by using the preset conversion mode to obtain the rotation parameter, where any two of the first preset plane, the second preset plane and the third preset plane are mutually perpendicular. This can be beneficial to improving the accuracy of the rotation parameter.
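The patent leaves the "preset conversion mode" abstract. One common concrete choice is to treat the three included angles as Euler angles about three mutually perpendicular axes and compose elementary rotations; the axis convention and multiplication order below are assumptions, not taken from the patent:

```python
import numpy as np

def rotation_from_angles(a1, a2, a3):
    """Compose a rotation parameter (3x3 matrix) from three angles in radians,
    interpreted here as rotations about the x, y and z axes respectively.
    This is one plausible realization of the 'preset conversion mode'."""
    c1, s1 = np.cos(a1), np.sin(a1)
    c2, s2 = np.cos(a2), np.sin(a2)
    c3, s3 = np.cos(a3), np.sin(a3)
    Rx = np.array([[1, 0, 0], [0, c1, -s1], [0, s1, c1]])
    Ry = np.array([[c2, 0, s2], [0, 1, 0], [-s2, 0, c2]])
    Rz = np.array([[c3, -s3, 0], [s3, c3, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # composition order is a convention choice

# With all included angles zero the rotation parameter is the identity matrix,
# i.e. the optical axis is not tilted relative to the preset planes.
R = rotation_from_angles(0.0, 0.0, 0.0)
```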
In some embodiments, the first preset plane is the ground, and/or the camera calibration device 50 further includes a position update module for updating the spatial position with the rotation parameter when the installation angle is not 0.
Different from the previous embodiment, the first preset plane is set to be the ground, so that the calibration complexity can be reduced; when the installation angle is not 0, the space position is updated by using the rotation parameters, so that the calibration accuracy can be improved.
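The patent does not spell out the update formula. A minimal sketch, assuming the update simply rotates each preset point's spatial position by the rotation parameter obtained above:

```python
import numpy as np

def update_positions(points, R):
    """Rotate (N, 3) spatial positions of the preset points by the rotation
    parameter R, applied when the installation angle is not 0."""
    return points @ R.T

# Illustrative rotation parameter: a 90-degree rotation about the z axis
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
pts = np.array([[1.0, 0.0, 2.0]])
print(update_positions(pts, R))  # -> [[0. 1. 2.]]
```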
In some embodiments, the external position parameters further include translation parameters of the first camera to the second camera; the first projection module 52 is specifically configured to multiply the spatial positions of the plurality of preset points with the internal element parameters of the first camera to obtain corresponding first imaging positions, and the second projection module 53 is specifically configured to multiply the spatial positions of the plurality of preset points with the internal element parameters and the external position parameters of the second camera to obtain corresponding second imaging positions.
Different from the foregoing embodiments, the above arrangement can be beneficial to reducing the computational complexity of camera calibration.
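Under a standard pinhole model (an assumption; the patent only states that the spatial positions are multiplied by the internal element parameters, and additionally by the external position parameters for the second camera), the two projections can be sketched as:

```python
import numpy as np

def project_reference(K, X):
    """First (reference) camera: multiply (N, 3) spatial positions X by the
    internal element parameters K, then dehomogenize."""
    x = X @ K.T
    return x[:, :2] / x[:, 2:3]

def project_second(K, R, t, X):
    """Second camera: apply the external position parameters (rotation R,
    translation t) before the internal element parameters K."""
    x = (X @ R.T + t) @ K.T
    return x[:, :2] / x[:, 2:3]

# Illustrative internal element parameters (focal length and principal point)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
X = np.array([[0.0, 0.0, 2.0]])     # a preset point on the optical axis
print(project_reference(K, X))       # -> [[320. 240.]] (the principal point)
```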
In some embodiments, the plurality of preset points are each on a fourth preset plane within the preset field of view, and the fourth preset plane is perpendicular to the optical axis of the image capturing device.
Different from the foregoing embodiment, the plurality of preset points are all set on the fourth preset plane within the preset field of view, and the fourth preset plane is perpendicular to the optical axis of the image pickup device, which can be beneficial to improving the calibration accuracy.
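Since the preset points are virtual, they can be sampled freely on such a plane. An illustrative sketch (the grid pattern and extent are assumptions; the patent only requires the points to lie on a plane perpendicular to the optical axis):

```python
import numpy as np

def preset_points(depth, half_extent, n):
    """Generate an n x n grid of virtual preset points on the plane z = depth,
    i.e. on a plane perpendicular to an optical axis taken along z."""
    coords = np.linspace(-half_extent, half_extent, n)
    xs, ys = np.meshgrid(coords, coords)
    return np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, depth)], axis=1)

pts = preset_points(depth=2.0, half_extent=1.0, n=3)
# 9 preset points, all with z == 2.0
```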
In some embodiments, the parameter determination module 54 includes a function construction sub-module for constructing an objective function with respect to the imaging conversion parameters by using the first imaging positions and the second imaging positions of the preset points, and a function solution sub-module for solving the objective function in a preset mode to obtain the imaging conversion parameters.
Different from the foregoing embodiment, an objective function with respect to the imaging conversion parameters is constructed using the first imaging positions and the second imaging positions of the preset points, and the objective function is solved in a preset mode to obtain the imaging conversion parameters, which is beneficial to simplifying the calibration process and reducing the calibration complexity.
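The patent leaves both the objective function and the "preset mode" abstract. A common concrete choice is a linear least-squares (DLT-style) objective over the point pairs, solved via SVD; the sketch below is under that assumption and is not necessarily the patent's method:

```python
import numpy as np

def solve_conversion(p1, p2):
    """Estimate a 3x3 conversion matrix H with p2 ~ H * p1 from (N, 2) first
    and second imaging positions (N >= 4), by minimizing the algebraic DLT
    objective ||A h|| subject to ||h|| = 1 (the SVD null-space solution)."""
    A = []
    for (u, v), (x, y) in zip(p1, p2):
        A.append([-u, -v, -1, 0, 0, 0, x * u, x * v, x])
        A.append([0, 0, 0, -u, -v, -1, y * u, y * v, y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize scale

# Four imaging-position pairs related by a shift of (5, 0):
# the recovered conversion parameter is a pure translation.
p1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
p2 = p1 + np.array([5.0, 0.0])
H = solve_conversion(p1, p2)
```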
Referring to fig. 6, fig. 6 is a schematic frame diagram of an image registration apparatus 60 according to an embodiment of the application. The image registration device 60 comprises a parameter acquisition module 61 and an image registration module 62, wherein the parameter acquisition module 61 is used for acquiring imaging conversion parameters between the first camera and the second camera; the imaging conversion parameters are obtained by the camera calibration device in any one of the camera calibration device embodiments; the image registration module 62 is configured to register a first image captured by the first camera with a second image captured by the second camera using the imaging conversion parameters.
According to the scheme, the camera calibration device in any one of the camera calibration device embodiments is used for obtaining the imaging conversion parameters, so that the accuracy of the imaging conversion parameters can be improved, and the registration effect can be improved when the first image shot by the first camera is aligned with the second image shot by the second camera based on the imaging conversion parameters.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an image pickup device 70 according to an embodiment of the present application. The image pickup device 70 includes a memory 71 and a processor 72 coupled to each other, the memory 71 storing program instructions, the processor 72 being configured to execute the program instructions to implement the steps of any of the camera calibration method embodiments described above. In addition, the image pickup device 70 may be provided with a photosensitive element, an optical lens, or the like according to practical application requirements, and is not particularly limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps of any of the camera calibration method embodiments described above. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 72 may be jointly implemented by a plurality of integrated circuit chips.
According to the above scheme, the imaging conversion parameters between the cameras can be obtained from the camera parameters of the cameras and the spatial positions of the preset points alone, without extracting any scene features, which can improve the accuracy of the imaging conversion parameters and thus the effect of subsequent image registration using them.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a frame of a storage device 80 according to an embodiment of the application. The storage device 80 stores program instructions 801 that can be executed by the processor, the program instructions 801 being used to implement the steps of any of the camera calibration method embodiments described above.
According to the scheme, the accuracy of the imaging conversion parameters can be improved, and further the effect of carrying out image registration by using the imaging conversion parameters can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.

Claims (10)

1. A camera calibration method, comprising:
Acquiring spatial positions of a plurality of preset points in a preset view field range; the preset view field range is a common view field range of the first camera and the second camera, the preset point is a virtual point in the preset view field range, and the first camera and the second camera are integrated in the same camera device;
Taking the first camera as a reference camera, and multiplying the spatial positions of the preset points by the internal element parameters of the first camera respectively to obtain first imaging positions of the preset points on the first camera; and
Multiplying the spatial positions of the preset points with the internal element parameters of the second camera and the external position parameters of the image pickup device respectively to obtain second imaging positions of the preset points on the second camera; wherein the external position parameters are obtained based on mounting information of the image pickup device;
Determining imaging conversion parameters between the first camera and the second camera based on the first imaging positions and the second imaging positions of the preset points.
2. The method according to claim 1, wherein the mounting information includes a mounting angle of the image pickup device, and the external position parameter includes a rotation parameter of the image pickup device;
The step of obtaining the external position parameter includes:
Determining a first included angle between an optical axis of the image pickup device and a first preset plane, a second included angle between the optical axis of the image pickup device and a second preset plane and a third included angle between the optical axis of the image pickup device and a third preset plane by using the installation angle of the image pickup device;
converting the first included angle, the second included angle and the third included angle by using a preset conversion mode to obtain the rotation parameters;
Wherein any two of the first preset plane, the second preset plane and the third preset plane are mutually perpendicular.
3. The method of claim 2, wherein the first predetermined plane is a ground surface;
and/or, before obtaining the first imaging positions of the preset points projected on the first camera by using the camera parameters of the first camera and the spatial positions, the method further comprises:
And when the installation angle is not 0, updating the space position by using the rotation parameter.
4. The method of claim 1, wherein the external position parameters further comprise translation parameters of the first camera to the second camera.
5. The method of claim 1, wherein the plurality of preset points are each on a fourth preset plane within the preset field of view, and the fourth preset plane is perpendicular to an optical axis of the image capture device.
6. The method of claim 1, wherein the determining imaging conversion parameters between the first camera and the second camera based on the first imaging locations and the second imaging locations of the plurality of preset points comprises:
Constructing an objective function with respect to the imaging conversion parameters by using the first imaging positions and the second imaging positions of the preset points;
And solving the objective function by using a preset mode to obtain the imaging conversion parameters.
7. A method of image registration, comprising:
Acquiring imaging conversion parameters between the first camera and the second camera; wherein the imaging conversion parameter is obtained by the camera calibration method according to any one of claims 1 to 6;
And registering the first image shot by the first camera with the second image shot by the second camera by utilizing the imaging conversion parameters.
8. The method of claim 7, wherein the first camera and the second camera are integrated in the same camera device.
9. An image pickup device comprising a memory and a processor coupled to each other, the memory storing program instructions, the processor being configured to execute the program instructions to implement the camera calibration method of any one of claims 1 to 6 or to implement the image registration method of any one of claims 7 to 8.
10. A storage device storing program instructions executable by a processor for implementing the camera calibration method of any one of claims 1 to 6 or the image registration method of any one of claims 7 to 8.
CN202011019202.2A 2020-09-24 2020-09-24 Camera calibration method, image registration method, image pickup device and storage device Active CN112233185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011019202.2A CN112233185B (en) 2020-09-24 2020-09-24 Camera calibration method, image registration method, image pickup device and storage device

Publications (2)

Publication Number Publication Date
CN112233185A CN112233185A (en) 2021-01-15
CN112233185B true CN112233185B (en) 2024-06-11

Family

ID=74108025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011019202.2A Active CN112233185B (en) 2020-09-24 2020-09-24 Camera calibration method, image registration method, image pickup device and storage device

Country Status (1)

Country Link
CN (1) CN112233185B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770057A (en) * 2021-01-20 2021-05-07 北京地平线机器人技术研发有限公司 Camera parameter adjusting method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805910A (en) * 2018-06-01 2018-11-13 海信集团有限公司 More mesh Train-borne recorders, object detection method, intelligent driving system and automobile
WO2018233373A1 (en) * 2017-06-23 2018-12-27 华为技术有限公司 Image processing method and apparatus, and device
CN109118545A (en) * 2018-07-26 2019-01-01 深圳市易尚展示股份有限公司 3-D imaging system scaling method and system based on rotary shaft and binocular camera



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant